Author Topic: Geometry optimization with "jobex" fails during the relaxation step [SOLVED]  (Read 44708 times)

Matteo Guglielmi

  • Jr. Member
  • **
  • Posts: 20
  • Karma: +0/-0
It's a closed-shell RI-MP2 ground-state geometry optimization using the parallelized ricc2 module.

Here is how I define the control file and run jobex:

Code:
define
####################################

TRPH RUN04: RIMP2/def2-QZVPP
a coord
ired
*
b all def2-QZVPP
*
eht
y
1
y
scf
iter
100
ints
y
5000 /scratch/matteo/turboscr/twoint
*

cc2
cbas
*
memory 1800
tmpdir
/scratch/matteo/turboscr/ricc2
freeze
*
ricc2
maxiter 100
geoopt mp2 x
*
*
*
####################################
stati pdscf
dscf
jobex -level cc2 -c 100 > jobex.out &

Basically, jobex performs the dscf step correctly, but
the relax step gives the following error message:

Code:
------------------------------------------------------------------------------

     relaxation of NUCLEAR COORDINATES in delocalized coordinates

 ------------------------------------------------------------------------------


  max. nb. of iterations for internal --> cartesian  :   25
  convergence criterion for internal coordinates     :  0.10E-06
 reading data block $coord from file <coord>


 <getgrd> : data group $grad  is missing


 cannot find any information which may be used to optimize geometry ...


 MODTRACE: no modules on stack

  so long GRANAT !
 relax ended abnormally

Any help is appreciated.

PS: the "gradient" file is also empty!

« Last Edit: July 02, 2008, 03:29:59 PM by quantumwire »

antti_karttunen

  • Sr. Member
  • ****
  • Posts: 227
  • Karma: +1/-0
Re: Geometry optimization with "jobex" fails during the relaxation step
« Reply #1 on: July 01, 2008, 01:40:11 PM »
Hi,

considering that the gradient file is empty, the problem most likely occurs in the ricc2 module. What does ricc2 print in job.last or job.1?

christof.haettig

  • Global Moderator
  • Sr. Member
  • *****
  • Posts: 291
  • Karma: +0/-0
    • Hattig's Group at the RUB
Re: Geometry optimization with "jobex" fails during the relaxation step
« Reply #2 on: July 01, 2008, 06:59:54 PM »
Well, if what you list above is ALL you did to prepare your inputs, you can't use them for (RI-)MP2 calculations.
For conventional MP2 calculations with mpgrad you have to run mp2prep; for (RI-)MP2 calculations with rimp2 or ricc2 you have to specify the auxiliary basis sets. Please consult the documentation or the tutorial for a detailed description of how to do this.
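For example, an explicit assignment in define might look like this (a sketch following the session posted above; the exact menu prompts vary between TURBOMOLE versions, and def2-QZVPP is just the basis used in this thread):

Code:
define
...                     # geometry, orbital basis, and occupation as before
cc2                     # correlated-methods menu, as in the session above
cbas                    # auxiliary basis submenu
b all def2-QZVPP        # assign a matching auxiliary basis to all atoms
*                       # accept and leave the submenu
*                       # leave the cc2 menu

Afterwards the control file should contain a $cbas file=auxbasis group and per-atom cbas entries.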

Christof

Matteo Guglielmi

  • Jr. Member
  • **
  • Posts: 20
  • Karma: +0/-0
Re: Geometry optimization with "jobex" fails during the relaxation step
« Reply #3 on: July 01, 2008, 10:25:53 PM »
Well, if what you list above is ALL you did to prepare your inputs, you can't use them for (RI-)MP2 calculations...

So... the job.1 file says:

Code:
OPTIMIZATION CYCLE 1
Tue Jul  1 19:52:42 CEST 2008
error in relax step

the relax.tmpout files says:
Code:
------------------------------------------------------------------------------

     relaxation of NUCLEAR COORDINATES in delocalized coordinates

 ------------------------------------------------------------------------------


  max. nb. of iterations for internal --> cartesian  :   25
  convergence criterion for internal coordinates     :  0.10E-06
 reading data block $coord from file <coord>
 

 <getgrd> : data group $grad  is missing


 cannot find any information which may be used to optimize geometry ...

... and the gradient file is still empty.


BTW "Uwe" suggested me to prepare the input file in that way in order
to use the parallel version of ri-mp2 implemented in the ricc2_mpi module
without using mp2prep.

Here is the last control file prepared by 'jobex -level cc2 -c 100 > jobex.out &':

Code:
$title
TRPH RUN09: RIMP2/def2-QZVPP
$operating system unix
$symmetry c1
$redundant    file=coord
$coord    file=coord
$user-defined bonds    file=coord
$atoms
h  1-17                                                                        \
   basis =h def2-QZVPP                                                         \
   cbas  =h def2-QZVPP
n  18-19                                                                       \
   basis =n def2-QZVPP                                                         \
   cbas  =n def2-QZVPP
c  20-30                                                                       \
   basis =c def2-QZVPP                                                         \
   cbas  =c def2-QZVPP
o  31-34                                                                       \
   basis =o def2-QZVPP                                                         \
   cbas  =o def2-QZVPP
$basis    file=basis
$rundimensions
   dim(fock,dens)=1661002
   natoms=34
   nshell=459
   nbf(CAO)=1819
   nbf(AO)=1479
   dim(trafo[SAO<-->AO/CAO])=2516
   rhfshells=1
$scfmo   file=mos
$closed shells
 a       1-64                                   ( 2 )
$scfiterlimit      100
$scfconv        7
$thize     0.10000000E-04
$thime        5
$scfdamp   start=0.300  step=0.050  min=0.100
$scfdump
$scfintunit
 unit=30       size=5000    file=/scratch/matteo/turboscr/twoint
$scfdiis
$scforbitalshift  automatic=.1
$drvopt
   cartesian  on
   basis      off
   global     off
   hessian    on
   dipole     on
   nuclear polarizability
$interconversion  off
   qconv=1.d-7
   maxiter=25
$optimize
   internal   on
   redundant  on
   cartesian  off
   global     off
   basis      off   logarithm
$coordinateupdate
   dqmax=0.3
   interpolate  on
   statistics    5
$forceupdate
   ahlrichs numgeo=0  mingeo=3 maxgeo=4 modus=<g|dq> dynamic fail=0.3
   threig=0.005  reseig=0.005  thrbig=3.0  scale=1.00  damping=0.0
$forceinit on
   diag=default
$energy    file=energy
$grad    file=gradient
$forceapprox    file=forceapprox
$lock off
$maxcor     6000
$denconv     0.10000000E-06
$cbas    file=auxbasis
$tmpdir /scratch/matteo/turboscr/ricc2/
$freeze
 implicit core=   17 virt=    0
$ricc2
  mp2
  maxiter=  100
  geoopt model=mp2       state=(x)
$actual step      relax
$statistics  off
$parallel_parameters maxtask = 1000
$2e-ints_shell_statistics    file=metastase
$parallel_platform MPP
$numprocs  8
$orbital_max_rnorm 0.11996158672669E-04
$last SCF energy change = -835.02173
$dipole from dscf
  x     1.14628373514987    y     0.70407403258607    z    -2.27572187187357    a.u.
   | dipole | =    6.7194000807  debye
$SHAREDTMPDIR
$end
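Incidentally, whether the auxiliary basis is in place can be verified directly in the job directory (a quick check, assuming a POSIX shell):

Code:
grep cbas control       # should show the per-atom cbas lines and $cbas file=auxbasis
ls -l gradient energy   # gradient stays empty until ricc2 completes a cycle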

« Last Edit: July 01, 2008, 10:32:47 PM by quantumwire »

antti_karttunen

  • Sr. Member
  • ****
  • Posts: 227
  • Karma: +1/-0
Re: Geometry optimization with "jobex" fails during the relaxation step
« Reply #4 on: July 02, 2008, 07:28:09 AM »
What about job.last? What is the last thing ricc2 says?

Matteo Guglielmi

  • Jr. Member
  • **
  • Posts: 20
  • Karma: +0/-0
Re: Geometry optimization with "jobex" fails during the relaxation step
« Reply #5 on: July 02, 2008, 11:40:17 AM »
What about job.last? What is the last thing ricc2 says?

job.last:

Code:
           atomic orbital partitioning statistics
           ----------------------------------------------------------
           node       first shell       first bf         block length
           ----------------------------------------------------------
             1                1                1               180
             2              181              181               181
             3              246              362               183
             4              307              545               185
             5              344              730               185
             6              381              915               188
             7              411             1103               189
             8              438             1292               188
           ----------------------------------------------------------


 setting up bound for integral derivative estimation

 increment for numerical differentiation : 0.00050000

 biggest AO integral is expected to be     6.990327543
 biggest cartesian 1st derivative AO integral is expected to be    14.574905456

 cpu time for 2e-integral derivative bound :     35.63 sec


   threshold for RMS(d[D]) in SCF was     :  0.10E-06
   integral neglect threshold             :  0.18E-11
   derivative integral neglect threshold  :  0.10E-07



   67650 out of   105570 shell pairs give nonnegligible 2e- integrals.
   we have thus a screening ratio of  35.919 %
      distribution of integral shell pairs:
                 > 0.10E+01          249       0.236
       0.10E-01 -- 0.10E+01        22401      21.219
       0.10E-03 -- 0.10E-01        16562      15.688
       0.10E-05 -- 0.10E-03        10607      10.047
       0.10E-07 -- 0.10E-05         7600       7.199
       0.10E-09 -- 0.10E-07         5960       5.646
       0.10E-11 -- 0.10E-09         4842       4.587
       0.10E-13 -- 0.10E-11         3738       3.541
       0.10E-15 -- 0.10E-13        33611      31.838



 total memory allocated for calculation of (Q|P)**(-1/2) : 129 MiB


     calculation of (P|Q) ...
fine, there is no data group "$actual step"
next step = ricc2

uwe

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 558
  • Karma: +0/-0
Re: Geometry optimization with "jobex" fails during the relaxation step
« Reply #6 on: July 02, 2008, 12:07:48 PM »
Hi,

what about your user limits? Could you check what

ulimit -a

for bash/ksh/sh, or

limit

for csh/tcsh

says?

Uwe

antti_karttunen

  • Sr. Member
  • ****
  • Posts: 227
  • Karma: +1/-0
Re: Geometry optimization with "jobex" fails during the relaxation step
« Reply #7 on: July 02, 2008, 12:53:27 PM »

job.last:

Code:

 total memory allocated for calculation of (Q|P)**(-1/2) : 129 MiB


     calculation of (P|Q) ...
fine, there is no data group "$actual step"
next step = ricc2

OK, so the problem occurred in ricc2. I have noticed that parallel ricc2 sometimes does not print the final error message in the slave1.output file, but in some other slaveN.output file. So it might also be worthwhile to check the other slaveN.output files to see if you can find the final error message. However, if the problem is due to stack size limits as Uwe suggested, there probably won't be any additional messages.
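For example, the tails of all slave outputs can be inspected in one pass (a one-liner, assuming bash):

Code:
for f in slave*.output; do echo "== $f =="; tail -n 5 "$f"; done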

Matteo Guglielmi

  • Jr. Member
  • **
  • Posts: 20
  • Karma: +0/-0
Re: Geometry optimization with "jobex" fails during the relaxation step
« Reply #8 on: July 02, 2008, 01:42:17 PM »
Hi,

what about your user limits? Could you check what

ulimit -a

for bash/ksh/sh, or

limit

for csh/tcsh

says?

Uwe

Here we go:

Code:
[matteo@lcbcpc24 01-RIMP2]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 77824
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Matteo Guglielmi

  • Jr. Member
  • **
  • Posts: 20
  • Karma: +0/-0
Re: Geometry optimization with "jobex" fails during the relaxation step
« Reply #9 on: July 02, 2008, 02:13:38 PM »
OK, so the problem occurred in ricc2. I have noticed that parallel ricc2 sometimes does not print the final error message in the slave1.output file, but in some other slaveN.output file. So it might also be worthwhile to check the other slaveN.output files to see if you can find the final error message. However, if the problem is due to stack size limits as Uwe suggested, there probably won't be any additional messages.

About the 8 output files:

The tails of the slave[2-7].output files do not differ at all from that of slave1.output (the ricc2 module),
while slave8.output still refers to the successful completion of the (previous) dscf step.

Matteo Guglielmi

  • Jr. Member
  • **
  • Posts: 20
  • Karma: +0/-0
Re: Geometry optimization with "jobex" fails during the relaxation step
« Reply #10 on: July 02, 2008, 03:01:44 PM »
The problem was the limited stack size, which can easily be set to "unlimited" with the following bash command:

Code:
ulimit -s unlimited
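
For parallel runs the limit should be raised somewhere the slave processes inherit it, for example in the shell startup file or at the top of the queueing-system job script (a sketch; csh/tcsh uses a different syntax):

Code:
# bash/ksh/sh, e.g. in ~/.bashrc or in the job script before starting jobex:
ulimit -s unlimited

# csh/tcsh equivalent:
limit stacksize unlimited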

Thanks guys,
MG.
« Last Edit: July 02, 2008, 03:31:34 PM by quantumwire »

antti_karttunen

  • Sr. Member
  • ****
  • Posts: 227
  • Karma: +1/-0
Re: Geometry optimization with "jobex" fails during the relaxation step [SOLVED]

Hello,

what TURBOMOLE version are you using? I remember we had similar issues with TM 5.9.1 (ricc2 did not write gradients), but the problem disappeared when we updated to 5.10.

janwahl

  • Jr. Member
  • **
  • Posts: 13
  • Karma: +0/-0
Re: Geometry optimization with "jobex" fails during the relaxation step [SOLVED]
« Reply #12 on: September 14, 2011, 11:51:06 AM »
Hello,
I just wanted to ask if this might be a problem with parallel runs. I ask because we have the same error here with TURBOMOLE 6.0.2 when doing a parallel geometry optimization at the CC2 level, and still no solution.

I would appreciate any help.

best regards

Jan Wahl



EDIT:

Problem solved! I just forgot to use cbas to create the auxiliary basis set.

Thanks to the TURBOMOLE support!
« Last Edit: September 16, 2011, 01:31:54 PM by janwahl »