Severe Error during Ridft HFC Calculations
jcardol:
Dear Turbomole Users,
I'm running into an error when using the Turbomole 7.7 ridft module together with the X2C method to calculate HFCCs (hyperfine coupling constants) while employing the MPI version.
The error is:
--- Code: ---SEVERE ERROR from node: 0 parallel densao, flag.eq.2 untested
--- End code ---
I manually set up the control file through define to write this input:
--- Code: ---$title
sctest
$symmetry c1
$coord file=coord
$optimize
internal off
redundant off
cartesian on
$atoms
basis =x2c-QZVPPall
$basis file=basis
$ecp file=basis
$uhfmo_alpha file=alpha
$uhfmo_beta file=beta
$uhf
$alpha shells
a 1-60 ( 1 )
$beta shells
a 1-59 ( 1 )
$scfiterlimit 300
$scfdamp start=1.000 step=0.050 min=0.100
$scfdump
$scfdiis
$maxcor 500 MiB per_core
$energy file=energy
$grad file=gradient
$rx2c
$finnuc
$dft
functional b3-lyp
gridsize 5a
$scfconv 9
$scforbitalshift closedshell=.05
$rundimensions
natoms=6
$last step define
$end
--- End code ---
I then run a script which launches the calculations:
--- Code: ---#!/bin/tcsh
#$ -S /bin/csh
#$ -N sctest
#$ -pe smp 16
#$ -cwd
#Set Path
set dir = $cwd
source /etc/profile.d/modules.csh
module load turbomole/7.7_mpi
setenv PARA_ARCH MPI
setenv PARNODES 16
setenv OMP_NUM_THREADS 1
cp * $TMPDIR/
cd $TMPDIR
#Execute job
nohup dscf -np 16 > output
cp -pr * $dir
/aplic/turbomole/turbomole-7.7/TURBOMOLE/scripts/gtensprep.sh -msnso -hfc
cd x/
nohup ridft -np 16 > output
cd ../y/
nohup ridft -np 16 > output
cd ../z/
nohup ridft -np 16 > output
cd ../
cp -pr * $dir
exit
--- End code ---
The script seems to run successfully up to the first ridft: dscf completes correctly, but the outputs also report 'Missing spinor shell occupation number declaration!'. My guess is that this happens because no spinor.r or spinor.i files were created during the process, but I don't know why.
Any help would be greatly appreciated. :D
Best regards!
Joan
uwe:
Hello,
you are running Turbomole in an SMP environment, i.e. on a single node, so it is usually more efficient to use the SMP version rather than the MPI one.
1. Please set in your tcsh script instead of:
--- Code: ---module load turbomole/7.7_mpi
setenv PARA_ARCH MPI
setenv PARNODES 16
setenv OMP_NUM_THREADS 1
--- End code ---
this here:
--- Code: ---module load turbomole/7.7_mpi
setenv PARA_ARCH SMP
setenv PARNODES 16
source $TURBODIR/Config_turbo_env.csh
--- End code ---
This will start the SMP versions of the Turbomole modules instead of the MPI ones.
2. Next: Your input
Please add
--- Code: ---$rij
--- End code ---
to the control file. Two-component calculations are all done with the module ridft (not dscf), thus $rij is required in the control file.
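For reference, the relativistic and RI keywords would then appear together in the control file like this (a sketch built only from the keywords already present in your input; $rij switches on the RI-J approximation that ridft needs):

--- Code: ---$rij
$rx2c
$finnuc
$dft
   functional b3-lyp
   gridsize 5a
--- End code ---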
3. Finally, replace the call
--- Code: ---nohup dscf -np 16 > output
--- End code ---
by
--- Code: ---ridft > output
--- End code ---
The -np option can be skipped because you already set $PARNODES to the number of cores you want to use for the calculation, and nohup is not required within a script.
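Putting the three changes together, the execution part of the tcsh job script would then look roughly like this (a sketch, assuming the earlier `set dir = $cwd` line is kept; the module name and the gtensprep.sh path are copied from the original script and may differ on your system):

--- Code: ---module load turbomole/7.7_mpi
setenv PARA_ARCH SMP
setenv PARNODES 16
source $TURBODIR/Config_turbo_env.csh
cp * $TMPDIR/
cd $TMPDIR
# two-component step runs through ridft, not dscf
ridft > output
cp -pr * $dir
/aplic/turbomole/turbomole-7.7/TURBOMOLE/scripts/gtensprep.sh -msnso -hfc
# one ridft per prepared direction
foreach d (x y z)
  cd $d
  ridft > output
  cd ..
end
cp -pr * $dir
--- End code ---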
Hope this helps
jcardol:
Hi!
Thanks for your advice and insights, Uwe :D,
1. I've replaced the code in the tcsh script with the one you wrote.
2. I followed the Turbomole 7.7 User Guide (Section 18.1), which starts with a UHF/UKS calculation employing X2C ($rx2c) and the finite nucleus model ($finnuc). So I ran dscf first, and afterwards gtensprep.sh prepared and launched a ridft calculation for each vector (x, y, and z). I guess these would then become consecutive ridft calculations?
3. I've implemented your cleaner code changes to the script as well.
I've run a test with your changes to the script and it works perfectly.
Thank you for your time! :D
Joan