Author Topic: Finding transition state @ RI-MP2

ztehrani

Finding transition state @ RI-MP2
« on: April 16, 2015, 10:31:46 AM »
Dear Users:
Hi. I have a question: what should I do to find a transition state (TS) at the RI-MP2 level? Is a TS search available at this level or not? I found some discussion about how to find a TS in TURBOMOLE, but I have trouble writing the script for this level. It is worth mentioning that I already have the initial structure of the TS. As far as I can tell, I only need to insert these keywords in the control file:
$statpt
 hssfreq 0
 itrvec 1
and then I think something like the following, to calculate frequencies, read the Hessian, perform the TS optimization, and calculate frequencies again:
NumForce>numforce.out
jobex -ri  -level cc2 -c 300 -energy 7 -gcart 4 -statpt -trans > ricc2-mpi.out
NumForce>numforce.out

in the submit script.

Any help will be appreciated.

Thanks in advance

Zahra

Arnim

Re: Finding transition state @ RI-MP2
« Reply #1 on: April 17, 2015, 10:20:41 AM »
Hi!

Of course, you can run a TS search with MP2.

Your NumForce call is wrong. Just calling NumForce without options will give you the HF Hessian.
You would have to start it with 'NumForce -level cc2' or 'NumForce -level cc2 -c'.

'NumForce -h' will give you the possible options.

Also, you would have to remove the numforce subdirectory before running NumForce again in the same directory.
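
For example, a second run in the same directory could then look like this (just a sketch combining the two points above; adapt as needed):
  rm -rf numforce                          # clear the data from the previous NumForce run
  NumForce -level cc2 -c > numforce.out    # numerical Hessian, energies/gradients from the ricc2 module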

Cheers,

Arnim

ztehrani

Re: Finding transition state @ RI-MP2
« Reply #2 on: April 19, 2015, 10:47:52 AM »
Dear Arnim,
Hi. Thank you so much for your reply. So, to find the transition state of acrolein with the SVP basis at the RI-MP2 level, the control file and the scripts should be as follows. It would be appreciated if you could let me know whether these files are correct and, if not, which corrections should be applied to them.
Best Wishes
Zahra
Control file:
$title
$operating system unix
$symmetry c1
$redundant    file=coord
$statpt
 hssfreq 0
 itrvec 1
$coord    file=coord
$user-defined bonds    file=coord
$atoms
c  1-3                                                                         \
   basis =c SVP                                                                \
   cbas  =c SVP
o  4                                                                           \
   basis =o SVP                                                                \
   cbas  =o SVP
h  5-8                                                                         \
   basis =h SVP                                                                \
   cbas  =h SVP
$basis    file=basis
$rundimensions
   dim(fock,dens)=3336
   natoms=8
   nshell=36
   nbf(CAO)=80
   nbf(AO)=76
   dim(trafo[SAO<-->AO/CAO])=88
   rhfshells=1
$scfmo   file=mos
$closed shells
 a       1-15                                   ( 2 )
$scfiterlimit       300
$scfconv        7
$thize     0.10000000E-04
$thime        5
$scfdamp   start=0.300  step=0.050  min=0.100
$scfdump
$scfintunit
 unit=30       size=0        file=twoint
$scfdiis
$scforbitalshift  automatic=.1
$drvopt
   cartesian  on
   basis      off
   global     off
   hessian    on
   dipole     on
   nuclear polarizability
$interconversion  off
   qconv=1.d-7
   maxiter=25
$optimize
   internal   on
   redundant  on
   cartesian  off
   global     off
   basis      off   logarithm
$coordinateupdate
   dqmax=0.3
   interpolate  on
   statistics    5
$forceupdate
   ahlrichs numgeo=0  mingeo=3 maxgeo=4 modus=<g|dq> dynamic fail=0.3
   threig=0.005  reseig=0.005  thrbig=3.0  scale=1.00  damping=0.0
$forceinit on
   diag=default
$energy    file=energy
$grad    file=gradient
$forceapprox    file=forceapprox
$lock off
$maxcor    20000
$denconv     0.10000000E-06
$cbas    file=auxbasis
$freeze
 implicit core=    4 virt=    0
$ricc2
  mp2
  geoopt model=cc2       state=(x)
$last step     define
$end

Script 1 (subturb-rimp2-ts):
#!/bin/bash

`perl -pi -e "s/\r//" $1`
here=`pwd`
job=`basename $1 .dat`
proc=$2


sed 's/INPUT/'${job}'/g' /home/bin/turbo-rimp2-ts.sh > $here/${job}.cmd2
sed 's/NPROC/'${proc}'/g' $here/${job}.cmd2 > ${job}.cmd

qsub ${job}.cmd
########################################################################

echo ""
echo ""

Script 2 (turbo-rimp2-ts.sh):

#!/bin/bash

# pe request
#$ -pe mpi_NPROC NPROC

# our Job name
#$ -N INPUT

#$ -S /bin/bash

#$ -q all.q
#$ -cwd

WDIR=/work/$USER/INPUT.$$
mkdir -p $WDIR

DIR=`pwd`
cp * $WDIR
cd $WDIR

export PARA_ARCH=MPI
export TURBODIR=/opt/TURBOMOLE
export PATH=$PATH:$TURBODIR/bin/x86_64-unknown-linux-gnu_mpi:$TURBODIR/scripts:$TURBODIR/mpirun_scripts
export PARNODES=NPROC

NumForce -level cc2 > numforce.out
jobex -ri  -level cc2 -c 300 -energy 7 -gcart 4 -statpt -trans > ricc2-mpi.out
NumForce>numforce.out

cp -rf * $DIR;

rm -rf $WDIR



Arnim

Re: Finding transition state @ RI-MP2
« Reply #3 on: April 20, 2015, 10:28:01 AM »
Hi!

I can't really proofread all that in detail.
But this:
  NumForce -level cc2 > numforce.out
  jobex -ri  -level cc2 -c 300 -energy 7 -gcart 4 -statpt -trans > ricc2-mpi.out
  NumForce>numforce.out
Would have to look more like this:
  NumForce -level cc2 > numforce.out
  jobex -ri  -level cc2 -c 300 -energy 7 -gcart 4 -statpt -trans > ricc2-mpi.out
  NumForce -level cc2 >numforce2.out

However, I would not recommend running these 3 steps in one submission. It is better to submit only the NumForce step first, check whether everything worked, and visualise the imaginary mode. Then run jobex and visualise the resulting geometry. And then run NumForce again. In short, check the results after each step; that can save you analysis time if something doesn't work. See the sketch below.
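
A rough sketch of that staged workflow (each block submitted and checked separately; the output file names are just the ones from above):
  # step 1: numerical Hessian of the start structure; check that there is exactly one imaginary mode
  NumForce -level cc2 > numforce.out
  # step 2: transition-state optimization starting from that Hessian; check the resulting geometry
  jobex -ri -level cc2 -c 300 -energy 7 -gcart 4 -statpt -trans > ricc2-mpi.out
  # step 3: remove the old numforce directory, then recompute the Hessian at the optimized structure
  rm -rf numforce
  NumForce -level cc2 > numforce2.out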

Cheers,

Arnim

ztehrani

Re: Finding transition state @ RI-MP2
« Reply #4 on: April 20, 2015, 10:47:05 AM »
Dear Arnim,
Hi. Thanks a lot for your comment. I'll test your recommendation.

Best Wishes

Zahra

ztehrani

Re: Finding transition state @ RI-MP2
« Reply #5 on: April 21, 2015, 08:12:07 AM »
Dear Arnim,
Hi. I tested the job the way you recommended, but it does not work. I used the following steps:
Define
Basis set
EHT
Statp
SCF
MP2
FREEZE
CBAS
END
I am now confused, because I also ran a sample test in the following way and it finished successfully. I just used:
Define
Basis set/cc-PVDZ level
EHT
Statp
itrvec 1
and then ended define. I used the following command to run it (based on the manual):
jobex -trans
My question is: since I did not define the calculation method (i.e., DFT, RI-DFT, RI-MP2, ...), how did this job finish successfully? I suppose it was calculated with some default method in TURBOMOLE. Another question: is it necessary to define the method in the control file, or is it enough to define it only in the script? Could you please help me with what I should put in the control file at the RI-MP2 level?
Thanks in advance
Best Wishes
Zahra

Arnim

Re: Finding transition state @ RI-MP2
« Reply #6 on: April 21, 2015, 10:39:53 AM »
Dear Zahra,

if you run just 'jobex -trans' with the settings in your control file, you run a Hartree-Fock calculation. When you look in the job.last file, you will see the outputs of dscf and grad telling you this. For an MP2 calculation the ricc2 module would be used.
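
For reference, a minimal sketch of the data groups that select MP2 (they are mostly already in your control file; 'geoopt model=mp2' is what I would expect for an MP2 ground-state optimization, please compare with the $ricc2 section of the manual):
$denconv     0.10000000E-06
$freeze
 implicit core=    4 virt=    0
$cbas    file=auxbasis
$ricc2
  mp2
  geoopt model=mp2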

'jobex -h' will give a short explanation of the possible options.

Best wishes,

Arnim

ztehrani

Re: Finding transition state @ RI-MP2
« Reply #7 on: April 26, 2015, 06:28:49 AM »
Dear Arnim,
Hi. Thank you so much for your valuable comments. I tested them and they worked correctly in serial mode. I still have some problems running the job with scripts that include these keywords. I read your comments on this in previous topics of the TURBOMOLE forum. As far as I found, I should use the '-mfile' option (or another mechanism) to run NumForce in parallel on an MPI machine (not SMP). As you mentioned before, this job should be submitted to run on the node's CPUs.
(http://www.turboforum.com/index.php/topic,936.msg2633.html#msg2633). I could not find a complete script for running the job (in parallel or serial mode), because I do not have enough knowledge of script writing. I tested many scripts, such as the following one, but they did not work. It would be appreciated if you could help me with which script I should use to solve this problem.
Best Wishes
Zahra
#!/bin/bash

# pe request
#$ -pe mpi_NPROC NPROC

# our Job name
#$ -N INPUT

#$ -S /bin/bash

#$ -q all.q
#$ -V
#$ -cwd

WDIR=/work/${USER}/INPUT.$$
DIR=`pwd`

mkdir -p $WDIR
cp -fr * $WDIR
cd $WDIR

export PARA_ARCH=MPI
export TURBODIR=/opt/TURBOMOLE
export PATH=$PATH:$TURBODIR/bin/x86_64-unknown-linux-gnu_mpi:$TURBODIR/scripts:$TURBODIR/mpirun_scripts
export PARNODES=NPROC

#NumForce -machinefile $TMPDIR/machines -n $NSLOTS > NumForce.out
#NumForce -level cc2 -machinefile $TMPDIR/machines > NumForce.out

NumForce -level cc2 -mfile $TMPDIR/machines > NumForce.out

cp -fr * $DIR
cd $DIR
rm -fr $WDIR

exit
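
One thing I am not sure about: whether $TMPDIR/machines actually exists on the compute node. If it does not, I guess the machines file could be built from $PE_HOSTFILE (assuming the usual Grid Engine format of one "host  nslots  queue ..." line per node), for example:
  # write one host name per granted slot, then hand that file to NumForce
  awk '{for (i = 0; i < $2; i++) print $1}' $PE_HOSTFILE > $WDIR/machines
  NumForce -level cc2 -mfile $WDIR/machines > NumForce.out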