Recent Posts

1. Ricc2 / Are two-component RIMP2/RICC2 calculations possible?
« Last post by vvallet on October 05, 2021, 04:21:21 PM »
Is it currently possible to perform two-component RICC2 calculations? I have seen that 2c auxiliary basis sets exist for RI-J and RI-JK, for example def2-TZVP-2c, but what about the auxiliary correlation basis sets? Should one use the "standard" ones?
Thanks for your tips.
2. Ridft, Rdgrad, Dscf, Grad / PEECM is not working - unit cell is not charge neutral
« Last post by skundu07 on October 01, 2021, 01:06:36 AM »
Hello - I am seeing the following error while doing a PEECM calculation. It says that the unit cell is not charge neutral, although my charges are balanced and meet the tolerance limit:

 Charge Neutrality tolerance :   0.1000000D-04
 Total charge                :  -0.2138029D-12

The full error output is:
 1e-integrals will be neglected if expon. factor < 0.643180E-13

========================
 internal module stack:
------------------------
    ridft
    allone
    symoneint
========================

 fatal error for MPI rank    0

 Unit cell not charge neutral!
 ridft ended abnormally

Does anyone have any idea about this error?

My control file is:
$title
$symmetry c1
$user-defined bonds    file=coord
$coord    file=coord
$embed    file=embedded
$optimize
 internal   off
 redundant  off
 cartesian  on
 global     off
 basis      off
$atoms
h  1-52,180                                                                    \
   basis =h def2-TZVP                                                          \
   jbas  =h def2-TZVP
si 53-91,93-103                                                                \
   basis =si def2-TZVP                                                         \
   jbas  =si def2-TZVP
al 92                                                                          \
   basis =al def2-TZVP                                                         \
   jbas  =al def2-TZVP
o  104-179                                                                     \
   basis =o def2-TZVP                                                          \
   jbas  =o def2-TZVP
$basis    file=basis
$scfmo   file=mos
$closed shells
 a       1-687                                  ( 2 )
$scfiterlimit       6000
$thize     0.10000000E-04
$thime        5
$scfdamp   start=0.300  step=0.050  min=0.100

$scfintunit
 unit=30       size=0        file=twoint
$scfdiis
$maxcor    500 MiB  per_core
$scforbitalshift  automatic=0.3
$drvopt
   cartesian  on
   basis      off
   global     off
   hessian    on
   dipole     on
   nuclear polarizability
$interconversion  off
   qconv=1.d-7
   maxiter=25
$coordinateupdate
   dqmax=0.3
   interpolate  on
   statistics    5
$forceupdate
   ahlrichs numgeo=0  mingeo=3 maxgeo=4 modus=<g|dq> dynamic fail=0.3
   threig=0.005  reseig=0.005  thrbig=3.0  scale=1.00  damping=0.0
$forceinit on
   diag=default
$energy    file=energy
$grad    file=gradient
$forceapprox    file=forceapprox
$dft
   functional wb97x
   gridsize   m4
$scfconv   7
$jbas    file=auxbasis
$ricore      500
$rij
$disp4
$rundimensions
   natoms=180
   nbf(CAO)=5196
   nbf(AO)=4561
$last step     define
$pop  nbo
$end
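For what it is worth, the check that fails here appears to simply compare the summed cell charges against the tolerance printed above (0.1000000D-04). A quick way to re-sum the charges of the embedding input is a one-liner; this is only a sketch and assumes the point charges sit in the last column of lines copied out of the input referenced by $embed file=embedded (cell_charges.dat is a hypothetical file name, and the real layout of the embedding file may differ):
Code:
# hypothetical helper: sum the last column, assumed to hold the point charges,
# of lines copied from the embedding input; adjust to the actual file layout
awk '{ q += $NF } END { printf "sum of charges = %.10e\n", q }' cell_charges.dat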

3. Escf and Egrad / Re: Symmetry breaking in evGW using contour deformation
« Last post by martijn on September 23, 2021, 11:10:58 PM »
Thanks Christof! Setting npoints to 512 or 1024 without qpeiter indeed gives results with minimal symmetry breaking that are very close to those obtained with analytic continuation:

512 points
 144b3      -6.050    -7.191   -17.677   -18.760     1.084   -16.536     1.000       0.00   
 144b2      -6.050    -7.189   -17.675   -18.760     1.085   -16.536     1.000       0.00   
 144b1      -6.050    -7.190   -17.677   -18.761     1.084   -16.536     1.000       0.00   
 144a       -6.028    -7.170   -17.573   -18.646     1.073   -16.431     1.000       0.00   
 ------------------------------------------------------------------------------------------
 145a       -3.896    -2.836    -8.982    -6.883    -2.099   -10.041     1.000       0.00   
 145b1      -3.624    -2.510    -9.878    -7.784    -2.094   -10.992     1.000       0.00   
 145b3      -3.624    -2.510    -9.878    -7.784    -2.094   -10.992     1.000       0.00   
 145b2      -3.624    -2.511    -9.879    -7.784    -2.094   -10.992     1.000       0.00


1024 points
 144b3      -6.050    -7.191   -17.677   -18.760     1.084   -16.536     1.000       0.00   
 144b2      -6.050    -7.190   -17.676   -18.760     1.084   -16.536     1.000       0.00   
 144b1      -6.050    -7.191   -17.677   -18.761     1.084   -16.536     1.000       0.00   
 144a       -6.028    -7.170   -17.573   -18.646     1.073   -16.431     1.000       0.00   
 ------------------------------------------------------------------------------------------
 145a       -3.896    -2.836    -8.982    -6.883    -2.099   -10.041     1.000       0.00   
 145b1      -3.624    -2.510    -9.878    -7.784    -2.094   -10.992     1.000       0.00   
 145b3      -3.624    -2.510    -9.878    -7.784    -2.094   -10.992     1.000       0.00   
 145b2      -3.624    -2.511    -9.879    -7.784    -2.095   -10.992     1.000       0.00


GW block:
$rick
$rigw
  rpa
  evgw
  npoints 1024
  mxdiis       8
  eta       0.00100000
  contour
  contour start=142 end=146 irrep=1 ! a
  contour start=142 end=146 irrep=2 ! b1
  contour start=142 end=146 irrep=3 ! b2
  contour start=142 end=146 irrep=4 ! b3
4. Ricc2 / Predicted spectrum from RICC2 calculation
« Last post by Martylea on September 22, 2021, 04:30:02 PM »
Hi there

I am trying to calculate UV/Vis absorption spectra for a few molecules using the RI-CC2 method. I have calculated the first two singlet excited states and the calculations converged for each of them. I get the output in the exstates file, and I also have the $spectrum nm flag in my control file to make sure the data are printed. I was expecting the $spectrum keyword to generate a predicted UV/Vis spectrum with Lorentzian/Gaussian broadening to create peak shapes, but I am not sure how to do this with Turbomole. I am a relatively new user, though. I have tried to perform the calculations in TMoleX, but I struggle with the GUI and much prefer to run calculations from the command line.

In short, how do I plot a predicted spectrum?

Thanks
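For illustration only: the $spectrum keyword makes ricc2 print the excitation wavelengths and oscillator strengths, but the broadening and plotting is usually done as a post-processing step. A minimal sketch, assuming the wavelengths (nm) and oscillator strengths have been copied by hand into a two-column file sticks.dat; the file name, the 200-800 nm grid, and the 20 nm Gaussian width are arbitrary choices:
Code:
# minimal sketch: Gaussian-broaden a stick spectrum (columns: wavelength_nm  osc_strength)
# onto a 200-800 nm grid with a 20 nm width; sticks.dat is an assumed input file
awk 'BEGIN { w = 20.0 }
     { lam[NR] = $1; f[NR] = $2 }
     END {
       for (x = 200; x <= 800; x++) {
         s = 0
         for (i = 1; i <= NR; i++)
           s += f[i] * exp(-0.5 * ((x - lam[i]) / w)^2)
         printf "%8.1f %12.6f\n", x, s
       }
     }' sticks.dat > spectrum.dat
The resulting spectrum.dat can then be plotted with gnuplot or any other plotting tool.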
5. Parallel Runs / Re: SLURM multinode parallel problem
« Last post by saikat403 on September 21, 2021, 06:55:02 PM »
I found a way out: setting ulimit -s unlimited in .bashrc seems to solve the problem.
Thanks
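For anyone landing here later, the line in question goes into ~/.bashrc on the cluster; whether the non-interactive shells started on the compute nodes actually read ~/.bashrc depends on the cluster setup, so treat this as a sketch:
Code:
# in ~/.bashrc on the cluster nodes: raise the stack size limit for all new shells
ulimit -s unlimited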
6. Parallel Runs / Re: SLURM multinode parallel problem
« Last post by saikat403 on September 20, 2021, 01:45:37 PM »
Thanks for the reply

I have added two more commands:
Code:
echo $HOSTNAME >> mylimits.out
ulimit -s  unlimited
ulimit -a >> mylimits.out

and the output in mylimits.out is:
Code:
cn337
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 768120
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) 176128000
open files                      (-n) 131072
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

You are right, this is from one node only.
Is there a way to set the same limit on the other node?

Or do I have to ask the admins to change /etc/security/limits.conf?
7. Parallel Runs / Re: SLURM multinode parallel problem
« Last post by uwe on September 20, 2021, 12:56:50 PM »
Hello,

segmentation faults often happen because memory limits are too small; in particular, a small stack size limit causes such crashes.

See: https://forum.turbomole.org/index.php/topic,23.0.html

Note that queuing systems often set the stack size limit themselves. If you have 'ulimit -s unlimited' in your SLURM submit script, it is only applied on the first node. The processes started by MPI on the other nodes will still have the default stack size limit, and it seems that this default is too small on the cluster you are using.
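A quick way to see which limits actually arrive on every allocated node is to run something like the following from inside the job script (a sketch assuming a standard SLURM setup; if the MPI ranks are launched via ssh rather than srun, the limits they inherit may still differ):
Code:
# print hostname and stack size limit once per allocated node
srun --ntasks=$SLURM_NNODES --ntasks-per-node=1 bash -c 'echo "$(hostname): stack = $(ulimit -s)"'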

Regards
8. Parallel Runs / SLURM multinode parallel problem
« Last post by saikat403 on September 20, 2021, 12:40:41 PM »
Hello all,
I am using Turbomole 7.3 with the SLURM queuing system. Both ridft and rdgrad run perfectly fine on a single node with 40 processes. However, when I try to run in parallel on more than one node, rdgrad seems to cause a problem. Here is the SLURM script I am using:
Code:
#!/bin/bash
#SBATCH -J turbo-test   
#SBATCH -p standard-low
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40
#SBATCH -t 00:15:00    # walltime in HH:MM:SS, Max value 72:00:00

export TURBODIR=/home/17cy91r04/apps/turbomole730
export PARA_ARCH=MPI
export PATH=$TURBODIR/bin/`$TURBODIR/scripts/sysname`:$PATH
export TURBOMOLE_SYSNAME=x86_64-unknown-linux-gnu
export PATH=$TURBODIR/bin/${TURBOMOLE_SYSNAME}_mpi:$TURBODIR/mpirun_scripts:$TURBODIR/scripts:$PATH
export PARNODES=$SLURM_NTASKS
jobex -ri -c 999 > jobe.out

The error I am getting in job.1 looks like this:
Code:
rdgrad_mpi         0000000000433429  Unknown               Unknown  Unknown
rdgrad_mpi         00000000029265FD  Unknown               Unknown  Unknown
rdgrad_mpi         0000000000476960  dlp3_                     427  dlp3.f
libpthread-2.17.s  00007FD82B82C630  Unknown               Unknown  Unknown
libpthread-2.17.s  00007F086A13C630  Unknown               Unknown  Unknown
forrtl: severe (174): SIGSEGV, segmentation fault occurred
libpthread-2.17.s  00007FCBAA17B630  Unknown               Unknown  Unknown
rdgrad_mpi         0000000000476960  dlp3_                     427  dlp3.f
rdgrad_mpi         0000000000471B2F  twoder_                   418  twoder.f
rdgrad_mpi         0000000000480C0E  dasra3_                   155  dasra3.f
rdgrad_mpi         0000000000480C0E  dasra3_                   155  dasra3.f
rdgrad_mpi         0000000000480C0E  dasra3_                   155  dasra3.f
rdgrad_mpi         0000000000471B2F  twoder_                   418  twoder.f

Is there anything wrong with the SLURM script?
How can I make Turbomole run in parallel across nodes?
Thanks in advance.

with regards,
saikat

9. Define / Re: Is it possible to increase the maximum number of atoms and basis functions?
« Last post by uwe on September 15, 2021, 04:42:41 PM »
Hi there,

as old posts are often found but not recognized as outdated, here's an update:

Turbomole does not have a limit for the number of atoms or basis functions any more.
However, the size of the input is limited by your hardware, especially the amount of memory that is available.

Some modules, like dscf, need hardly any memory; others, like the second-derivative modules (aoforce, ...), have higher requirements.
10. Escf and Egrad / Re: Egrad
« Last post by uwe on September 15, 2021, 03:47:11 PM »
Hello,

this is a strange error message, as it should a) only happen if there is a problem with memory consumption and b) only appear at the very end of egrad.

Could you please send the input and output to the Turbomole support team? Thanks!