TURBOMOLE Users Forum
TURBOMOLE Modules => Ridft, Rdgrad, Dscf, Grad => Topic started by: Molcasito on June 09, 2008, 09:27:08 AM
-
Hello All TURBOMOLE Users,
I have a calculation on about 113 atoms, and when I try to run a DSCF calculation it prints the following error at startup and stops. It created the integrals, but I do not know where this error comes from.
Has anyone seen it before? Any help on how to run it would be appreciated.
Many thanks
maximum number of buffers which may be written onto 2e-file(s) = 11328
STARTING INTEGRAL EVALUATION FOR 1st SCF ITERATION
time elapsed for pre-SCF steps : cpu 69.956 sec
wall 70.047 sec
<put> : FILE SPACE EXHAUSTED --> switching to 'direct' mode !
written (= 11328) + newbuffer > max (= 11328)
wrote 3 tasks to file
<putend> : FILE SPACE EXHAUSTED --> switching to 'direct' mode !
written (= 11328) + newbuffer > max (= 11328)
23201792 2 e - integrals written in 11328 blocks requiring 181248 k-byte
l-qun in <asra> messy
dscf ended abnormally
-
This looks like a bug. Please send a bug report to the support desk (COSMOLogic).
Christof
-
Hi,
Thanks, Molcasito, for sending us the input files. I think there are two things we can learn from this:
- the file size of the twoint file (keyword $scfintunit) was negative. Since this value is set automatically by the pre-step of the parallel calculation, Turbomole should change it to zero in the next version: better not to use scratch files at all than to get errors. Also, the settings of $thime and $thize, the thresholds dscf uses to decide which integrals are expensive enough to be written to disk, have become a bit outdated; I guess the default values should be changed.
- dscf tries to assign a certain minimum number of integral evaluations to each parallel task, but again the default value has become too small, since people can calculate much larger molecules nowadays. This can be changed by raising the maxtask value in $parallel_parameters, and dscf tells the user to raise that value if it fails to determine a reasonable task distribution. But there is an upper limit for maxtask which depends on the size of the molecule and the basis set! So if maxtask is set to a value that is too high (something between 1000 and 5000 should work), one gets the error message that Molcasito got.
Uwe
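For illustration, the keywords discussed above are entries in the TURBOMOLE control file. A sketch of what the relevant lines might look like (the unit number, file name, sizes, and maxtask value here are placeholders for illustration, not recommended settings; adjust them to your own system, and in particular make sure the size value is not negative):

```
$scfintunit
 unit=30  size=10000  file=twoint
$thize  0.10000000E-04
$thime  5
$parallel_parameters
 maxtask=3000
```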
-
Hi all.
I get the same warning with the escf module [TM 5.10], which, to my knowledge, is not parallelized yet.
The job wasn't killed by the program, however, but it is running very slowly [it's a 50-atom system/TZVP with 19 excited states, though].
I set the path of the twoint files to a local scratch directory and took their sizes from the previous dscf statistics run. The $parallel_parameters maxtask is 2000, and I didn't experience any problems with the dscf and jobex optimizations, which ran as parallel jobs and were very fast, as usual.
What's my mistake?
Should I act on the $thime or $thize keywords? And, if so, how?
Any help will be appreciated.
onthefly
@-------------------------------------------------------------------------
Iteration IRREP Converged Max. Euclidean
roots residual norm
<put> : FILE SPACE EXHAUSTED --> switching to 'direct' mode !
written (= 956544) + newbuffer > max (= 956544)
<putend> : FILE SPACE EXHAUSTED --> switching to 'direct' mode !
written (= 956544) + newbuffer > max (= 956544)
1959004160 2 e - integrals written in 956544 blocks requiring ******* k-byte
1 a 0 1.518674155085859D-01
[...]
-
An escf run for 19 states will of course take considerably longer than a (sequential) dscf run. If you run dscf in parallel, I recommend doing a new sequential statistics run of dscf before escf. However, for 19 excited states you will not save that large a fraction of the wall time by running escf semi-direct.
Christof
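The statistics run mentioned above can be sketched as follows (this assumes the usual control-file workflow; the exact keyword handling may differ between TURBOMOLE versions, so check the documentation for your release):

```
# in the control file, set:
$statistics  dscf
# then run dscf once: instead of performing SCF iterations it only
# estimates the number of two-electron integrals and the required
# file sizes, updates $scfintunit accordingly, and switches the
# $statistics keyword off. Afterwards run dscf (or escf) as usual.
```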