Author Topic: <trfdrs>: too big number of records gamma#1

straka

<trfdrs>: too big number of records gamma#1
« on: February 12, 2025, 04:24:56 PM »
Dear all,

while doing MP2 shifts for an Ag compound (non-relativistic on purpose), using ANO-RCC-unc (g and h functions cut out manually),
MPSHIFT stops with:   <trfdrs>: too big number of records gamma#1...
I am not sure where the error comes from and could not find it discussed anywhere here.

All the Best, Michal

uwe

Re: <trfdrs>: too big number of records gamma#1
« Reply #1 on: February 13, 2025, 05:27:27 PM »
Hi Michal,

did you run the mp2prep script first?

mp2prep -c

runs one (or several) 'dry'/statistics jobs to check how much memory and disk space the MP2 NMR chemical shielding job is going to need. This is conventional MP2, so it is quite resource-intensive...

If you did run it, check the sizes of the files in $mointunit. Perhaps the job is too big to be run in one loop. It is not necessarily the memory and disk space requirements; there are also some internal limits (this is partly very old code in Turbomole that still assumes it will run into problems when consuming too much).

You can try to split the calculation into parts so that mpshift loops over a subset of the occupied orbitals. This reduces the hardware requirements, but the calculation will take longer. The number of loops is set by $traloop <number>.
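For reference, the keyword is just an entry in the control file; a minimal fragment (the value 10 here is only an example) would look like:

```
$traloop 10
```

mp2prep with the -l option should set this entry for you, so editing it by hand is only needed for fine-tuning.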

Try with

mp2prep -c -l 10

and rerun. This will force mpshift to split the calculation into 10 parts (they all run one after the other, so be patient) while reducing the sizes of almost everything (memory, disk space, ...). The -l option just tells mp2prep to start with a minimum of 10 loops.

If it takes much too long: those 10 (or whatever you set in $traloop) passes can also be run by different processes, so a coarse-grained parallelization can be used. It needs to be set up manually, but the procedure is not overwhelmingly complicated. See the $trast and $trand keywords in the manual.
Disclaimer: I have not used this option in many years, but there is a test for it in the usual test suite that comes with Turbomole, so it should still work...
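A hand-made sketch of that coarse-grained scheme, based only on the $trast/$trand description above (not a tested recipe): copy the job into two directories and let each mpshift process handle a contiguous range of the $traloop passes, e.g. with $traloop 10:

```
$traloop 10
$trast 1
$trand 5
```

in the first copy, and

```
$traloop 10
$trast 6
$trand 10
```

in the second. Check the manual for how the partial results are collected afterwards.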

straka

Re: <trfdrs>: too big number of records gamma#1
« Reply #2 on: February 14, 2025, 11:39:14 AM »
Dear Uwe,

nice to hear from you after some years!
Thanks a lot. I did run mp2prep -c beforehand, and memory and disk space look fine.
I also noticed a message above the error which I should have reported before:

estimated number of records for dtdb#1-file =     687813
 exceeds maximum given by parameter ndirec   =     500000
....
  <trfdrs>: too big number of records dtdb#1

So I tested mp2prep -c -l n; this time n=6 was needed to get the number of dtdb#1 records under the 500000 limit, and it works for a smaller system. I am not sure how efficient it will be. For my largest system this needs n=16, so I will consider the $trast/$trand trick, which I used a long time ago.

I am not sure whether this is related to using the large uncontracted basis set for Ag (21s18p13d6f).

I only need the shielding for one atom, so I am now experimenting with whether $nucsel has an effect.
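If $nucsel behaves for mpshift as described in the manual for shielding calculations, restricting the run to silver might look like this in the control file (syntax hedged, so please check against the manual):

```
$nucsel "ag"
```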

Thanks a lot,

Best wishes,
Michal
 

