Author Topic: Huge Binaries  (Read 6056 times)

idet2

  • Jr. Member
  • **
  • Posts: 21
  • Karma: +0/-0
Huge Binaries
« on: June 27, 2009, 03:08:42 PM »
I am trying to optimize a structure using the HUGE binaries and have the following problem: the calculation starts with the ridft_huge module, but after a few minutes, before entering the SCF iterations, it breaks with a dscf error in job.last.

My impression is that because I am assigning almost the maximum amount of memory to ricore, no memory is left over and the job fails when it has to go to scratch. I have read somewhere that the huge modules consume much more memory than the regular ones, so I would like to know a typical value for this overhead in order to set ricore accordingly.


uwe

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 558
  • Karma: +0/-0
Re: Huge Binaries
« Reply #1 on: June 29, 2009, 11:23:35 AM »
Hi,

the huge binaries no longer need much more memory than the usual versions. The only difference lies in the amount of static memory that is allocated; you can determine that by comparing the output of size ridft and size ridft_huge.
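
For example, something along these lines should work on Linux (assuming both binaries are in your PATH; otherwise give the full path below $TURBODIR):

   size $(which ridft)
   size $(which ridft_huge)

The data and bss columns of the output show the statically allocated memory, so the difference between the two binaries is the extra static memory the huge version reserves.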

If an energy calculation stops without an error message, it is in almost all cases a memory problem: either the physical memory is too small, or the user limits are not set to unlimited or to a sufficiently large value. The stack size limit in particular causes this, see

http://www.turbo-forum.com/index.php?topic=23.0
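
The limits can be checked and, if necessary, lifted in the shell that starts the job, for example in bash (csh/tcsh users would use the limit/unlimit built-ins instead):

   ulimit -a              # show the current limits
   ulimit -s unlimited    # remove the stack size limit for this shell and its children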

Uwe

idet2

  • Jr. Member
  • **
  • Posts: 21
  • Karma: +0/-0
Re: Huge Binaries
« Reply #2 on: June 30, 2009, 08:43:15 AM »
Thx for the info!

What I did to solve the problem was to reduce ricore by 1000 MB, and then the error was gone. The same input on the same node etc., with less ricore, has been running without any problem so far (cross your fingers....)!
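
For anyone finding this later: the change is just the $ricore entry in the control file, whose value is given in MB. The number below is only an illustration, not my actual setting:

   $ricore 3000

So instead of assigning nearly all of the physical memory to ricore, roughly 1000 MB are now left free for the rest of the program.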

Regards