Author Topic: negative twoint file size after statistics run  (Read 6606 times)


negative twoint file size after statistics run
« on: May 02, 2007, 07:20:27 PM »
Dear community,

The statistics run always ends up writing a negative file size for the twoint file. After I change this negative number to a positive number or zero and resubmit the job, it runs fine. Is there a way to work around that?




Re: negative twoint file size after statistics run
« Reply #1 on: September 26, 2008, 09:08:04 PM »

It used to be faster to use local scratch files for integral storage on clusters, but the systems that can be treated keep getting larger and larger, while I/O speed has not increased dramatically in recent years compared to the reduction in CPU time. Not to forget the increasing number of CPU cores per node, and per hard disk...

Technically, the negative file size is a 32-bit integer overflow, but we have chosen not to change that, to avoid excessive disk usage, which would slow down the calculation or even bring your whole system down.
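The overflow is easy to see with a short sketch (Python here purely for illustration; Turbomole itself is Fortran, and the exact variable involved is an assumption): once the byte count of the twoint file exceeds 2^31 - 1, a signed 32-bit field wraps around and comes out negative.

```python
import ctypes

def as_int32(nbytes):
    """Interpret a byte count the way a signed 32-bit (INTEGER*4)
    file-size field would: values above 2**31 - 1 wrap negative."""
    return ctypes.c_int32(nbytes & 0xFFFFFFFF).value

print(as_int32(2**31 - 1))    # largest representable size: 2147483647 (~2 GiB)
print(as_int32(3 * 1024**3))  # a 3 GiB twoint file wraps to -1073741824
```

This is why editing the size back to zero or a positive number lets the job continue: the stored number is only the wrapped counter, not the actual amount of data on disk.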

dscf in the next Turbomole release will not use disk space by default, so the negative file size problem will no longer occur, while it will still be possible to enable that feature manually.

Meanwhile, you could patch the dscf script in $TURBODIR/mpirun_scripts: instead of canceling the job when the file size is negative, you could reset it to zero.
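As a sketch of that workaround (the `size=` keyword and the way the twoint size is stored in the control file are assumptions on my part, so check against your own file before using anything like this):

```python
import re

def clamp_negative_size(text):
    """Replace a negative integral-file size entry (size=-N) with size=0.
    The 'size=' syntax is an assumption about the control file format;
    adapt the pattern to whatever your file actually contains."""
    return re.sub(r"size=\s*-\d+", "size=0", text)

# Hypothetical control-file fragment for illustration only:
control = "$scfintunit\n unit=30 size=-1073741824 file=twoint\n"
print(clamp_negative_size(control))
```

The same one-line substitution could of course be done with sed inside the dscf script itself.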

Another possibility, which many users seem to go for, is to set $thime and $thize to higher values:

$thime 99
$thize 1.0

before starting the parallel calculation. $thime and $thize determine how 'long' an integral evaluation may take, and how 'big' it may be, for the result to be worth storing to disk.