Dear All,
I ran into a problem running an RI-CC2 calculation (TM 6.0) in parallel on a cluster with Lustre
and a global tmpdir (i.e. tmpdir is on a shared disk served by Lustre). The calculation
sometimes fails directly after the CCS calculation and sometimes when the density is calculated.
The error message doesn't say much:
MPI Application rank 5 exited before MPI_Finalize() with status 13
The same calculation runs fine on another cluster with local /tmp disks, i.e. tmpdir
is local to each node. It follows that the problem is somehow related
to the use of the global tmpdir.
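For comparison, here is roughly how I point the scratch directory at node-local storage on the cluster where it works. This is only a sketch: the exact path and the use of TURBOTMPDIR as the scratch-directory variable are assumptions based on my setup, not a recommendation.

```shell
#!/bin/sh
# Hypothetical job-script fragment: direct TURBOMOLE scratch files to a
# node-local disk instead of the shared Lustre tmpdir.
# TURBOTMPDIR is the environment variable consulted for scratch space;
# the path /tmp/$USER/ricc2_scratch is an illustrative assumption.
export TURBOTMPDIR="/tmp/${USER:-tmuser}/ricc2_scratch"

# Create the scratch directory on the node before the parallel run starts.
mkdir -p "$TURBOTMPDIR"

echo "scratch directory: $TURBOTMPDIR"
```

With a global tmpdir, all ranks would instead point at the same Lustre path, which is where the failure appears.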
Any ideas on how to run TM ricc2 in parallel with a global tmpdir?
Many thanks!
Best regards,
Evgeniy