Hi Arnim,
Thanks for your response.
> could you give a bit more input?
> How is $macor set? How large is your system and
> how many nodes are you using?
Sorry, is $macor an option in the control file? It is not present in
my control file. Could you explain what it specifies?
Regarding the system, it is very small, just a water molecule
(it's a test calculation). The dscf run finished fine, but ricc2 crashes.
I am using 4 nodes with 4 processors each. I have also run this job
as a serial calculation, and it went OK. The ulimits also seem to be fine;
the stack size is set to "unlimited".
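For reference, this is roughly how I checked the limit (a minimal sketch; in a parallel run the same check would have to be repeated on each compute node, e.g. via the queueing system or ssh, since the limit that matters is the one inherited by the ricc2 processes there):

```shell
# Print the stack-size limit the current shell (and any program it
# launches) will inherit; "unlimited" is the desired value here.
ulimit -s
```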
The job crashes at the very beginning, just when the optimization of
the ground-state cluster amplitudes starts, i.e. even the MP2 energy is
not calculated. The end of the output looks like the following:
ri-cc2 ended abnormally
lbfmax,mbfblk: 1 0
========================
internal module stack:
------------------------
ricc2
cc_solve_t0
ccvecfun
cc_jgterm
========================
too small mbfblk in ccbasblk
ri-cc2 ended abnormally
ri-cc2 ended abnormally
ri-cc2 ended abnormally
ri-cc2 ended abnormally
MX:opt052:Remote endpoint is closed, peer=00:60:dd:47:b6:cc (opt055:0)
MPI Application rank 0 exited before MPI_Finalize() with status 13
Evgeniy