Hello,
Your script looks very nice; I think only some minor tweaking is required. It seems that you write the list of available nodes to the file "machines", but you do not set the Turbomole environment variable HOSTS_FILE, which tells TM about your own hostfile. The TM 6.0 mpirun_scripts contain the following section:
# check for environment variable containing hostfile name.
MACHINEFILE=""
if [ -n "${PBS_NODEFILE}" ]; then                          # PBS/Torque/Maui
    MACHINEFILE="${PBS_NODEFILE}"
elif [ -n "${HOSTS_FILE}" ]; then                          # manual settings
    MACHINEFILE="${HOSTS_FILE}"
elif [ -n "${TMPDIR}" -a -f "${TMPDIR}/machines" ]; then   # LSF, untested
    MACHINEFILE="${TMPDIR}/machines"
    KNOTEN=$NSLOTS
fi
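To see the precedence in action, here is a small standalone sketch of the same selection logic. The variable names (PBS_NODEFILE, HOSTS_FILE, TMPDIR, NSLOTS) are the ones the TM 6.0 mpirun_scripts actually test; the function wrapper and the echo are mine, added just for demonstration:

```shell
#!/bin/sh
# Sketch of the hostfile-selection logic quoted above.
pick_machinefile() {
    MACHINEFILE=""
    if [ -n "${PBS_NODEFILE}" ]; then                         # PBS/Torque/Maui
        MACHINEFILE="${PBS_NODEFILE}"
    elif [ -n "${HOSTS_FILE}" ]; then                         # manual settings
        MACHINEFILE="${HOSTS_FILE}"
    elif [ -n "${TMPDIR}" ] && [ -f "${TMPDIR}/machines" ]; then  # LSF/SGE
        MACHINEFILE="${TMPDIR}/machines"
        KNOTEN=$NSLOTS
    fi
    echo "$MACHINEFILE"
}

# Without HOSTS_FILE set, the SGE-generated file wins:
unset PBS_NODEFILE HOSTS_FILE
TMPDIR=$(mktemp -d); NSLOTS=17
touch "${TMPDIR}/machines"
pick_machinefile        # -> prints ${TMPDIR}/machines

# With HOSTS_FILE set, the manual setting takes precedence:
HOSTS_FILE=machines
pick_machinefile        # -> prints "machines"
```

This shows why setting HOSTS_FILE is enough: the manual branch is checked before the ${TMPDIR}/machines branch.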
and even though the script says that the last option is for "LSF", it actually works for SGE as well. So, because you have not set HOSTS_FILE, the mpirun_scripts will read the file ${TMPDIR}/machines and use the SGE variable $NSLOTS as the number of computing processes. Furthermore, the mpirun_scripts add one extra CPU for the dscf/grad/ridft/rdgrad server process:
if [ "${PARA_ARCH}" = "MPI" ] ; then
    KNOTEN=`expr $KNOTEN \+ 1`
fi
So, as your script uses the SGE setting "#$ -pe mpich1 17", you will end up with 18 processes: 17 computing processes plus one server process. The server process should not consume much CPU, but as we already discussed in another thread, with TM 6.0 it actually does.
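Put together, the arithmetic for your case looks like this (KNOTEN starts from the SGE $NSLOTS value of 17; the snippet reuses the lines from the mpirun_scripts quoted above):

```shell
#!/bin/sh
KNOTEN=17        # $NSLOTS from "#$ -pe mpich1 17"
PARA_ARCH=MPI
if [ "${PARA_ARCH}" = "MPI" ] ; then
    KNOTEN=`expr $KNOTEN \+ 1`   # one extra slot for the server process
fi
echo $KNOTEN     # -> 18
```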
My suggestions:
1) Define the environment variable HOSTS_FILE in your script after setting PARA_ARCH and PARNODES:
setenv HOSTS_FILE machines
Note that with SGE you should also be able to find the list of available nodes in the file "$TMPDIR/machines". Now Turbomole should respect your PARNODES setting.
2) For clarity, I suggest moving all SGE directives (lines starting with "#$") to the beginning of the file, right after the "#!/bin/csh" line.
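A hypothetical skeleton combining both suggestions might look like the following; the PE name, slot count, and the jobex invocation are placeholders taken from your setting, so adjust them to your actual job:

```csh
#!/bin/csh
# --- all SGE directives first ---
#$ -pe mpich1 17
#$ -cwd

# --- Turbomole parallel setup ---
setenv PARA_ARCH  MPI
setenv PARNODES   16          # your own PARNODES value
setenv HOSTS_FILE machines    # point TM at your own hostfile

# --- run the calculation (example invocation) ---
jobex -ri > jobex.out
```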
Hope this helps,
Antti