Hello,
First of all, thanks to the Turbomole team for the 5.10 release. The new additions to the feature set were very useful. It was also great to notice that the parallel em64t binaries are now much more stable than in previous releases. I have tested jobs with up to 9000 basis functions, and all of them have run successfully in parallel. We can now use the em64t binaries on our Intel-based machines instead of the x86_64 ones, which gives a nice speedup (roughly 30%).
I also have two small issues to report about the new release:
1) We have set up our computational facilities using the pretty popular Rocks Cluster distribution. The default naming scheme for compute nodes in Rocks is compute-0-0, compute-0-1, …, which leads to some problems with NumForce in TM 5.10. NumForce appends a unique number to each CPU on SMP hosts, using "-" as the separator:
machinename=$host"-"$i
Then, later on, the login name is parsed with cut, again using "-" as the separator:
loginname=`echo $machine | cut -f 1 -d "-"`
When the compute nodes are named compute-x-x, loginname always ends up as "compute", and NumForce fails.
Luckily, the fix was easy: changing all "-" separators to "_" did the trick. Perhaps the next released version of NumForce could use a separator other than "-"? That would save all Rocks users from having to patch NumForce.
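To illustrate, here is a minimal sketch of the bug and the "_" workaround. The hostname and CPU index values are made up for the example; the two quoted lines are the ones from NumForce above:

```shell
#!/bin/sh
# Example values: a typical Rocks node name and a CPU index (both assumptions).
host="compute-0-1"
i=2

# Original NumForce behaviour, with "-" as the separator:
machinename=$host"-"$i                           # compute-0-1-2
loginname=`echo $machinename | cut -f 1 -d "-"`  # cut stops at the FIRST "-"
echo "$loginname"                                # -> compute   (wrong)

# Patched behaviour, with "_" as the separator:
machinename=$host"_"$i                           # compute-0-1_2
loginname=`echo $machinename | cut -f 1 -d "_"`  # "_" never occurs in the hostname
echo "$loginname"                                # -> compute-0-1   (correct)
```

Since "_" is not used in the Rocks naming scheme, the first cut field is the full hostname again.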
2) All the mpirun_scripts were missing the line
em64t* ) echo '$parallel_platform MPP' >> control ;;
when setting $parallel_platform, so $parallel_platform ends up as "cluster" on em64t machines. In TM 5.9.1, the mpirun_scripts did set $parallel_platform to MPP on em64t (at least in the scripts that were available on the FTP server).
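For clarity, a hedged sketch of where the missing branch belongs. The variable name TARGET and the catch-all "cluster" branch are assumptions about the surrounding case statement; only the em64t* line itself is taken from the scripts:

```shell
#!/bin/sh
# TARGET stands in for whatever architecture string the mpirun_* script
# detects; "em64t" here is just an example value.
TARGET=em64t

case "$TARGET" in
    # ... branches for other architectures ...
    em64t* ) echo '$parallel_platform MPP'     >> control ;;  # the missing line
    *      ) echo '$parallel_platform cluster' >> control ;;  # assumed fallback
esac
```

With the em64t* branch present, the control file gets "$parallel_platform MPP" instead of falling through to "cluster".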