Author Topic: Turbomole 6.0 and dscf/grad/ridft/rdgrad server process CPU usage  (Read 7556 times)


  • Sr. Member
  • ****
  • Posts: 216
  • Karma: +1/-0

While running some test calculations with Turbomole 6.0, I noticed that the server process of parallel dscf/grad/ridft/rdgrad calculations behaves somewhat differently than in versions 5.9.1 and 5.10. With the older versions we were able to prevent the server process from using too much CPU time on SMP machines by adding the options "-intra=nic -e MPI_FLAGS=y0" to all mpirun_scripts. However, with version 6.0 these options no longer seem to help, leading to 100% CPU consumption by the server process.
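For reference, the old workaround amounted to something like the sketch below (the variable name and the dscf_mpi binary name are illustrative placeholders, not taken from the actual Turbomole scripts):

```shell
# Sketch of the pre-6.0 workaround: pass HP-MPI options that make the
# server process yield the CPU instead of busy-waiting.
# EXTRA_MPI_OPTS and dscf_mpi are illustrative names, not Turbomole's own.
EXTRA_MPI_OPTS="-intra=nic -e MPI_FLAGS=y0"   # y0: always yield, never spin
MPIRUN_CMD="mpirun $EXTRA_MPI_OPTS -np 2 dscf_mpi"
echo "$MPIRUN_CMD"
```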

Below is one (short) set of parallel run statistics, reported with the MPI_FLAGS=T option, to illustrate the nature of the problem. This was a fully direct dscf run that took 10 SCF iterations to converge. The mpirun options in this case were -e MPI_FLAGS=y0,T -intra=nic -np 2.

MPI Rank        User (seconds)      System (seconds)
    0                    39.63                102.49
    1                   138.13                  4.15
    2                   138.22                  4.01
              ----------------      ----------------
Total:                  315.98                110.65

It seems that the master process is mostly accumulating system time. The behavior of grad, ridft, and rdgrad is practically identical to that of dscf: the master process consumes 100% CPU time, and about 70% of this is system time.
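For what it's worth, the "about 70%" figure follows directly from the rank-0 numbers in the table above (just arithmetic, shown here with awk):

```shell
# System-time share of the rank-0 (server) process in the 6.0 run above.
MASTER_USER=39.63
MASTER_SYS=102.49
SHARE=$(awk -v u="$MASTER_USER" -v s="$MASTER_SYS" \
        'BEGIN { printf "%.0f", 100 * s / (u + s) }')
echo "rank 0 system-time share: ${SHARE}%"
```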

When running the same calculation with Turbomole 5.10, the master process uses practically no CPU time at all. Below are the statistics of an identical test run with version 5.10 (a fully direct dscf calculation, mpirun options -e MPI_FLAGS=y0,T -intra=nic -np 2).

MPI Rank        User (seconds)      System (seconds)
    0                     0.68                  0.67
    1                   127.03                  2.69
    2                   125.98                  2.16
              ----------------      ----------------
Total:                  253.69                  5.52

Both calculations were run on the same machine (4-core Intel EM64T SMP with three cores available for the test calculation).
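Comparing the totals of the two tables gives a rough measure of the extra cost under 6.0 (simple arithmetic on the numbers above; the server's busy-waiting accounts for most of the difference):

```shell
# Total CPU time (user + system) for the two runs above.
awk 'BEGIN {
  t60  = 315.98 + 110.65   # Turbomole 6.0 run
  t510 = 253.69 + 5.52     # Turbomole 5.10 run
  printf "6.0 uses %.0f%% more total CPU time than 5.10\n", 100 * (t60 / t510 - 1)
}'
```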

Does anybody have similar experiences with parallel Turbomole 6.0 calculations? We are investigating the issue with Turbomole support, but it would also be nice to hear ideas or suggestions from other users.


  • Newbie
  • *
  • Posts: 6
  • Karma: +0/-0
Re: Turbomole 6.0 and dscf/grad/ridft/rdgrad server process CPU usage
« Reply #1 on: March 22, 2009, 11:57:24 AM »
I have the same problem here (EM64T version) and have already informed the support team.
At the moment I am using version 5.9, since it is faster than both 5.10 and 6.0 for parallel runs.


  • Jr. Member
  • **
  • Posts: 16
  • Karma: +0/-0
Re: Turbomole 6.0 and dscf/grad/ridft/rdgrad server process CPU usage
« Reply #2 on: March 26, 2009, 03:58:10 PM »
I have experienced the same issue. For v5.10, I used the mpirun options/flags you mentioned and was able to control the CPU usage as you described. However, this is not working for v6.0.