TURBOMOLE Users Forum

Title: parallel run problem with MPI
Post by: Xiaoyan on May 20, 2014, 09:47:16 AM
I am optimizing the geometry of a system with 320 atoms at the DFT level. It works well with both MPI and SMP when I use the def2-SVP basis set. When I increased the basis set to def2-TZVP and optimized the geometry using MPI, the DSCF, GRAD, and STATPT calculations for the initial geometry ran properly, but then DSCF stopped on the new coordinates without any error message.

On the other hand, the same job runs without problems when I use SMP.

Does anybody know the reason?
Title: Re: parallel run problem with MPI
Post by: uwe on July 09, 2014, 03:15:34 PM
Hi,

the MPI version starts its processes in a different way than the SMP version does. If you set user limits such as the stack size in your shell (for example, ulimit -s unlimited), this is often not sufficient, because the MPI processes can end up running with the lower system defaults.
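
A quick way to check this is to compare the soft and hard limits in the shell that launches the job (a minimal sketch for a typical Linux system; how these limits propagate to remote MPI processes depends on your MPI installation and queuing system):

  ulimit -s            # current soft stack-size limit of this shell
  ulimit -Hs           # hard limit imposed by the system
  ulimit -s unlimited  # raise the soft limit for this shell and its children

Note that this only affects the current shell and its children; processes started on other nodes usually pick up the system defaults, which on many Linux systems are configured in /etc/security/limits.conf.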

Did you also check the master output file to see whether there are error messages in it?
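
If it is not obvious where the messages end up, a generic search over the output files in the run directory can help (the file names below are only examples and depend on how the job was started):

  grep -il "error" job.* *.out 2>/dev/null   # list files that mention "error"

job.<cycle> and job.last are the usual jobex output files; adjust the pattern to your own setup.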

Uwe
Title: Re: parallel run problem with MPI
Post by: Xiaoyan on November 11, 2014, 09:38:24 AM
Dear Uwe,

Thank you for your reply. My problem was solved by using TURBOMOLE 6.6 instead of TURBOMOLE 6.3.

Kind Regards
Xiaoyan