Author Topic: parallel run problem with MPI  (Read 8602 times)

Xiaoyan

  • Newbie
  • Posts: 5
  • Karma: +0/-0
parallel run problem with MPI
« on: May 20, 2014, 09:47:16 AM »
I am optimizing the geometry of a system with 320 atoms at the DFT level. It works well with both the MPI and SMP versions when I use the def2-SVP basis set. When I increased the basis set to def2-TZVP and optimized the geometry with MPI, the DSCF, GRAD, and STATPT calculations for the initial geometry ran properly, but then DSCF stopped on the new coordinates without any error message.

On the other hand, the same job runs without problems when I use SMP.

Does anybody know the reason?

uwe

  • Global Moderator
  • Hero Member
  • Posts: 560
  • Karma: +0/-0
Re: parallel run problem with MPI
« Reply #1 on: July 09, 2014, 03:15:34 PM »
Hi,

the MPI version starts its processes in a different way than the SMP version does. Setting user limits such as the stack size in your shell (for example, ulimit -s unlimited) is often not sufficient if the system defaults are lower, because the separately started MPI processes can fall back to those lower defaults.
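A minimal shell sketch of how these limits can be checked and raised; the limits.conf entries at the end are an illustrative assumption, since the correct place to raise the system-wide defaults depends on how your cluster launches the MPI processes:

    # Check the current soft and hard stack limits in the login shell
    ulimit -s
    ulimit -Hs

    # Raise the soft stack limit for this shell and the jobs it starts
    ulimit -s unlimited

    # Assumed example: an administrator could raise the system-wide defaults
    # (which remotely started MPI processes typically inherit) with entries
    # like these in /etc/security/limits.conf; the exact mechanism varies:
    #   *   soft   stack   unlimited
    #   *   hard   stack   unlimited

Running ulimit -a inside the batch job itself is a simple way to confirm which limits the MPI processes actually see on the compute nodes.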

Did you also check the master output file to see whether there are any error messages in it?

Uwe

Xiaoyan

  • Newbie
  • Posts: 5
  • Karma: +0/-0
Re: parallel run problem with MPI
« Reply #2 on: November 11, 2014, 09:38:24 AM »
Dear Uwe,

Thank you for your reply. My problem was solved by using Turbomole 6.6 instead of Turbomole 6.3.

Kind Regards
Xiaoyan