Author Topic: HP-MPI and distribution by node  (Read 7345 times)


HP-MPI and distribution by node
« on: July 22, 2013, 10:34:47 AM »
Dear all,

I'm running into memory problems with ridft_huge: I get 4 GB per core, but I need more. Each machine has 64 GB for 16 cores, so I'd like to give each process 8 GB by occupying only half of the cores per machine. To complicate things a little, this has to work with my queueing system. To clarify what I mean, here's an example for Open MPI:

#PBS -l nodes=128
mpirun -np 64 --bynode exe

I'm asking the queueing system for 128 cores, but I'm using only half of them in the mpirun line. The "--bynode" switch distributes the processes round-robin across the nodes instead of filling each node first, so every process can access 8 GB.

Is there something similar for HP-MPI which comes with TURBOMOLE?

Thanks a lot



Re: HP-MPI and distribution by node
« Reply #1 on: July 27, 2013, 05:51:23 PM »
Hi Alex,

I'm not familiar with all PBS implementations, but at least in the more modern ones you can set the number of processes per node with the "ppn" resource option:

# Request 8 machines with 8 processes each (64 slots in total)
#PBS -l nodes=8:ppn=8
mpirun -np 64 ridft_huge

This should work with any MPI implementation, but it might be best to ask your system administrator what the recommended way to start such jobs is. Alternatively, you could request just 64 processors and use the "pmem" resource option to ask for 8 GB per process. Since that is what you really need, it may lead to a more efficient distribution of resources. Also, there are quite a few ways to tune the memory usage of ridft ($ricore, $ricore_slave), and playing with these might reduce the memory requirements.
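A job script combining the two suggestions might look like the sketch below. This is a hypothetical example, not a tested recipe: the pmem syntax follows Torque/PBS conventions, the 8gb value and the $ricore numbers are placeholders you would adapt to your machine, and the launch line simply mirrors the one used earlier in this thread.

```shell
#!/bin/sh
# Hypothetical PBS job script (Torque-style resource syntax, example values).
# Request 64 processors and 8 GB of memory per process:
#PBS -l nodes=64,pmem=8gb

cd $PBS_O_WORKDIR

# As an alternative (or in addition), the RI-integral buffers of ridft can be
# reduced in the control file; $ricore is given in MB, $ricore_slave applies
# to the parallel worker processes, e.g.:
#   $ricore        500
#   $ricore_slave  1

mpirun -np 64 ridft_huge > ridft.out 2>&1
```

With pmem the scheduler itself guarantees the memory per process, so you no longer depend on a particular MPI implementation's placement switches.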