Author Topic: Speeding up NumForce with Parallel Single Points?  (Read 35102 times)

Dempsey

  • Jr. Member
  • **
  • Posts: 20
  • Karma: +0/-0
Speeding up NumForce with Parallel Single Points?
« on: October 05, 2024, 10:39:35 PM »
Dear Users,

I am looking for ways to speed up my NumForce calculation. I have 402 displacements and thus 402 single point energy calculations to run. I can see these single points are running serially in the ./numforce/KraftWerk directory.

Is there a setting that allows me to decide how many of these single points run in parallel? Or perhaps set how many cores work on each single point in a NumForce calculation? I suppose that running more calculations in parallel, but with fewer cores each, may not result in any speed-up at all. Nevertheless, if it is an option I will test it; please let me know your thoughts.

I am also aware that frequency calculations tend to run faster with more memory. Is this mainly something that applies to aoforce, or could increasing $maxcor and/or $ricore help NumForce too? Is there a way to tell from the output whether memory is limiting me?

Thanks,
Dempsey

uwe

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 612
  • Karma: +0/-0
Re: Speeding up NumForce with Parallel Single Points?
« Reply #1 on: October 06, 2024, 01:16:44 AM »
Hi,

if you are using a reasonably recent version of Turbomole (not older than 5 or 6 years), the SMP parallel version of NumForce will by default run on as many cores as you specify with $PARNODES. So there is nothing extra you have to set: just the usual PARA_ARCH=SMP and PARNODES=<number-of-cores> to run in parallel on one machine.
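For a single-node run, that setup is just two environment variables before starting the calculation (the core count of 16 below is only an example; pick the core count of your machine):

```shell
# SMP setup for a single machine (example values):
export PARA_ARCH=SMP   # use the shared-memory parallel binaries
export PARNODES=16     # number of cores NumForce may distribute the single points over
```

With these set, start NumForce as usual and the displaced single-point jobs are spread over the available cores.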

If you want to run NumForce on a cluster with several different nodes, create a file with the names of the hosts you want to run the NumForce single-point jobs on, e.g.:

linux1
linux1
linux1
linux2
linux2
etc.

with a separate line for each core. Most queuing systems provide that information in an environment variable.
Then use this file as input for the -mfile option of NumForce.
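As an illustration (the host names and core counts are hypothetical), such a machine file can be generated with a short shell loop, one line per core:

```shell
# Hypothetical cluster allocation: 2 nodes named linux1/linux2, 4 cores each.
# The machine file gets one line per core, as described above.
nodes="linux1 linux2"
cores_per_node=4
: > machines.txt              # truncate/create the machine file
for n in $nodes; do
  i=1
  while [ "$i" -le "$cores_per_node" ]; do
    echo "$n" >> machines.txt
    i=$((i + 1))
  done
done
```

The run would then be started with NumForce -mfile machines.txt plus your usual options.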

Run NumForce -help to see a list of available options.

ccamacho

  • Newbie
  • *
  • Posts: 1
  • Karma: +0/-0
Re: Speeding up NumForce with Parallel Single Points?
« Reply #2 on: September 10, 2025, 05:34:52 PM »
Following up on this question because I think I am missing something. If I have 5 nodes with 64 cores per node and I choose to run the sequential version of TURBOMOLE, does that mean NumForce will generate 320 different geometries and run all of them at the same time? Going through the NumForce script, I got the impression that it actually takes the information from the queuing system by default and generates all the inputs by itself. Is this correct, or am I missing something? Because when I submit a job, instead of running 320 geometries at the same time, it only runs 1 geometry on 1 core.

# use nodefile supplied by PBS as default in parallel runs
if [ -n "$PBS_NODEFILE" ]; then # use pbs env variable to set mfile
  mfile=$PBS_NODEFILE
elif [ -n "${SLURM_JOB_ID}" ] && [ -z ${HOSTS_FILE} ] ; then # use parsed slurm-nodelist
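To make the default selection above easier to follow, here is a standalone sketch (not the actual NumForce code; the SLURM-branch placeholder string is hypothetical) of the branch logic quoted from the script:

```shell
# Sketch of the machine-file selection quoted above (illustrative, not the real script):
# 1) a PBS nodefile wins, 2) otherwise a SLURM job without an explicit HOSTS_FILE
#    gets a list parsed from the SLURM nodelist, 3) otherwise HOSTS_FILE is used,
# 4) with nothing set, no machine file is chosen and the job runs locally.
pick_mfile() {
  if [ -n "$PBS_NODEFILE" ]; then
    echo "$PBS_NODEFILE"
  elif [ -n "$SLURM_JOB_ID" ] && [ -z "$HOSTS_FILE" ]; then
    echo "parsed-slurm-nodelist"   # placeholder for the list built from SLURM variables
  elif [ -n "$HOSTS_FILE" ]; then
    echo "$HOSTS_FILE"
  else
    echo ""                        # no machine file: single points run on the local host
  fi
}
```

So with neither PBS_NODEFILE nor the SLURM variables visible to the script (or with a serial binary), NumForce has no per-core host list to distribute over, which would match the one-geometry-on-one-core behaviour you observe.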
« Last Edit: September 10, 2025, 06:10:02 PM by ccamacho »