Launching several MPI processes on multicore nodes


Hi everyone,

I have a simple issue, which must have a solution. Is it possible to assign several MPI processes to several nodes such that the first MPI process occupies a full node, while the other MPI processes are distributed across the cores of the other nodes?

I have an example below:

On a cluster with 4 cores per node, to assign 2 MPI processes to 2 nodes I do the following:

#PBS -l nodes=2:ppn=4

mpirun -pernode -np 2 ./hybprog

The question is how to assign 8 MPI processes to 3 nodes such that the first MPI process occupies the first node, while the other 7 MPI processes are distributed across 7 cores of the other two nodes.

Best Regards,







This is not the best forum for such a question. I suspect you will need to enumerate the host/process placement explicitly in your mpirun or mpiexec command and ask your site's PBS expert.

If you want to ask specifically about your version of PBS in combination with a specific implementation of mpiexec or mpirun, you might start with the appropriate FAQ, for example the one for OSU mpiexec; there are help forums associated with those tools.

Questions about Intel MPI could be asked on the companion HPC/cluster forum.

I am not too familiar with PBS, but I did something similar with SLURM several years ago. I learned how to do this by reading the SLURM documentation, where I found that it is possible to assign MPI processes to sockets and cores within specific nodes. I did have to work with the SLURM admins to enable a few settings that had not been configured initially. Hopefully you can also do this with PBS.
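
For reference, a minimal SLURM sketch of that kind of placement, assuming the "arbitrary" task distribution is enabled on your cluster (node names below are placeholders for whatever the scheduler gives you):

# one node name per MPI rank: rank 0 alone on node1, ranks 1-4 on node2, ranks 5-7 on node3
$ cat myhosts
node1
node2
node2
node2
node2
node3
node3
node3

$ export SLURM_HOSTFILE=./myhosts
$ srun --ntasks=8 --distribution=arbitrary ./hybprog

Whether rank 0 really has node1 to itself then depends on the allocation (e.g. requesting the nodes exclusively), but the rank-to-node mapping itself is controlled by the file.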


-- Rashawn

Best Reply


It might be interesting to others, so I write the solution below:

       create a hostfile and a rankfile with an explicit distribution of the processes among the cores of the available cluster nodes.
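
For completeness, a minimal sketch of what that can look like with Open MPI (which provides the -pernode and -rf/--rankfile options used here); the node names are placeholders for the hosts PBS actually assigns, so in practice you may need to generate these files from $PBS_NODEFILE inside the job script:

# hostfile (myhostfile): the three allocated nodes, 4 slots each
node1 slots=4
node2 slots=4
node3 slots=4

# rankfile (myrankfile): rank 0 gets all four cores of node1,
# the remaining 7 ranks get one core each on node2 and node3
rank 0=node1 slot=0-3
rank 1=node2 slot=0
rank 2=node2 slot=1
rank 3=node2 slot=2
rank 4=node2 slot=3
rank 5=node3 slot=0
rank 6=node3 slot=1
rank 7=node3 slot=2

# job script
#PBS -l nodes=3:ppn=4
mpirun -np 8 -hostfile myhostfile -rf myrankfile ./hybprog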

Best regards,

