MPI Rank Binding

Hello all,

Intel MPI 4.1.3 on RHEL 6.4: I am trying to bind ranks in two simple fashions: (a) 2 ranks to the same processor socket, and (b) 2 ranks to different processor sockets.

Looking at the Intel MPI Reference Manual (3.2 Process Pinning, pp. 98+), we should be able to use the following options with mpiexec.hydra when the hostfile points to the same host:

-genv I_MPI_PIN 1  -genv I_MPI_PIN_PROCESSOR_LIST all:bunch
-genv I_MPI_PIN 1  -genv I_MPI_PIN_PROCESSOR_LIST all:scatter
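For reference, a full invocation might look like the sketch below (the program name `./a.out`, the rank count, and the hostfile are placeholders for illustration). Setting I_MPI_DEBUG to 4 or higher makes Intel MPI print the pinning map at startup, which is the easiest way to see where the ranks actually landed:

```shell
# Sketch: launch 2 ranks on one host with the scatter placement
# and ask Intel MPI to report the resulting pinning.
mpiexec.hydra -n 2 -f hostfile \
    -genv I_MPI_PIN 1 \
    -genv I_MPI_PIN_PROCESSOR_LIST all:scatter \
    -genv I_MPI_DEBUG 4 \
    ./a.out
```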


Unfortunately, the "scatter" option STILL binds both MPI ranks to the same socket.

Should I be using "I_MPI_PIN_DOMAIN" instead?

Any suggestions?


Thanks, Michael


Can you try using these options instead:


"-genv I_MPI_PIN_DOMAIN=core -genv I_MPI_PIN_ORDER=compact"



"-genv I_MPI_PIN_DOMAIN=core -genv I_MPI_PIN_ORDER=scatter"

