Running an MPI/OpenMP* Program
- Make sure the thread-safe Intel® MPI Library configuration (debug or release, as desired) is enabled; release is the default. To switch configurations, source mpivars.[c]sh with the appropriate argument (see Selecting Library Configuration for details). For example:
  $ source mpivars.sh release
- Set the I_MPI_PIN_DOMAIN environment variable to specify the desired process pinning scheme. The recommended value is omp:
  $ export I_MPI_PIN_DOMAIN=omp
  This sets the size of each process pinning domain equal to OMP_NUM_THREADS. For example, if OMP_NUM_THREADS is equal to 4, each MPI process can create up to four threads within the corresponding domain (set of logical processors). If OMP_NUM_THREADS is not set, each node is treated as a separate domain, which allows as many threads per MPI process as there are cores.
- Note: To pin OpenMP* threads within the domain, use the Intel® compiler's KMP_AFFINITY environment variable. See the Intel compiler documentation for more details.
- Run your hybrid program as a regular MPI program. You can set the OMP_NUM_THREADS and I_MPI_PIN_DOMAIN variables directly in the launch command. For example:
  $ mpirun -n 4 -genv OMP_NUM_THREADS=4 -genv I_MPI_PIN_DOMAIN=omp ./myprog
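The steps above can be collected into one launch script. This is a minimal sketch, not a definitive recipe: the mpivars.sh location depends on your installation, and ./myprog stands in for your own hybrid binary. The echo line simply confirms the environment that the launcher would inherit.

```shell
# Sketch of a hybrid MPI/OpenMP launch, assuming an Intel MPI install
# whose mpivars.sh is already on a known path, and a binary ./myprog.

# 1. Enable the thread-safe release configuration (path is installation-specific):
# source mpivars.sh release

# 2. Pinning: one domain per MPI process, sized by OMP_NUM_THREADS.
export I_MPI_PIN_DOMAIN=omp
export OMP_NUM_THREADS=4

# 3. Launch as a regular MPI job: 4 ranks, up to 4 OpenMP threads each.
# mpirun -n 4 ./myprog

# Confirm the environment the ranks would inherit:
echo "I_MPI_PIN_DOMAIN=$I_MPI_PIN_DOMAIN OMP_NUM_THREADS=$OMP_NUM_THREADS"
```

Exporting the variables in the script is equivalent to passing them per-run with -genv, as in the mpirun example above; use whichever fits your job scripts.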