Intel® MPI Library

Controlling Process Placement

Placement of MPI processes over the cluster nodes plays a significant role in application performance. Intel® MPI Library provides several options to control process placement.

By default, when you run an MPI program, the process manager launches all MPI processes specified with -n on the current node. If you use a job scheduler, processes are assigned according to the information received from the scheduler.
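For example, placement can be controlled explicitly with the -ppn (processes per node) and -hosts options of mpirun. A hedged sketch, where node1, node2, and ./myprog are placeholder names for your cluster nodes and application:

```shell
# Run 8 processes total, 4 per node, on the listed hosts
# (node1, node2, and ./myprog are placeholder names)
$ mpirun -n 8 -ppn 4 -hosts node1,node2 ./myprog

# Alternatively, read the host list from a machine file,
# where each line has the form <hostname>:<processes>
$ mpirun -n 8 -machinefile machines ./myprog
```

When a job scheduler is in use, the scheduler's host list typically takes precedence over options such as -hosts.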

Selecting Fabrics

Intel® MPI Library enables you to select a communication fabric at runtime without recompiling your application. By default, it automatically selects the most appropriate fabric based on your software and hardware configuration, so in most cases you do not need to select a fabric manually.
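As an illustration, the automatic selection can be overridden with the I_MPI_FABRICS environment variable. The value shm:ofi below is one common intra-node/inter-node combination; the set of available values depends on the library version, and ./myprog is a placeholder application:

```shell
# Use shared memory within a node and OFI between nodes
# (syntax: I_MPI_FABRICS=<intra-node fabric>:<inter-node fabric>)
$ export I_MPI_FABRICS=shm:ofi
$ mpirun -n 4 ./myprog
```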


If you have a previous version of the Intel® MPI Library for Linux* OS installed, you do not need to uninstall it before installing a newer version.

Extract the l_mpi[-rt]_p_<version>.<package_num>.tar.gz package using the following command:

$ tar -xvzf l_mpi[-rt]_p_<version>.<package_num>.tar.gz

This command creates the subdirectory l_mpi[-rt]_p_<version>.<package_num>.


Intel® MPI Library supports the following debuggers for debugging MPI applications: GDB*, TotalView*, and Allinea* DDT. Before using a debugger, make sure you have the application debug symbols available. To generate debug symbols, compile your application with the -g option.
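For instance, assuming the mpiicc compiler wrapper and a placeholder source file test.c, a debug build and a GDB* session can be set up as follows (the -gdb launch option may vary by library version):

```shell
# Compile with debug symbols (test.c and test_debug are placeholder names)
$ mpiicc -g test.c -o test_debug

# Launch the MPI job under GDB via mpirun's -gdb option
$ mpirun -gdb -n 4 ./test_debug
```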

The following topics explain how to debug MPI applications with each of these debuggers:


This section provides the following troubleshooting information:

  • General Intel® MPI Library troubleshooting procedures
  • Typical MPI failures with corresponding output messages and behavior when a failure occurs
  • Recommendations on potential root causes and solutions