The Intel® MPI Library lets you create, maintain, and test advanced applications that achieve performance advantages on high-performance computing (HPC) clusters based on Intel® processors.
The Intel MPI Library is available as a standalone product and as part of the Intel® oneAPI HPC Toolkit.
The Intel® MPI Library is a multi-fabric message-passing library that implements the Message Passing Interface, version 3.1 (MPI-3.1) specification. Use the library to develop applications that can run on multiple cluster interconnects.
The Intel MPI Library has the following features:
Scalability up to 340k processes
Low overhead enables analysis of large amounts of data
MPI tuning utility for accelerating your applications
Interconnect independence and flexible runtime fabric selection
The product consists of the following main components:
Compilation tools, including compiler drivers such as mpiicc and mpifort
Include files and modules
Shared (.so) and static (.a) libraries, debug libraries, and interface libraries
Process Manager and tools to run programs
Documentation provided as a separate package or available from the Intel Developer Zone
The Intel MPI Library has the following major features:
MPI-1, MPI-2.2 and MPI-3.1 specification conformance
C, C++, Fortran* 77, Fortran 90, and Fortran 2008 language bindings
Before you start using Intel MPI Library, make sure to complete the following steps:
1. Source the environment script to set the environment variables for the Intel MPI Library. By default, the script is located in the installation directory.
2. Create a hostfile: a text file that lists the nodes in the cluster, one host name per line. For example:
clusternode1
clusternode2
3. Make sure a passwordless SSH connection is established among all nodes of the cluster. This ensures proper communication of MPI processes among the nodes.
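Steps 2 and 3 above can be checked from the shell. The snippet below writes a hostfile using the example node names that appear later in this guide; substitute your own node names:

```shell
# Write one host name per line (example names; replace with your nodes).
printf 'clusternode1\nclusternode2\n' > ./hostfile
cat ./hostfile
```

To verify step 3, run ssh <node> hostname against every node from every other node and confirm that it completes without a password prompt.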
After completing these steps, you are ready to use the Intel MPI Library.
For detailed system requirements, see the “System Requirements” section in the Release Notes.
Building and Running MPI Programs
1. Make sure you have a compiler in your PATH. To check this, run the which command on the desired compiler. For example:
$ which icc
/opt/intel/oneapi/compiler/<
2. Compile a test program using the appropriate compiler driver. For example:
$ mpiicc -o myprog <
3. Use the previously created hostfile and run your program with the mpirun command as follows:
$ mpirun -n <# of processes> -ppn <# of processes per node> -f ./hostfile ./myprog
For example:
$ mpirun -n 2 -ppn 1 -f ./hostfile ./myprog
The test program above produces output in the following format:
Hello world: rank 0 of 2 running on clusternode1
Hello world: rank 1 of 2 running on clusternode2
This output indicates that you properly configured your environment and that the Intel MPI Library successfully ran the test MPI program on the cluster.
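For reference, a minimal MPI program that prints output in this format can be sketched as follows. This is an illustrative sketch, not the exact test source shipped with the library:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    /* Initialize the MPI environment. */
    MPI_Init(&argc, &argv);

    /* Query this process's rank, the total number of processes,
     * and the name of the node it is running on. */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);

    printf("Hello world: rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```

Compile it with mpiicc and launch it with mpirun as shown above; each rank prints one line.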
If you encounter problems when using the Intel MPI Library, go through the following general procedures to troubleshoot them:
Check system requirements, known issues, and limitations in the Release Notes.
Check host accessibility. Run a simple non-MPI application (for example, the hostname utility) on the problem hosts with mpirun. This check helps you reveal an environment problem (for example, SSH is not configured properly) or a connectivity problem (for example, unreachable hosts).
Run the MPI application with debug information enabled. To enable debug information, set the environment variable I_MPI_DEBUG=6. You can also set a different debug level to get more detailed information. This helps you identify the problem component.
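For example, assuming a bash shell, the debug level can be set once for the session and then applies to every subsequent mpirun invocation (myprog is the binary built in the earlier steps):

```shell
# Enable Intel MPI debug output; higher values print more detail.
export I_MPI_DEBUG=6
echo "$I_MPI_DEBUG"
```

Alternatively, set it for a single run only: $ I_MPI_DEBUG=6 mpirun -n 2 -ppn 1 -f ./hostfile ./myprog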