Intel® MPI Library is a multi-fabric message passing library that implements the Message Passing Interface, version 3.1 (MPI-3.1) specification. Use the library to develop applications that can run on multiple cluster interconnects.
The Intel® MPI Library has the following features:
- High scalability
- Low overhead, which enables analyzing large amounts of data
- MPI tuning utility for accelerating your applications
- Interconnect independence and flexible runtime fabric selection
Intel® MPI Library is available as a standalone product and as part of the Intel® Parallel Studio XE Cluster Edition.
The product comprises the following main components:
- Runtime Environment (RTO) includes the tools you need to run programs, including the Hydra process manager, supporting utilities, shared (.so) libraries, and documentation.
- Software Development Kit (SDK) includes all of the Runtime Environment components plus compilation tools, including compiler wrappers such as mpiicc, include files and modules, static (.a) libraries, debug libraries, and test codes.
Besides the SDK and RTO components, Intel® MPI Library also includes Intel® MPI Benchmarks, which enable you to measure MPI operations on various cluster architectures and MPI implementations. For details, see the Intel® MPI Benchmarks User's Guide.
Before you start using Intel® MPI Library, make sure to complete the following steps:
- Source the mpivars.[c]sh script to establish the proper environment settings for the Intel® MPI Library. The script resides under the Intel MPI Library installation directory, referred to as <installdir_MPI>.
- Create a hostfile text file that lists the nodes in the cluster, using one host name per line. For example:

  clusternode1
  clusternode2
- Make sure the passwordless SSH connection is established among all nodes of the cluster. This ensures proper communication of MPI processes among the nodes. To establish the connection, you can use the sshconnectivity.exp script located at <installdir>/parallel_studio_xe_<version>.<update>.<package>/bin.
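Taken together, the setup steps above can be sketched as a short shell session. The mpivars.sh path and the node names below are illustrative assumptions, not fixed values; substitute your actual <installdir_MPI> and cluster hosts.

```shell
# Sketch of the pre-run setup; adjust paths and host names to your cluster.
# The mpivars.sh location below is an assumed example, not a guaranteed path.
if [ -f /opt/intel/mpi/intel64/bin/mpivars.sh ]; then
    . /opt/intel/mpi/intel64/bin/mpivars.sh   # establish Intel MPI environment
fi

# Create a hostfile with one host name per line (names are examples).
printf 'clusternode1\nclusternode2\n' > hostfile

# List the passwordless-SSH checks you would run against each node
# (echoed rather than executed, so the sketch is safe without a cluster).
while read -r node; do
    echo "would check: ssh -o BatchMode=yes $node true"
done < hostfile
```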
After completing these steps, you are ready to use Intel® MPI Library.
For detailed system requirements, see the System Requirements section in Release Notes.
Building and Running MPI Programs
Compiling an MPI program
If you have the SDK component installed, you can build your MPI programs with Intel® MPI Library. Do the following:
Make sure you have a compiler in your PATH. To check this, run the which command on the desired compiler. For example:
$ which icc
/opt/intel/compilers_and_libraries_2017.<update>.<package#>/linux/bin/intel64/icc
Compile a test program using the appropriate compiler wrapper. For example, for a C program:
$ mpiicc -o myprog <installdir>/test/test.c
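The compile step can be wrapped in a small guard so it degrades gracefully on machines without the SDK; this is a sketch, with <installdir> left as a placeholder for your actual installation directory.

```shell
# Compile the bundled test program with the C wrapper, if the SDK is available.
# <installdir> is a placeholder -- replace it with your Intel MPI directory.
if command -v mpiicc >/dev/null 2>&1; then
    mpiicc -o myprog "<installdir>/test/test.c"
    status="compiled"
else
    status="mpiicc not found; source mpivars.sh from the SDK first"
fi
echo "$status"
```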
Running an MPI program
Use the mpirun command to run your program. Pass the previously created hostfile with the -f option to launch the program on the specified nodes:
$ mpirun -n <# of processes> -ppn <# of processes per node> -f ./hostfile ./myprog
The test program above produces output in the following format:
Hello world: rank 0 of 2 running on clusternode1
Hello world: rank 1 of 2 running on clusternode2
This output indicates that you properly configured your environment and Intel® MPI Library successfully ran the test MPI program on the cluster.
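The -n and -ppn options in the command above are related: the total process count is the number of nodes times the processes per node. A small sketch of that arithmetic, with the launch command echoed rather than executed since a real run needs a configured cluster (the counts are illustrative):

```shell
# Derive the total process count from node count and per-node processes.
NODES=2      # number of entries in ./hostfile (illustrative)
PPN=1        # processes per node (illustrative)
NPROCS=$((NODES * PPN))

# Echo the launch command instead of running it; a real run needs MPI installed.
echo "mpirun -n $NPROCS -ppn $PPN -f ./hostfile ./myprog"
```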
Intel® MPI Library has the following major features:
- MPI-1, MPI-2.2 and MPI-3.1 specification conformance
- Support for any combination of the following interconnection fabrics:
- Shared memory
- Network fabrics with tag matching capabilities through the Tag Matching Interface (TMI), such as Intel® True Scale Fabric, InfiniBand*, Myrinet* and other interconnects
- Native InfiniBand* interface through OFED* verbs provided by Open Fabrics Alliance* (OFA*)
- OpenFabrics Interface* (OFI*)
- RDMA-capable network fabrics through DAPL*, such as InfiniBand* and Myrinet*
- Sockets, for example, TCP/IP over Ethernet*, Gigabit Ethernet*, and other interconnects
- Support for the following MPI communication modes related to Intel® Xeon Phi™ coprocessor:
- Communication inside the Intel® Xeon Phi™ coprocessor
- Communication between the Intel® Xeon Phi™ coprocessor and the host CPU inside one node
- Communication between the Intel® Xeon Phi™ coprocessors inside one node
- Communication between Intel® Xeon Phi™ coprocessors and host CPUs across several nodes
- (SDK only) Support for Intel® 64 architecture and Intel® MIC Architecture clusters using:
- Intel® C++ Compiler version 14.0 and higher
- Intel® Fortran Compiler version 14.0 and higher
- GNU* C, C++ and Fortran 95 compilers
- (SDK only) C, C++, Fortran* 77, Fortran 90, and Fortran 2008 language bindings
- (SDK only) Dynamic linking
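Runtime selection among the fabrics listed above is typically controlled through the I_MPI_FABRICS environment variable, set as an intranode:internode pair. Which values are valid depends on the fabrics installed on your cluster, so the pairs below are examples rather than a definitive list.

```shell
# Select shared memory within a node and OFI between nodes (example values).
export I_MPI_FABRICS=shm:ofi
echo "I_MPI_FABRICS=$I_MPI_FABRICS"

# Other example pairs, depending on the fabrics available on your cluster:
#   shm:dapl  - shared memory + DAPL-capable fabric (e.g., InfiniBand*)
#   shm:tmi   - shared memory + Tag Matching Interface fabric
#   shm:tcp   - shared memory + TCP/IP sockets over Ethernet*
```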
If you encounter problems when using Intel® MPI Library, use the following general procedures to troubleshoot them:
- Check system requirements and known issues in Release Notes.
- Check host accessibility. Run a simple non-MPI application (for example, the hostname utility) on the problem hosts with the mpirun utility. This check helps you reveal an environment problem (for example, SSH is not configured properly) or a connectivity problem (for example, unreachable hosts).
- Run the MPI application with debug information enabled. To enable the debug information, set the environment variable I_MPI_DEBUG=6. You can also set a different debug level to get more detailed information. This action helps you identify the problem component.
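The two checks above can be combined into a short session. The mpirun invocations are guarded so the sketch is safe to paste on a machine where MPI is not installed; the hostfile and program names are the ones used earlier in this document.

```shell
# 1. Check host accessibility by running a non-MPI program (hostname) via mpirun.
# 2. Re-run the application with I_MPI_DEBUG=6 to identify the failing component.
export I_MPI_DEBUG=6   # higher levels print progressively more detail

if command -v mpirun >/dev/null 2>&1; then
    mpirun -n 2 -f ./hostfile hostname     # reveals SSH/connectivity problems
    mpirun -n 2 -f ./hostfile ./myprog     # debug output identifies the component
else
    echo "mpirun not found; source mpivars.sh first"
fi
```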
See more details in the Troubleshooting section of the Developer Guide.