Intel® MPI Library

Native and IPM Statistics

The statistics in each supported format can be collected separately. To collect statistics in all formats with the maximum level of detail, set the I_MPI_STATS environment variable as follows:

I_MPI_STATS=all
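
For example, to collect the full statistics for an application run (the application name ./myprog and the process count are placeholders):

$ export I_MPI_STATS=all
$ mpirun -n 4 ./myprog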

Note

The I_MPI_STATS_SCOPE environment variable is not applicable when both types of statistics are collected.

Interoperability with OpenMP API

I_MPI_PIN_DOMAIN

Intel® MPI Library provides an additional environment variable to control process pinning for hybrid MPI/OpenMP* applications. This environment variable defines a number of non-overlapping subsets (domains) of logical processors on a node, together with a set of rules for binding MPI processes to these domains: one MPI process per domain.
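
For example, a minimal sketch of a hybrid launch, where the omp value sizes each domain by OMP_NUM_THREADS; the application name ./myprog and the counts are placeholders:

$ export OMP_NUM_THREADS=4
$ export I_MPI_PIN_DOMAIN=omp
$ mpirun -n 2 ./myprog

With this setting, each of the two MPI processes is pinned to its own domain of four logical processors, and its OpenMP* threads run within that domain.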

Cross-OS Launch Mode

Intel® MPI Library provides support for a heterogeneous Windows*-Linux* environment. This means that a single MPI job, launched with the Hydra process manager, can span nodes running Windows* and nodes running Linux*.

To run a mixed Linux-Windows MPI job, do the following:

  1. Make sure the Intel MPI Library is installed and operable, and that the product versions match on all nodes.

  2. On the Windows hosts, make sure the Hydra service is running.
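
You can check this with the standard Windows* sc utility; the service name hydra_service below is an assumption and may differ between product versions:

> sc query hydra_service

Once all nodes are prepared, start the mixed job from the Linux side. The following is a minimal sketch, assuming your Hydra version supports the -bootstrap service and -hostos options; the host names, paths, and process counts are placeholders:

$ mpirun -demux select -bootstrap service \
    -hostos windows -host win_host -n 2 app.exe : \
    -host lin_host -n 2 ./app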

Installing Intel® MPI Library

If you have a previous version of the Intel® MPI Library for Windows* OS installed, you do not need to uninstall it before installing a newer version.

To install Intel MPI Library, double-click the distribution file w_mpi_p_<version>.<package_num>.exe (complete product) or w_mpi-rt_p_<version>.<package_num>.exe (RTO component only).

DDT* Debugger

You can debug MPI applications using the Allinea* DDT* debugger. Intel does not provide support for this debugger; obtain support from Allinea*. According to the DDT documentation, DDT supports the Express Launch feature for the Intel® MPI Library, so you can debug your application as follows:

$ ddt mpirun -n <# of processes> [<other mpirun arguments>] <executable>

If you have issues with the DDT debugger, refer to the DDT documentation for help.

Using -gtool for Debugging

The -gtool runtime option can help with debugging when you need to attach to several processes at once. Instead of attaching to each process individually, you can specify all of them in a single command line. For example:

$ mpirun -n 16 -gtool "gdb:3,5,7-9=attach" ./myprog

The command line above attaches the GNU* Debugger (GDB*) to processes 3, 5, 7, 8 and 9.

IPM Statistics

To enable integrated performance monitoring (IPM) statistics collection, set I_MPI_STATS to ipm or ipm:terse.

The I_MPI_STATS_BUCKETS environment variable is not applicable for the IPM format. The I_MPI_STATS_ACCURACY environment variable is available to control extra functionality.
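
For example, to collect IPM statistics in the condensed form (./myprog is a placeholder):

$ export I_MPI_STATS=ipm:terse
$ mpirun -n 4 ./myprog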

I_MPI_STATS

Control the statistics data output format.
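
Based on the values used throughout this section, typical settings look like this:

I_MPI_STATS=all
I_MPI_STATS=ipm
I_MPI_STATS=ipm:terse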
