Analyzing MPI Applications

Parallel High Performance Computing (HPC) applications often rely on the multi-node architectures of modern clusters. Performance tuning of such applications must involve analysis of cross-node application behavior as well as single-node performance analysis. Intel® Parallel Studio XE Cluster Edition includes performance analysis tools such as Intel Trace Analyzer and Collector and Intel VTune™ Amplifier that provide important insights for this analysis. For example, message passing interface (MPI) communication hotspots, synchronization bottlenecks, load balancing, and other complex issues can be investigated using Intel Trace Analyzer and Collector. At the same time, VTune Amplifier can be used to understand intra-node performance issues of MPI applications that use fork-join threading through OpenMP* and Intel Threading Building Blocks (Intel TBB).

Use the VTune Amplifier for single-node analysis, including threading analysis, when you start analyzing hybrid codes that combine parallel MPI processes with threading for more efficient exploitation of computing resources. For example, if you use the VTune Amplifier as part of the Intel Parallel Studio XE Cluster Edition, you may use the Intel Trace Analyzer and Collector to identify the hottest MPI function and then use the VTune Amplifier to profile the parallel MPI program distributed across n MPI ranks. The VTune Amplifier helps you identify which instance of the hot function contributed most to the application runtime.


The version of the Intel MPI library included with the Intel Parallel Studio XE Cluster Edition switches to using the Hydra process manager by default for mpirun. This provides high scalability across a large number of nodes.

Follow these basic steps to analyze MPI applications with the VTune Amplifier:

  1. Configure installation for MPI analysis.

  2. Configure and run MPI analysis with the VTune Amplifier.

  3. Resolve symbols for MPI modules.

  4. View collected data.

The following sections provide additional information on MPI analysis.

Configuring Installation for MPI Analysis

For MPI application analysis on a Linux* cluster, you may enable the Per-user Hardware Event-based Sampling mode when installing the Intel Parallel Studio XE Cluster Edition. This option ensures that the VTune Amplifier collects data only for the current user. Once enabled by the administrator during installation, this mode cannot be turned off by a regular user; this is intentional and prevents individual users from observing performance data for the whole node, including the activities of other users.

After installation, source the environment script provided with the product to set up the appropriate environment (PATH, MANPATH) in the current terminal session.
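
For example, assuming a default installation directory (the path below is an assumption; adjust it for your cluster), you can source the amplxe-vars.sh script in a bash shell:

$ source /opt/intel/vtune_amplifier_xe/amplxe-vars.sh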

Configuring MPI Analysis with the VTune Amplifier

To collect performance data for an MPI application with the VTune Amplifier, use the following command:

$ mpirun -n <n> -l amplxe-cl -result-dir my_result -quiet -collect <analysis type> my_app [my_app_options]


  • <n> is the number of MPI processes to run. The VTune Amplifier creates one result directory per compute node in the current directory, named my_result.<hostname1>, my_result.<hostname2>, ... my_result.<hostnameN>, encapsulating the data for all the ranks running on that node in the same directory. This suffix guarantees that multiple VTune Amplifier collections launched in the same directory on different nodes do not overwrite each other's data and can work in parallel. To generate per-rank result directories instead of per-node directories, use the {mpirank} naming template in your result directory name, for example: my_result_{mpirank} (see the example after this list).


    • For hardware event-based sampling analysis types in the driverless mode (when the sampling driver is not installed), the VTune Amplifier can collect data for a single MPI rank only. For example:

      $ mpirun -n 1 amplxe-cl -c advanced-hotspots -r ah -- ./test.x : -n 3 ./test.x

      To run a driverless event-based sampling analysis for several processes on one node, consider using the -analyze-system option, for example:

      $ mpirun -host myhost -n 11 ./a.out : -host myhost -n 1 amplxe-cl -result-dir foo -c advanced-hotspots -analyze-system ./a.out 

      Note that this analysis configuration can collect ITT API data (for example, Task and Frame analysis) and rank information only for the process launched under amplxe-cl.

    • For non-MPICH based MPI implementations, use the -knob trace-mpi option to add a per-node suffix to the result directory name. Alternatively, you can use the {hostname} clause in the result directory name.

  • The -l option of the mpiexec/mpirun tools marks stdout lines with an MPI rank.

  • The -quiet / -q option suppresses diagnostic output such as progress messages.

  • <analysis type> is the analysis type you run with the VTune Amplifier. To view the list of available analysis types, use the amplxe-cl -help collect command.
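
For example, a minimal sketch that combines the options above and produces per-rank result directories via the {mpirank} template (my_app is a placeholder for your MPI binary):

$ mpirun -n 4 -l amplxe-cl -result-dir my_result_{mpirank} -quiet -collect hotspots -- my_app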

To collect data for a subset of MPI processes in the workload, use the per-host syntax of mpirun/mpiexec* and specify different command lines to execute for different processes.

If you use Intel MPI 5.0.2 or later, you can use the -gtool option of the Intel MPI process launcher for easier selective rank profiling:

$ mpirun -genvall -gtool "amplxe-cl -r <my_result> -collect <analysis type>:<rank_set>[=exclusive]" -n <n> <my_app> [my_app_options]

where <rank_set> specifies the range of ranks to be involved in the tool execution. Separate ranks with a comma or use the “-” symbol for a set of contiguous ranks. Use the all value to configure profiling on all the ranks. The exclusive launch mode helps prevent running more than one collection per node, which can be useful for PMU-based profiling. Starting with Intel MPI version 5.0.3, you can use the node-wide clause instead of exclusive to collect data on all ranks of the nodes where the <rank_set> resides, or on all nodes if all ranks are specified. In this case, the VTune Amplifier creates a result directory per node with a host name suffix in the result directory name. This is particularly convenient for PMU-based collections in the driverless mode, where there are limitations on simultaneous profiling by multiple amplxe-cl commands. For example:

$ mpirun -gtool "amplxe-cl -c advanced-hotspots -r my_dir:all=node-wide" -n 4 -ppn 2 my_mpi_app


  1. This example runs the Advanced Hotspots analysis type (based on the sampling driver), which is recommended as a starting point:

    $ mpirun -n 4 amplxe-cl -result-dir my_result -collect advanced-hotspots -- my_app [my_app_options]

  2. This example collects the Advanced Hotspots data for two out of 16 processes in the job distributed across the hosts:

    $ mpirun -host myhost1 -n 8 ./a.out : -host myhost2 -n 6 ./a.out : -host myhost2 -n 2 amplxe-cl -result-dir foo -c advanced-hotspots ./a.out

    As a result, the VTune Amplifier creates a result directory foo.myhost2 in the current directory (given that process ranks 14 and 15 were assigned to the second node in the job).

  3. As an alternative to the previous example, you can create a configuration file with the following content:

    # config.txt configuration file
    -host myhost1 -n 8 ./a.out
    -host myhost2 -n 6 ./a.out
    -host myhost2 -n 2 amplxe-cl -quiet -collect advanced-hotspots -result-dir foo ./a.out

    and run the data collection as:

    $ mpirun -configfile ./config.txt

    to achieve the same result as in the previous example: foo.myhost2 result directory is created.

  4. This example runs the Advanced Hotspots analysis for all ranks on all nodes with Intel MPI 5.0.3 or later:

    $ mpirun -gtool "amplxe-cl -r my_result -collect advanced-hotspots -analyze-system:all=node-wide" -n 16 -ppn 4 my_app [my_app_options]

  5. This example runs the Advanced Hotspots analysis on rank 1, ranks 4-6 (launched on the second node), and rank 10:

    $ mpirun -gtool "amplxe-cl -r my_result -collect advanced-hotspots:1,4-6,10" -n 16 -ppn 4 my_app [my_app_options]


The examples above use the mpirun command as opposed to mpiexec and mpiexec.hydra, while real-world jobs might use the mpiexec* commands. mpirun is a higher-level command that dispatches to mpiexec or mpiexec.hydra depending on the current default and the options passed. All the listed examples work for the mpiexec* commands as well as for mpirun.
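
For example, a sketch of the same Advanced Hotspots collection launched directly through the Hydra process manager (my_app is a placeholder):

$ mpiexec.hydra -n 4 amplxe-cl -result-dir my_result -collect advanced-hotspots -- my_app [my_app_options]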

Resolving Symbols for MPI Modules

After data collection, the VTune Amplifier automatically finalizes the data (resolves symbols and converts them to a database). Finalization happens on the same compute node where the command-line collection was executed, so the VTune Amplifier automatically locates binary and symbol files. If you need to point to symbol files stored elsewhere, adjust the search settings using the -search-dir option:

$ mpirun -np 128 amplxe-cl -q -collect hotspots -search-dir /home/foo/syms ./a.out
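
If the application binaries or symbol files are not accessible on the compute node at collection time, you can re-finalize the result later with the proper search directories. This is a sketch assuming the -finalize action is available in your version of amplxe-cl; the result directory and search paths are placeholders:

$ amplxe-cl -finalize -r my_result.<hostname1> -search-dir /home/foo/syms -search-dir /home/foo/bin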

Viewing Collected Data

Once the result is collected, you can open it in the graphical or command line interface of the VTune Amplifier.

To view the results in the command line interface:

Use the -report option. To get the list of all available VTune Amplifier reports, enter the amplxe-cl -help report command.
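
For example, a summary report for one of the per-node results created above (the result directory name is an assumption):

$ amplxe-cl -report summary -r my_result.<hostname1>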

To view the results in the graphical interface:

Click the menu button and select Open > Result... and browse to the required result file (*.amplxe).


You may copy a result to another system and view it there (for example, to open a result collected on a Linux* cluster on a Windows* workstation).
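
For example, a minimal sketch that packs a per-node result directory on the cluster and copies it to a workstation using standard tools (host and path names are placeholders):

$ tar czf foo.myhost2.tgz foo.myhost2
$ scp foo.myhost2.tgz user@workstation:/path/to/results/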

VTune Amplifier classifies MPI functions as system functions, similar to Intel Threading Building Blocks (Intel TBB) and OpenMP* functions. This approach helps you focus on your own code rather than on MPI internals. You can use the Call Stack Mode filter bar combo box in the VTune Amplifier GUI or the -call-stack-mode CLI option to display the system functions and thus view and analyze the internals of the MPI implementation (see the example after this list). The User functions+1 call stack mode is especially useful for finding the MPI functions that consumed most of the CPU Time (Basic Hotspots analysis) or waited the most (Locks and Waits analysis). For example, in the call chain main() -> foo() -> MPI_Bar() -> MPI_Bar_Impl() -> ..., MPI_Bar() is the actual MPI API function you use and the deeper functions are MPI implementation details. The call stack modes behave as follows:

  • The Only user functions call stack mode attributes the time spent in the MPI calls to the user function foo() so that you can see which of your functions you can change to actually improve the performance.

  • The default User functions+1 mode attributes the time spent in the MPI implementation to the top-level system function - MPI_Bar() so that you can easily see outstandingly heavy MPI calls.

  • The User/system functions mode shows the call tree without any re-attribution so that you can see where exactly in the MPI library the time was spent.
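
For example, a sketch of switching the call stack mode in a command line report, assuming the user-plus-one value of the -call-stack-mode option (check amplxe-cl -help report for the exact option values in your version):

$ amplxe-cl -report hotspots -r foo.14 -call-stack-mode user-plus-one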


VTune Amplifier prefixes the profile version of MPI functions with P, for example: PMPI_Init.

VTune Amplifier provides Intel TBB and OpenMP support. It is recommended that you use these thread-level parallel solutions in addition to MPI-style parallelism to maximize CPU resource usage across the cluster, and that you use the VTune Amplifier to analyze the performance of that level of parallelism. The MPI, OpenMP, and Intel TBB features in the VTune Amplifier are functionally independent, so all the usual features of OpenMP and Intel TBB support apply when looking into a result collected for an MPI process. For hybrid OpenMP and MPI applications, the VTune Amplifier displays a summary table listing the top MPI ranks with OpenMP metrics, sorted by MPI Communication Spin time from low to high values. The lower the Communication time, the longer a process was on the critical path of MPI application execution. For deeper analysis, explore the OpenMP efficiency metrics of the MPI processes lying on the critical path.
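
For example, a sketch of profiling a hybrid MPI + OpenMP run with one rank per node and several OpenMP threads per rank (my_hybrid_app and the thread count are placeholders):

$ mpirun -genv OMP_NUM_THREADS 8 -n 4 -ppn 1 amplxe-cl -result-dir my_result -collect advanced-hotspots -- my_hybrid_app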


This example displays the performance report for functions and modules analyzed for Hotspots. Note that this example opens individual analysis results, each of which was collected for a specific MPI process rank (such as foo.14 and foo.15):

$ amplxe-cl -R hotspots -q -format text -r foo.14

Function Module CPU Time
-------- ------ --------
f        a.out  6.070
main     a.out  2.990

$ amplxe-cl -R hotspots -q -format text -group-by module -r foo.14

Module CPU Time
------ --------
a.out  9.060

MPI Implementations Support

You can use the VTune Amplifier to analyze both the Intel MPI library implementation and other MPI implementations, but be aware of the following specifics:

  • Based on the PMI_RANK or PMI_ID environment variable (whichever is set), the VTune Amplifier extends a process name with the captured rank number, which is helpful to differentiate ranks in a VTune Amplifier result with multiple ranks. The process naming schema in this case is <process_name> (rank <N>). To enable detecting an MPI rank ID for MPI implementations that do not provide this environment variable, use the -trace-mpi knob (see the example after this list).
  • For the Intel MPI library, the VTune Amplifier classifies MPI functions/modules as system functions/modules (the User functions+1 option) and attributes their time to system functions. This option may not work for all modules and functions of non-Intel MPI implementations. In this case, the VTune Amplifier may display some internal MPI functions and modules by default.
  • You may need to adjust the command line examples in this help section to work for non-Intel MPI implementations. For example, you need to adjust command lines provided for different process ranks to limit the number of processes in the job.
  • An MPI implementation needs to operate in the case when the VTune Amplifier process (amplxe-cl) is placed between the launcher process (mpirun/mpiexec) and the application process. This means that the communication information should be passed using environment variables, as most MPI implementations do. The VTune Amplifier does not work with an MPI implementation that tries to pass communication information from its immediate parent process.
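
For example, a sketch of a Hotspots collection with an MPI implementation that does not set the PMI_RANK or PMI_ID environment variable (./a.out is a placeholder; the option spelling, -trace-mpi versus -knob trace-mpi, may depend on the VTune Amplifier version):

$ mpirun -np 4 amplxe-cl -collect hotspots -trace-mpi -result-dir my_result -- ./a.out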

MPI System Modules Recognized by the VTune Amplifier

VTune Amplifier uses the following regular expressions in the Perl syntax to classify MPI implementation modules:

  • impi\.dll

  • impid\.dll

  • impidmt\.dll

  • impil\.dll

  • impilmt\.dll

  • impimt\.dll

  • libimalloc\.dll

  • libmpi_ilp64\.dll


This list is provided for reference only. It may change from version to version without any additional notification.

Analysis Limitations

  • VTune Amplifier does not support MPI dynamic processes (for example, the MPI_Comm_spawn dynamic process API).

Additional Resources

For more details on analyzing MPI applications, see the Intel Parallel Studio XE Cluster Edition documentation and the online MPI documentation.

Other resources available online also discuss usage of the VTune Amplifier with the Intel MPI Library.
