
Ungroup MPI Functions


 Analyze MPI process activity in your application.
To see the particular MPI functions called in the application, right-click on MPI in the Event Timeline and choose Ungroup Group MPI. This operation exposes the individual MPI calls.
[Figure: MPI Ungrouped]
After ungrouping the MPI functions, you see that the processes communicate with their direct neighbors using MPI_Sendrecv at the start of the iteration. This data exchange has a disadvantage: process i does not exchange data with its neighbor i+1 until the exchange between i-1 and i is complete. This delay appears as a staircase pattern, with the processes waiting for each other.
The MPI_Allreduce at the end of the iteration resynchronizes all processes; that is why this block has the reverse staircase appearance.
