Intel® MPI Benchmarks

C_[ACTION]_expl

This pattern performs collective access from all processes to a common file, with an explicit file pointer.

This benchmark is based on the following MPI routines:

  • MPI_File_read_at_all/MPI_File_write_at_all for the blocking mode

  • MPI_File_.._at_all_begin/MPI_File_.._at_all_end for the nonblocking mode
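The blocking variant of this pattern can be sketched as follows. This is not the benchmark source; the file name, block size, and access mode are illustrative assumptions. Each rank writes its own block of a shared file at an explicit offset, with no shared file pointer involved:

```c
/* Sketch of the explicit-pointer collective write pattern (assumed sizes). */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 1024;               /* bytes per process (assumption) */
    char *buf = malloc(count);
    for (int i = 0; i < count; i++)
        buf[i] = (char)rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "io_test.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Blocking mode: each rank writes at the explicit offset rank*count. */
    MPI_Offset offset = (MPI_Offset)rank * count;
    MPI_File_write_at_all(fh, offset, buf, count, MPI_BYTE, MPI_STATUS_IGNORE);

    /* The nonblocking mode uses the split-collective pair instead:
     *   MPI_File_write_at_all_begin(fh, offset, buf, count, MPI_BYTE);
     *   ... other work ...
     *   MPI_File_write_at_all_end(fh, buf, MPI_STATUS_IGNORE);
     */

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```

The read direction simply substitutes MPI_File_read_at_all (or the corresponding begin/end pair).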

Ialltoallv

The benchmark for the MPI_Ialltoallv function. It measures the overlap of communication and computation.

Property            Description
----------------    ----------------------------------------
Measured pattern    MPI_Ialltoallv/IMB_cpu_exploit/MPI_Wait
MPI data type       MPI_BYTE
Reported timings
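The measured pattern above can be sketched as follows. The per-peer chunk size and the busy loop standing in for IMB_cpu_exploit are assumptions, not the benchmark's internals:

```c
/* Sketch of MPI_Ialltoallv / IMB_cpu_exploit / MPI_Wait: start the
 * nonblocking exchange, compute while it progresses, then wait. */
#include <mpi.h>
#include <stdlib.h>

static void cpu_exploit(void)            /* stand-in for IMB_cpu_exploit */
{
    volatile double x = 0.0;
    for (long i = 0; i < 1000000; i++)
        x += 1e-9 * (double)i;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int chunk = 1024;    /* bytes exchanged with each peer (assumption) */
    char *sbuf = malloc((size_t)size * chunk);
    char *rbuf = malloc((size_t)size * chunk);
    int *counts = malloc(size * sizeof(int));
    int *displs = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++) {
        counts[i] = chunk;
        displs[i] = i * chunk;
    }

    MPI_Request req;
    MPI_Ialltoallv(sbuf, counts, displs, MPI_BYTE,
                   rbuf, counts, displs, MPI_BYTE, MPI_COMM_WORLD, &req);
    cpu_exploit();                 /* computation overlapped with the transfer */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    free(sbuf); free(rbuf); free(counts); free(displs);
    MPI_Finalize();
    return 0;
}
```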

Iscatterv

The benchmark for the MPI_Iscatterv function. It measures the overlap of communication and computation.

Property            Description
----------------    ----------------------------------------
Measured pattern    MPI_Iscatterv/IMB_cpu_exploit/MPI_Wait
MPI data type       MPI_BYTE
Root
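The Iscatterv pattern differs from the Ialltoallv one only in that a single root distributes variable-sized pieces. A sketch, with the root choice, chunk size, and busy loop as illustrative assumptions:

```c
/* Sketch of MPI_Iscatterv / IMB_cpu_exploit / MPI_Wait with rank 0 as root. */
#include <mpi.h>
#include <stdlib.h>

static void cpu_exploit(void)            /* stand-in for IMB_cpu_exploit */
{
    volatile double x = 0.0;
    for (long i = 0; i < 1000000; i++)
        x += 1e-9 * (double)i;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int chunk = 1024;         /* bytes sent to each rank (assumption) */
    const int root = 0;             /* illustrative; the benchmark varies this */
    char *sbuf = malloc((size_t)size * chunk);   /* only read at the root */
    char *rbuf = malloc(chunk);
    int *counts = malloc(size * sizeof(int));
    int *displs = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++) {
        counts[i] = chunk;
        displs[i] = i * chunk;
    }

    MPI_Request req;
    MPI_Iscatterv(sbuf, counts, displs, MPI_BYTE,
                  rbuf, chunk, MPI_BYTE, root, MPI_COMM_WORLD, &req);
    cpu_exploit();                 /* computation overlapped with the scatter */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    free(sbuf); free(rbuf); free(counts); free(displs);
    MPI_Finalize();
    return 0;
}
```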

One_put_all

This benchmark tests the MPI_Put operation using one active process that transfers data to all other processes. All target processes wait in an MPI_Barrier call while the origin process performs the transfers.

Property            Description
----------------    -----------
Measured pattern
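The access pattern can be sketched with passive-target synchronization. The lock/unlock epochs and the block size are illustrative assumptions, not the benchmark's implementation:

```c
/* Sketch of One_put_all: rank 0 puts a block into every other rank's
 * window while the targets sit in MPI_Barrier. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int count = 1024;            /* bytes per target (assumption) */
    char *winbuf;
    MPI_Win win;
    MPI_Win_allocate(count, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &winbuf, &win);

    if (rank == 0) {
        char *src = calloc(count, 1);
        for (int t = 1; t < size; t++) {
            /* One passive-target epoch per target rank. */
            MPI_Win_lock(MPI_LOCK_SHARED, t, 0, win);
            MPI_Put(src, count, MPI_BYTE, t, 0, count, MPI_BYTE, win);
            MPI_Win_unlock(t, win);    /* completes the transfer to t */
        }
        free(src);
    }
    MPI_Barrier(MPI_COMM_WORLD);       /* targets wait here during transfers */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```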

Software Requirements

To run the Intel® MPI Benchmarks, you need:

  • cpp, an ANSI C compiler, and gmake on Linux* OS or Unix* OS.

  • The enclosed Microsoft Visual* C++ solutions as the basis for building on Microsoft Windows* OS.

  • MPI installation, including a startup mechanism for parallel MPI programs.

Reduce

The benchmark for the MPI_Reduce function. It reduces a vector of length L = X/sizeof(float) float items, where X is the message size in bytes. The MPI data type is MPI_FLOAT. The MPI operation is MPI_SUM. The root of the operation is changed round-robin.

Property            Description
----------------    -----------
Measured pattern
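The benchmark body described above can be sketched as follows; the message size X and the repetition count are illustrative assumptions:

```c
/* Sketch of the Reduce benchmark: reduce L = X/sizeof(float) floats with
 * MPI_SUM, rotating the root round-robin across iterations. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int X = 4096;                   /* message size in bytes (assumption) */
    const int L = X / (int)sizeof(float); /* vector length in float items */
    float *sbuf = malloc(L * sizeof(float));
    float *rbuf = malloc(L * sizeof(float));
    for (int i = 0; i < L; i++)
        sbuf[i] = 1.0f;

    const int iters = 100;                /* repetition count (assumption) */
    for (int it = 0; it < iters; it++) {
        int root = it % size;             /* root changes round-robin */
        MPI_Reduce(sbuf, rbuf, L, MPI_FLOAT, MPI_SUM, root, MPI_COMM_WORLD);
    }

    free(sbuf); free(rbuf);
    MPI_Finalize();
    return 0;
}
```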
