This article describes advanced techniques to tune the performance of codes built and executed using the Intel® MPI Library. The tuning methodologies presented are intended for experienced users, and their impact on overall performance depends strongly on the particular application and workload used.
Wait mode is particularly relevant for oversubscribed MPI jobs. The goal is to enable the wait mode of the progress engine, so that it waits for messages without polling the fabric(s). This can save CPU cycles at the cost of a lower message-response rate (higher latency), so it should be used with caution. To enable wait mode simply use:
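For example, using the I_MPI_WAIT_MODE switch:

```sh
# Let the progress engine sleep on an event instead of polling the fabric
export I_MPI_WAIT_MODE=1
```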
Adjust the eager / rendezvous protocol threshold
Two protocols control the handshake mechanism for message exchange in MPI: Eager and Rendezvous.
The Eager protocol has lower latency but requires more runtime memory to maintain the necessary intermediate buffers; it is commonly used for small messages. The Rendezvous protocol requires less memory but has higher latency; it is typically used for large messages, where the added overhead is negligible.
For communication across nodes the Eager / Rendezvous threshold is defined by the variable I_MPI_EAGER_THRESHOLD (in bytes, default is 256 kB).
For communication within a node (except over the shm fabric) the variable I_MPI_INTRANODE_EAGER_THRESHOLD controls the switch point between the two protocols (in bytes); its default is also 256 kB.
For the shm fabric, threshold control is provided by the variable I_MPI_SHM_CELL_SIZE (in bytes), which has a platform-dependent default that typically ranges between 8 kB and 64 kB.
Note that for the DAPL and DAPL-UD fabrics the switch is controlled instead by the I_MPI_DAPL_DIRECT_COPY_THRESHOLD and I_MPI_DAPL_UD_DIRECT_COPY_THRESHOLD variables, which set the switch point between the Eager and direct-copy mechanisms in bytes.
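For instance, to raise both the internode and the intranode switch points to 512 kB (an illustrative value, not a recommendation):

```sh
# Use the Eager protocol for messages up to 512 KB across nodes...
export I_MPI_EAGER_THRESHOLD=524288
# ...and within a node (except over the shm fabric)
export I_MPI_INTRANODE_EAGER_THRESHOLD=524288
```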
It is possible to overlap computation and communication by spawning a helper thread. This can cause oversubscription, so it is disabled by default. To enable asynchronous progress:
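In recent Intel MPI versions this is controlled by I_MPI_ASYNC_PROGRESS:

```sh
# Spawn a helper thread per rank to progress communication in the background
export I_MPI_ASYNC_PROGRESS=1
```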
It is also possible to explicitly pin the helper thread, taking one core out of the rank pinning mask:
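A sketch using I_MPI_ASYNC_PROGRESS_PIN; the core numbers and the eight-core node layout are purely illustrative:

```sh
# On an 8-core node: ranks run on cores 0-6, helper threads share core 7
export I_MPI_ASYNC_PROGRESS=1
export I_MPI_PIN_PROCESSOR_LIST=0-6
export I_MPI_ASYNC_PROGRESS_PIN=7
```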
There are three possible types of message exchange when using shared memory:
A significant portion of CPU cycles is sometimes spent in the MPI progress engine (CH3/CH4). This can be addressed by reducing the spin count, that is, the number of times the progress engine spins waiting for a message or connection request before yielding to the OS:
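The spin count is set via I_MPI_SPIN_COUNT; the value below is only an example starting point:

```sh
# Yield to the OS sooner when no message or connection request is pending
export I_MPI_SPIN_COUNT=10
```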
The default setting is 250 unless more than one process runs per processor, in which case it is set to 1.
Executions dominated by intranode communication on multi-core platforms may benefit from a customized intranode spin count:
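Assuming the intranode control meant here is the shm fabric spin count, I_MPI_SHM_SPIN_COUNT, a sketch would be:

```sh
# Spin longer on shared-memory traffic before yielding (illustrative value)
export I_MPI_SHM_SPIN_COUNT=1000
```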
Start by making sure you use the latest Intel MPI Library as well as the latest PSM2 version.
If all ranks work on the same Intel Architecture generation, switch off the platform check:
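The check is governed by I_MPI_PLATFORM_CHECK:

```sh
# Skip the runtime check that all ranks run on the same platform
export I_MPI_PLATFORM_CHECK=0
```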
Specify the processor architecture being used to tune the collective operations:
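For example, on a Skylake-based cluster (skx is one of the documented I_MPI_PLATFORM values; pick the one matching your hardware):

```sh
# Tune collective operations for Intel Xeon Scalable (Skylake) processors
export I_MPI_PLATFORM=skx
```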
An alternative PMI data exchange algorithm can help speed up the startup phase:
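With the Hydra process manager the algorithm is chosen via I_MPI_HYDRA_PMI_CONNECT, e.g.:

```sh
# Exchange PMI data with an alltoall pattern instead of the default caching scheme
export I_MPI_HYDRA_PMI_CONNECT=alltoall
```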
Customizing the branching may also help startup times (the default branch count is 32 for runs over 127 nodes):
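The branch count is set via I_MPI_HYDRA_BRANCH_COUNT; the value below is illustrative:

```sh
# Widen the tree used to spawn the remote Hydra proxies
export I_MPI_HYDRA_BRANCH_COUNT=64
```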
Traditional InfiniBand* support uses the Reliable Connection (RC) protocol to exchange MPI messages, but the Unreliable Datagram (UD) protocol has emerged as a more scalable alternative with lower memory consumption. This is the recommended DAPL mode for large scale runs.
Use DAPL UD UCM:
```sh
export I_MPI_FABRICS=shm:dapl
export I_MPI_DAPL_UD=1
export I_MPI_DAPL_UD_PROVIDER=ofa-v2-mlx5_0-1u
```
DAPL library UCM settings are automatically adjusted for MPI jobs of more than 1024 ranks, resulting in more stable job start-up. The available DAPL connection managers and their corresponding providers are:
| Connection Manager | Example Provider | Description |
|--------------------|------------------|-------------|
| CMA | ofa-v2-ib0 | Uses OFA rdma_cm to set up QPs. IPoIB, ARP, and SA queries required. |
| SCM | ofa-v2-mlx5_0-1 | Uses sockets to exchange QP information. IPoIB, ARP, and SA queries NOT required. |
| UCM | ofa-v2-mlx5_0-1u | Uses UD QPs to exchange QP information. Sockets, IPoIB, ARP, and SA queries NOT required. |
If all ranks use the same DAPL provider (I_MPI_DAPL_UD_PROVIDER) and the same DAPL library version, switch off the DAPL provider check using the following environment variable setting:
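Assuming the check referred to is the documented provider compatibility check, the setting would be:

```sh
# Skip verifying that all ranks selected a compatible DAPL provider
export I_MPI_CHECK_DAPL_PROVIDER_COMPATIBILITY=0
```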
Setting IPoIB as the PMI low level transport layer may also improve the startup performance:
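With Hydra the transport interface is selected via I_MPI_HYDRA_IFACE, assuming the IPoIB interface on the nodes is named ib0:

```sh
# Route launch and PMI traffic over IPoIB
export I_MPI_HYDRA_IFACE=ib0
```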
DAPL-UD is fairly efficient in its memory consumption, but it can be further optimized for large scale runs. The following table contains a suggested set of optimized settings to use as a starting point.
Besides selecting the DAPL UD UCM provider, the following settings may be used for large scale runs (over 512 ranks):

| Variable | Tuned Value | Description |
|----------|-------------|-------------|
| DAPL_UCM_REP_TIME | 2000 | REQUEST timer, waiting for REPLY (milliseconds) |
| DAPL_UCM_RTU_TIME | 2000 | REPLY timer, waiting for RTU (milliseconds) |
| DAPL_UCM_CQ_SIZE | 2000 | CM completion queue size |
| DAPL_UCM_QP_SIZE | 2000 | CM message queue size |
| DAPL_UCM_RETRY | 7 | REQUEST and REPLY retries |
| DAPL_ACK_RETRY | 7 | RC acknowledgement retry count |
| DAPL_ACK_TIMER | 20 | RC acknowledgement retry timer |
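These DAPL_* settings are DAPL library environment variables rather than Intel MPI ones; applying the tuned values from the table:

```sh
# DAPL UCM connection-management tuning for large scale runs
export DAPL_UCM_REP_TIME=2000   # REQUEST timer, waiting for REPLY (ms)
export DAPL_UCM_RTU_TIME=2000   # REPLY timer, waiting for RTU (ms)
export DAPL_UCM_CQ_SIZE=2000    # CM completion queue size
export DAPL_UCM_QP_SIZE=2000    # CM message queue size
export DAPL_UCM_RETRY=7         # REQUEST and REPLY retries
export DAPL_ACK_RETRY=7         # RC acknowledgement retry count
export DAPL_ACK_TIMER=20        # RC acknowledgement retry timer
```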
Mixed UD / RDMA mode allows small messages to pass through UD, while large messages pass through RDMA. The switch-over point can be set using the following variable:
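One plausible candidate, assuming the UD direct-copy threshold introduced earlier also marks the UD-to-RDMA switch, is:

```sh
# Assumption: messages above this size (bytes) travel through RDMA in mixed mode
export I_MPI_DAPL_UD_DIRECT_COPY_THRESHOLD=262144
```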
Note that RDMA is not supported for connectionless communication, but large MPI messages can benefit from RDMA. This mixed communication mode is disabled by default, but it can be enabled with:
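The documented control is I_MPI_DAPL_UD_RDMA_MIXED:

```sh
# Enable mixed UD/RDMA transfers
export I_MPI_DAPL_UD_RDMA_MIXED=1
```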