Learning Lab

Using Multiple DAPL* Providers with the Intel® MPI Library


If your MPI program sends messages of drastically different sizes (for example, some 16-byte messages and some 4-megabyte messages), you want optimal performance at every message size. That is hard to achieve with a single DAPL* provider: latency dominates performance for small messages while bandwidth dominates for large ones, and providers often trade one off against the other. As of Version 4.1 Update 1, the Intel® MPI Library supports up to three providers for a single rank of your job.
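As a minimal sketch, the provider list is supplied through the I_MPI_DAPL_PROVIDER_LIST environment variable, with the first entry used for small (latency-sensitive) messages and the later entries for large messages. The provider names below are placeholders; substitute entries that actually appear in your system's /etc/dat.conf, and note that the application name is hypothetical:

```shell
# Hypothetical provider names -- replace with entries from your /etc/dat.conf.
# Entry 1: latency-optimized provider for small messages.
# Entry 2: bandwidth-optimized provider for large messages.
# Entry 3: optional provider for large messages over a secondary fabric.
export I_MPI_DAPL_PROVIDER_LIST=ofa-v2-mlx4_0-1u,ofa-v2-mlx4_0-1,ofa-v2-mcm-1

# Launch as usual; each rank can now use up to three providers.
mpirun -n 16 ./my_mpi_app
```

Which message sizes map to which provider is controlled by the library's internal thresholds, so measure with your own workload before settling on a provider ordering.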
