Build Faster, Scalable High-Performance Computing Code

Master the performance challenges of communication and computation in high-performance applications.

OpenFabrics Interfaces (OFI) Support

OFI is a framework focused on exposing and exporting communication services to high-performance computing (HPC) applications. Its key components include APIs, provider libraries, kernel services, daemons, and test applications.

Intel® MPI Library uses OFI to handle all communications, enabling a more streamlined path from application code to data communication. Tuning for the underlying fabric can now happen at run time through simple environment settings, including network-level features such as multirail for increased bandwidth. Additionally, this support helps developers deliver optimal performance on exascale solutions based on Intel® Omni-Path Architecture.

The result: increased communication throughput, reduced latency, simplified program design, and a common communication infrastructure.
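As a concrete illustration of launch-time tuning, here is a minimal C sketch (not taken from Intel documentation) that reports the fabric-related environment settings a job was started with. I_MPI_FABRICS, FI_PROVIDER, and I_MPI_DEBUG are settings commonly recognized by Intel MPI Library and libfabric; the exact knobs for features such as multirail vary by release and provider, so treat the launch line in the comment as an assumption to check against your release notes.

    /* Minimal sketch: report the fabric-related settings an MPI job sees.
     * Assumed launch line (verify the variable names for your release):
     *   I_MPI_FABRICS=shm:ofi FI_PROVIDER=psm2 I_MPI_DEBUG=5 \
     *       mpiexec.hydra -n 4 ./fabric_info
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const char *vars[] = { "I_MPI_FABRICS", "FI_PROVIDER", "I_MPI_DEBUG" };
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            printf("running %d ranks\n", size);
            for (int i = 0; i < 3; i++) {
                const char *val = getenv(vars[i]);
                printf("%s=%s\n", vars[i], val ? val : "(unset)");
            }
        }

        MPI_Finalize();
        return 0;
    }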
 

See below for further notes and disclaimers.1


 

Scalability

Implementing the high-performance MPI 3.1 standard on multiple fabrics, the library lets you quickly deliver maximum application performance (even if you change or upgrade to new interconnects) without requiring major modifications to the software or operating systems.

  • Scaling verified up to 262,000 processes
  • Thread safety allows you to trace hybrid multithreaded MPI applications for optimal performance on multicore and many-core Intel® architecture
  • Support for multi-endpoint communications lets an application efficiently split data communication among threads, maximizing interconnect utilization (a minimal hybrid-threading sketch follows this list)
  • Improved start-up scalability through the mpiexec.hydra process manager (Hydra is a process management system for starting parallel jobs; it works natively with multiple launchers and resource managers such as ssh, rsh, PBS, Slurm, and SGE)
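To make the threading bullets above concrete, the following is a minimal hybrid MPI + POSIX threads sketch built only from standard MPI calls, not from any Intel-specific multi-endpoint API. It requests MPI_THREAD_MULTIPLE and gives each thread its own duplicated communicator so the threads can communicate independently, which is the general pattern behind splitting communication among threads. The thread count and the reduction are illustrative assumptions; compile with something like mpicc -pthread.

    /* Hybrid MPI + pthreads sketch: per-thread communicators under
     * MPI_THREAD_MULTIPLE. Illustrative only; the thread count is arbitrary. */
    #include <mpi.h>
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    static MPI_Comm thread_comm[NTHREADS];  /* one communicator per thread */
    static int rank;

    static void *worker(void *arg)
    {
        int tid = (int)(size_t)arg;
        int sum = 0;

        /* Each thread reduces on its own communicator, so the NTHREADS
         * reductions can progress concurrently inside the library. */
        MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, thread_comm[tid]);
        printf("rank %d, thread %d: sum of ranks = %d\n", rank, tid, sum);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int provided;
        pthread_t threads[NTHREADS];

        /* Ask for full thread support; stop if the library cannot provide it. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int t = 0; t < NTHREADS; t++)
            MPI_Comm_dup(MPI_COMM_WORLD, &thread_comm[t]);

        for (int t = 0; t < NTHREADS; t++)
            pthread_create(&threads[t], NULL, worker, (void *)(size_t)t);
        for (int t = 0; t < NTHREADS; t++)
            pthread_join(threads[t], NULL);

        for (int t = 0; t < NTHREADS; t++)
            MPI_Comm_free(&thread_comm[t]);
        MPI_Finalize();
        return 0;
    }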

Runtime Environment Kit

The runtime package includes everything you need to run applications based on the Intel MPI Library. The package is available at no cost for customers who have applications enabled with the Intel MPI Library. It includes the full install and runtime scripts.

The runtime package is available for Windows* and Linux*.

Interconnect Independence

Whether you need to run Transmission Control Protocol (TCP) sockets, shared memory, or one of many interconnects based on Remote Direct Memory Access (RDMA)—including Ethernet and InfiniBand*—Intel MPI Library covers all configurations by providing an accelerated, universal, multifabric layer for fast interconnects via OFI.

Intel MPI Library establishes connections dynamically and only when needed, which reduces the memory footprint. It also automatically chooses the fastest available transport.

  • Develop MPI code independent of the fabric, knowing it will run efficiently on whatever network you choose at run time (a short fabric-agnostic sketch follows this list).
  • Use a two-phase communication buffer enlargement capability to allocate only the memory space required.
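As a sketch of what fabric independence means in practice, the ring exchange below contains nothing interconnect-specific, so the same binary can run over TCP sockets, shared memory, or an RDMA fabric without source changes. The assumption here is that the transport is selected at launch, for example through the environment settings discussed earlier, rather than in the code itself.

    /* Fabric-agnostic ring exchange: each rank sends its rank number to the
     * right neighbor and receives from the left one. No interconnect-specific
     * code; the transport is chosen by the MPI runtime at launch. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, left, right, recv_val;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        right = (rank + 1) % size;
        left  = (rank - 1 + size) % size;

        /* Combined send/receive avoids deadlock regardless of rank count. */
        MPI_Sendrecv(&rank, 1, MPI_INT, right, 0,
                     &recv_val, 1, MPI_INT, left, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d received %d from rank %d\n", rank, recv_val, left);

        MPI_Finalize();
        return 0;
    }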

Application Binary Interface Compatibility

An application binary interface (ABI) is the low-level nexus between two program modules. It determines how functions are called and also the size, layout, and alignment of data types. With ABI compatibility, applications conform to the same set of runtime naming conventions.

Intel MPI Library offers ABI compatibility with existing MPI-1.x and MPI-2.x applications. So even if you’re not ready to move to the MPI 3.1 standard, you can take advantage of the library’s performance improvements without recompiling, and you can use its runtimes.
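A practical way to use this compatibility is to build once against any MPICH-ABI-compatible MPI and then run the unmodified binary with the Intel MPI Library runtimes, swapping libraries at load time instead of recompiling. The sketch below only reports which MPI standard version and which library the executable actually picked up, so you can confirm the swap: MPI_Get_version has been available since MPI 1.2, while the library-string query is MPI-3 and is therefore guarded in case older headers do not define it.

    /* Report which MPI runtime this (possibly prebuilt) binary is using. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, major, minor;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Available since MPI 1.2, so it works for MPI-1.x/2.x codes too. */
        MPI_Get_version(&major, &minor);

        if (rank == 0) {
            printf("MPI standard version: %d.%d\n", major, minor);
    #ifdef MPI_MAX_LIBRARY_VERSION_STRING
            /* MPI-3 only; guarded in case the build-time headers predate it. */
            {
                char lib[MPI_MAX_LIBRARY_VERSION_STRING];
                int len;
                MPI_Get_library_version(lib, &len);
                printf("Library: %s\n", lib);
            }
    #endif
        }

        MPI_Finalize();
        return 0;
    }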

Performance results are based on testing as of 4 September 2018 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information, see Performance Benchmark Test Disclosure.

Testing by Intel as of 4 September 2018. Configuration: Hardware: Intel® Xeon® Gold 6148 CPU @ 2.40GHz; 192 GB RAM. Interconnect: Intel® Omni-Path Host Fabric Interface. Software: RHEL* 7.4; IFS 10.7.0.0.145; Libfabric internal; Intel® MPI Library 2019; Intel® MPI Benchmarks 2019 (built with Intel® C++ Compiler XE 18.0.2.199 for Linux*)

Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice revision #20110804