Build Faster, Scalable High-Performance Computing Code

Master the performance challenges of communication and computation in high-performance applications.

OpenFabrics Interfaces (OFI) Support

This framework is one of the most optimized and widely adopted ways of exposing communication services to high-performance computing (HPC) applications. Its key components include APIs, provider libraries, kernel services, daemons, and test applications.

Intel® MPI Library uses OFI to handle all communications, enabling a more streamlined path that starts at the application code and ends at the data on the wire. Tuning for the underlying fabric can now happen at run time through simple environment settings, including network-level features such as multi-rail for increased bandwidth. This support also helps developers deliver optimal performance on extreme-scale systems based on Mellanox InfiniBand* and Intel® Omni-Path Architecture.
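
As an illustration, the application code itself stays fabric-agnostic; the fabric and provider are chosen at launch time. In the minimal sketch below, the environment variables named in the comments (I_MPI_FABRICS and libfabric's FI_PROVIDER) are examples of such runtime settings; exact names and valid values depend on the library version:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        /* Fabric and provider selection happens under the hood at startup,
           driven by environment settings such as I_MPI_FABRICS=shm:ofi or
           FI_PROVIDER=<provider> -- no fabric-specific code appears here. */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &len);

        printf("rank %d of %d running on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

Launched with, for example, mpiexec.hydra -n 4 ./a.out, the same binary can run over a different interconnect simply by changing those environment settings.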

The result: increased communication throughput, reduced latency, simplified program design, and a common communication infrastructure.


See below for further notes and disclaimers.

Scalability

Implementing the high-performance MPI 3.1 standard on multiple fabrics, the library lets you quickly deliver maximum application performance (even if you change or upgrade to new interconnects) without requiring major changes to the software or operating environment.

  • Scaling verified up to 262,000 processes
  • Thread safety allows you to trace hybrid multithreaded MPI applications for optimal performance on multicore and many-core Intel® architecture (see the hybrid sketch after this list)
  • Support for multi-endpoint communications lets an application efficiently split data communication among threads, maximizing interconnect utilization
  • Improved start-up scalability through the mpiexec.hydra process manager (Hydra is a process management system for starting parallel jobs. It works natively with multiple launchers and resource managers, such as ssh, rsh, pbs, slurm, and sge.)
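
To make the threading bullets concrete, here is a minimal MPI+OpenMP sketch in which every thread communicates independently. The rank-pairing scheme and message contents are illustrative; it assumes an even number of ranks and MPI_THREAD_MULTIPLE support:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;

        /* Request full thread safety so that any thread may call MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            MPI_Abort(MPI_COMM_WORLD, 1);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each OpenMP thread exchanges its own message with the paired
           rank, tagged by thread id, splitting traffic across threads. */
        #pragma omp parallel
        {
            int tid  = omp_get_thread_num();
            int msg  = rank * 100 + tid;   /* illustrative payload */
            int peer = rank ^ 1;           /* pairs ranks 0<->1, 2<->3, ... */
            int got;

            MPI_Sendrecv(&msg, 1, MPI_INT, peer, tid,
                         &got, 1, MPI_INT, peer, tid,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            printf("rank %d, thread %d: received %d from rank %d\n",
                   rank, tid, got, peer);
        }

        MPI_Finalize();
        return 0;
    }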

Performance & Tuning Utilities

Two additional utilities help you get top performance from your applications.

Intel® MPI Benchmarks

This utility performs a set of MPI performance measurements for point-to-point and global communication operations across a range of message sizes. Run all of the supported benchmarks, or specify a single executable file on the command line to get results for a particular subset. (A simplified sketch of such a measurement follows the list below.)

The generated benchmark data fully characterizes:

  • Performance of a cluster system, including node performance, network latency, and throughput
  • Efficiency of the MPI implementation
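
The following is a simplified ping-pong sketch in the spirit of the IMB PingPong measurement, not the actual IMB implementation; the message sizes, repetition count, and output format are illustrative:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Bounce a message between ranks 0 and 1 and report the average
       one-way latency and bandwidth per message size. Run with exactly
       two ranks; any extra ranks simply idle. */
    int main(int argc, char **argv)
    {
        enum { REPS = 1000 };
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int bytes = 1; bytes <= (1 << 20); bytes <<= 4) {
            char *buf = malloc(bytes);
            MPI_Barrier(MPI_COMM_WORLD);
            double t0 = MPI_Wtime();

            for (int i = 0; i < REPS; i++) {
                if (rank == 0) {
                    MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                } else if (rank == 1) {
                    MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                    MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                }
            }

            /* Half the round-trip time gives the one-way latency. */
            double usec = (MPI_Wtime() - t0) / (2.0 * REPS) * 1e6;
            if (rank == 0)
                printf("%8d bytes  %10.2f usec  %10.2f MB/s\n",
                       bytes, usec, bytes / usec);
            free(buf);
        }

        MPI_Finalize();
        return 0;
    }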

User Guide

mpitune

Sometimes the library’s wide variety of default parameters doesn’t deliver the highest performance. When that happens, use mpitune to adjust your cluster or application parameters, then iterate, fine-tuning until you achieve the best performance.
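
One typical tuning knob is the algorithm used for each collective operation, which Intel® MPI Library exposes through the I_MPI_ADJUST_* family of environment variables (for example, I_MPI_ADJUST_ALLREDUCE; consult the reference documentation for the values valid in your version). A minimal collective you might tune against could look like this sketch, where the per-rank contribution and reduction are illustrative:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double local, global;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        local = (double)rank;  /* illustrative per-rank contribution */

        /* The algorithm behind this collective is one of the knobs that
           tuning can turn, e.g. via I_MPI_ADJUST_ALLREDUCE=<n>. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks 0..%d = %.0f\n", size - 1, global);

        MPI_Finalize();
        return 0;
    }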


Available for Windows* and Linux*.

Product and Performance Information

1

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804