Better, Faster and More Scalable: The March To Exascale


Wednesday, November 8, 2017, 9 AM PST

Clusters continue to scale in density, providing more nodes with more cores and more threads, all interconnected by high-speed fabric. Developing, tuning, and scaling Message Passing Interface* (MPI*) applications is now essential.

According to the Exascale Computing Project, exascale supercomputers will process a quintillion (10¹⁸) calculations each second, enabling more realistic simulation of the processes involved in precision, compute-intensive usages (e.g., medicine, manufacturing, and climate).

As part of the exascale race, the MPICH* source base from Argonne National Laboratory (not only a high-performance, widely portable implementation of MPI, but also the basis for the Intel® MPI Library) has been updated.

Join us to learn more.



