Better, Faster and More Scalable: The March To Exascale

Overview

Wednesday, November 8, 2017 9 AM PDT

Clusters continue to scale in density, providing more nodes with more cores and more threads, all interconnected by high-speed fabric. Developing, tuning, and scaling Message Passing Interface* (MPI*) applications for these systems is now essential.
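
For readers who have not yet written MPI code, the sketch below shows a minimal MPI program, assuming an MPICH-derived toolchain with the usual mpicc and mpirun wrappers. It illustrates only the programming model the webinar discusses, not any particular tuning technique.

```c
/* Minimal MPI sketch. Build and launch commands vary by implementation;
 * the mpicc/mpirun names below are common defaults, assumed here:
 *
 *   mpicc hello_mpi.c -o hello_mpi
 *   mpirun -n 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}
```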

According to the Exascale Computing Project, exascale supercomputers will process a quintillion (10¹⁸) calculations each second, enabling more realistic simulation of the processes involved in precision, compute-intensive usages such as medicine, manufacturing, and climate.

As part of the exascale race, the MPICH* source base from Argonne National Laboratory has been updated. MPICH is not only a high-performance, widely portable implementation of MPI; it is also the basis for the Intel® MPI Library.
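
Because the Intel® MPI Library tracks the MPICH source base, one practical way to see which implementation and release an application is actually linked against is the standard MPI-3 version query shown below. This is a generic sketch using standard MPI calls, not an Intel-specific API.

```c
/* Query the MPI standard version and the library's own version string.
 * MPI_Get_library_version is part of MPI-3 and is provided by MPICH-derived
 * implementations, including the Intel(R) MPI Library.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char lib[MPI_MAX_LIBRARY_VERSION_STRING];
    int len, version, subversion, rank;

    MPI_Init(&argc, &argv);
    MPI_Get_version(&version, &subversion);   /* MPI standard level, e.g. 3.1 */
    MPI_Get_library_version(lib, &len);       /* implementation banner string */

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("MPI %d.%d\n%s\n", version, subversion, lib);

    MPI_Finalize();
    return 0;
}
```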

Join us to learn:

