We’re on the forefront of converging artificial intelligence (AI), analytics, simulation and modeling, and other high-performance computing (HPC) workloads that will drive the industry toward the next era in supercomputing.
Intel’s HPC platforms deliver exceptional performance and scale, with industry-first innovations in memory, computational performance, fabric and storage. In fact, a record 95 percent of all systems on the February 2018 Top500 list of the world’s leading supercomputers are powered by Intel® Xeon® processors. And the next-generation Intel® Omni-Path Architecture (Intel® OPA200), coming in 2019, will feature data rates up to 200 Gb/s, doubling current performance levels.
Those high-performance, low-latency capabilities at scale let system architects harness tens of thousands of nodes, while visualization allows those systems to deliver greater insight with faster turnaround on large-scale data sets.
Today’s HPC challenges stem from burgeoning scale, greater heterogeneity, mounting energy costs and explosive new uses in AI and commercial analytics. Given that growing complexity, it is not surprising that a recent IDC survey of HPC users¹ found the software stack to be the biggest pain point of any technical category: specifically, updating and using system software, including operating systems and middleware.
Programmers and system administrators need more automation and intelligence built into the HPC software stack, but the stack’s growing scope and rate of change make it extremely difficult for individual OEMs or other vendors to keep up.
Fortunately for the developer community, the Linux* model maps well to these challenges: an open source software (OSS) community contributes source code upstream, while specialized downstream distributors support specific customer needs. This OSS community represents the most effective way to advance the software stack beyond the limitations of any one vendor.
Several open source HPC software components are used by the HPC community today: Open MPI, OpenSFS, OpenFOAM, OpenStack, and others. And nearly all major OEMs are participants in the OpenHPC community, as are many key HPC independent software vendors (ISVs). Dozens of the world’s top HPC user sites have also joined this collaboration as participants.
As the leading contributor to the Linux* kernel and many OSS projects and initiatives, Intel has been a catalyst for the OSS community in general, and specifically for the widely supported OpenHPC initiative. Intel donated hardware to the Texas Advanced Computing Center (TACC) to build a continuous integration environment for developing open source versions of OpenHPC. This build-and-test environment integrates new elements into the foundational layer of OpenHPC and runs the OpenHPC test suite to ensure that those elements are compliant and performant.
This collaborative development and testing model helps advance the ease-of-use of HPC systems, making them more productive and useful in a broader scope of uses, such as commercial analytics. By addressing these requirements in the software stack, Intel has paved the way for broader adoption of HPC for high performance data analytics (HPDA)—a market IDC forecasts to grow at about three times the rate of the overall HPC market through 2020¹—and for AI solutions that promise to solve some of the biggest challenges in healthcare, scientific discovery and industrial innovation.
Intel’s HPC platform supports these and other HPC solutions with Intel Xeon Scalable processors, the Intel® Omni-Path interconnect fabric and other elements. Intel® Select Solutions for HPC offer tested and verified configurations designed for use cases such as professional visualization and simulation and modeling.
In her recent article for Data Center Knowledge, Intel’s general manager of Rack Scale Design, Dr. Figen Ülgen, noted that, “…many aspects of model training—a foundational tool for many deep learning and AI use cases—require the computational horsepower and throughput of HPC.” But she points out that “…for many data scientists and domain experts who want to incorporate AI capabilities into their business or research projects, HPC resources are too difficult to access.”
That was a key motivation for Dr. Ülgen to get involved with the establishment of OpenHPC. Since its launch by the Linux Foundation in November 2015, OpenHPC has grown to include more than 30 organizations working together on an open, vendor-neutral software stack for HPC infrastructure.
OpenHPC fosters innovation as a virtual operating system for HPC software on multiple platforms running Linux. The OpenHPC repository offers pre-built software ingredients, as well as validated recipes. The stack includes Linux operating systems, I/O libraries and services, numerical and scientific libraries, provisioning and management tools, and other elements that reduce the work of creating and maintaining HPC software systems while exposing the parallel power of the hardware.
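As a concrete illustration, standing up a cluster head node from those pre-built ingredients follows the pattern in the OpenHPC install recipes. The sketch below is an assumption-laden example, not an official procedure: the repository URL, release version and exact package names vary by OpenHPC release and should be checked against the current install guide.

```shell
# Hypothetical head-node setup on CentOS 7, modeled on the OpenHPC 1.3
# install recipes. URL, version, and package names below are assumptions
# to verify against the current OpenHPC install guide.

# Enable the OpenHPC package repository.
yum -y install \
  http://build.openhpc.community/OpenHPC:/1.3/CentOS_7/x86_64/ohpc-release-1.3-1.el7.x86_64.rpm

# Install the base meta-package plus provisioning and resource management.
yum -y install ohpc-base          # core tools and development utilities
yum -y install ohpc-warewulf      # Warewulf node provisioning
yum -y install ohpc-slurm-server  # Slurm workload manager (server side)

# Compilers, MPI, and scientific libraries are exposed via Lmod modules.
yum -y install gnu-compilers-ohpc openmpi-gnu-ohpc lmod-ohpc
module load gnu openmpi           # activate the toolchain for builds
```

The point of the recipe model is that each of these ingredients has already been built and validated together, so site administrators compose a stack rather than port and integrate each component by hand.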
As Dr. Ülgen concludes, “This is an exciting time for the AI and HPC communities. By embracing HPC’s power to advance AI and by using the OpenHPC stack to facilitate access to HPC resources, we can help create a smarter world and realize the vision of the AI revolution.”
At the Supercomputing 2018 conference (SC18), being held Nov. 11-15, 2018, in Dallas, Intel will showcase the convergence of AI and HPC on Intel Xeon Scalable processor-based systems and other Intel technologies in a series of demos.
In addition, Intel HPC experts and AI specialists will be giving technical talks with our ecosystem partners on a variety of HPC and AI-related topics in the Intel booth theater (#3223) throughout the conference. Our experts will also be featured in more than 50 paper presentations, workshops, tutorials and Birds of a Feather discussions (full list of sessions available here). Be sure to catch the following AI and open source sessions:
| Session | Format |
|---------|--------|
| Advanced OpenMP: Host Performance and 5.0 Features | Tutorial |
| Anatomy of High-Performance Deep Learning Convolutions on SIMD Architectures | Paper |
| Auto-Tuning TensorFlow Threading Model for CPU Backend | Workshop |
| CosmoFlow: Using Deep Learning to Learn the Universe at Scale | Paper |
| Large Minibatch Training on Supercomputers with Improved Accuracy and Reduced Time to Train | Workshop |
| Mastering Tasking with OpenMP | Tutorial |
| OpenHPC Community BoF | Birds of a Feather |
| OpenMP Common Core: a “Hands-On” Exploration | Tutorial |
| OpenMP® 5.0 Is Here: Find Out All the Things You Need to Know About It! | Birds of a Feather |
| Programming Your GPU with OpenMP: A Hands-On Introduction | Tutorial |
And join us at the One Intel Station developer community event, taking place Nov. 12-14 at the historic Union Station, just a five-minute walk from the conference. Experience demos, tutorials and technical sessions, as well as free refreshments all day long. No registration is required.
We look forward to meeting with you at SC18 and discussing how the latest open source and AI software development tools for HPC can accelerate your path to deeper discovery and insights. For more information on Intel® HPC platform software and Intel Select Solutions for HPC, click the links below:
Editor's Note: Reese Baird contributed to this article.
¹ From IDC Technology Spotlight, June 2016: The Open HPC Stack Initiative Hits a Milestone
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804