Intel Introduces the Deep Learning Reference Stack

Published: 12/07/2018, Last Updated: 12/07/2018

It is challenging to create and deploy a Deep Learning stack orchestrated across multiple nodes, and even harder to make such a stack perform well. Building on our own experience in Deep Learning, Intel is releasing the Deep Learning Reference Stack, an integrated, highly performant open source stack optimized for Intel® Xeon® Scalable platforms.

This open source community release is part of our effort to ensure AI developers have easy access to all of the features and functionality of Intel platforms. The Deep Learning Reference Stack is highly tuned and built for cloud native environments. With this release, we are enabling developers to prototype quickly by reducing the complexity of integrating multiple software components, while still giving users the flexibility to customize their solutions.

The Deep Learning Reference Stack from Intel includes everything needed to start development: the Clear Linux* OS optimized for Intel-based platforms, Kata Containers that take advantage of Intel® Virtualization Technology (Intel® VT) to secure container workloads, performance-tuned libraries, orchestration, and the TensorFlow* Deep Learning and Machine Learning framework. The stack is built for containerization and is deployed as a Docker* image, for either multi-node clusters or single-node bare-metal infrastructure.
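As a rough illustration, the sketch below is a minimal TensorFlow sanity check one might run inside the stack's container to confirm the framework is available and to exercise a small operation on the CPU. It uses the generic TensorFlow 1.x API current at the time of this release; it is not code shipped with the stack itself.

```python
# Minimal TensorFlow sanity check (generic TF 1.x snippet, not part of the
# Deep Learning Reference Stack itself).
import tensorflow as tf

# Confirm which TensorFlow build the container provides.
print("TensorFlow version:", tf.__version__)

# A small matrix multiplication; on an Intel-optimized build this exercises
# the tuned CPU kernels.
a = tf.random_normal([256, 256])
b = tf.random_normal([256, 256])
c = tf.matmul(a, b)

with tf.Session() as sess:
    result = sess.run(c)
    print("matmul result shape:", result.shape)
```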

Intel is releasing this reference stack early in the development cycle to get community feedback, and we look forward to seeing how you will use it, whether as a blueprint, by taking individual components, or by deploying the full stack.

To learn more, download the Intel Deep Learning Reference Stack code, or contribute feedback, please visit our Clear Linux Stacks page. To join the Clear Linux community, subscribe to our developer mailing list or join the #clearlinux channel on IRC.

Author

Imad Sousou, Corporate Vice President and General Manager, Open Source Technology Center
@imadsousou
