Intel Introduces the Deep Learning Reference Stack

Published: 12/07/2018   Last Updated: 12/07/2018

It is challenging to create and deploy a Deep Learning stack orchestrated across multiple nodes, and even harder to make such a stack perform well. Building on our own experience in Deep Learning, Intel is releasing the Deep Learning Reference Stack, an integrated, highly performant open source stack optimized for Intel® Xeon® Scalable platforms.

This open source community release is part of our effort to ensure AI developers have easy access to all of the features and functionality of Intel platforms. The Deep Learning Reference Stack is highly tuned and built for cloud native environments. With this release, we are enabling developers to prototype quickly by reducing the complexity of integrating multiple software components, while still giving users the flexibility to customize their solutions.

The Deep Learning Reference Stack from Intel includes everything needed to start development: the Clear Linux* OS optimized for Intel-based platforms, Kata Containers that take advantage of Intel® Virtualization Technology (Intel® VT) to secure container workloads, performance-tuned libraries, orchestration, and the TensorFlow* Deep Learning and Machine Learning framework. The stack is built for containerization and is deployed as a Docker* image, on either multi-node clusters or single-node bare-metal infrastructure.
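As a rough sketch of the container-based workflow this describes, a single-node trial might look like the following. The image name below is a placeholder for illustration, not a confirmed release artifact; the published image names are listed on the Clear Linux Stacks page.

```shell
# Pull the reference-stack image (placeholder name -- substitute the
# image published on the Clear Linux Stacks page).
docker pull clearlinux/stacks-dlrs-mkl

# Run an interactive container on a single bare-metal node and check
# that the bundled TensorFlow build imports and reports its version.
docker run -it clearlinux/stacks-dlrs-mkl \
    python -c 'import tensorflow as tf; print(tf.__version__)'
```

For multi-node clusters, the same image would be scheduled through the orchestration layer mentioned above rather than launched by hand, with Kata Containers isolating each workload in a lightweight VM via Intel VT.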

Intel is releasing this reference stack early in the development cycle to gather community feedback, and we look forward to seeing how you use it: as a blueprint, by taking individual components, or by deploying the full stack.

To learn more, download the Intel Deep Learning Reference Stack code, or contribute feedback, visit our Clear Linux Stacks page. To get involved with the Clear Linux community, join our developer mailing list or the #clearlinux channel on IRC.


Imad Sousou, Corporate Vice President and General Manager, Open Source Technology Center

Product and Performance Information


Performance varies by use, configuration and other factors.