Transforming the user experience with the Deep Learning Reference Stack

Intel understands the challenges of creating and deploying applications for deep learning workloads. That’s why we developed an integrated Deep Learning Reference Stack, optimized for Intel® Xeon® Scalable processors, which we announced at Intel® Architecture Day last December. Since then, we’ve extended the Deep Learning Reference Stack’s capabilities and released the companion Data Analytics Reference Stack, enabling faster input, storage, and analysis of large data sets and, together, delivering higher performance for AI developers.

Today, we’re proud to announce the next Deep Learning Reference Stack release, incorporating customer feedback and delivering an enhanced user experience with support for expanded use cases.

This latest release provides inference and training capabilities based on second-generation Intel® Xeon® Scalable platforms and features Intel® Deep Learning Boost (Intel® DL Boost). With the stack, users can deploy a pre-trained neural network model to perform speech detection, image classification, object detection, and more.

As with previous releases, this stack is highly tuned for cloud native environments, enabling developers to quickly prototype and deploy deep learning workloads by reducing the complexity typical of deep learning components. We’ve also introduced the following enhancements while maintaining the ability for developers to customize their solutions:

  • TensorFlow* 1.14, an end-to-end open source platform for machine learning (ML) that lets researchers push the state of the art and lets developers easily build and deploy ML-powered applications.
  • OpenVINO™ model server version 2019_R1.1, delivering improved neural network performance on a variety of Intel processors and helping unlock cost-effective, real-time vision applications.
  • Kubeflow Seldon*, an open platform for deploying machine learning models on Kubernetes.
  • JupyterHub*, a multi-user hub that spawns, manages, and proxies multiple instances of the single-user Jupyter notebook server.
  • Deep Learning Compilers (TVM* 0.6), an end-to-end compiler stack.

As we evolve the Deep Learning Reference Stack to improve the user experience, we have identified three real-world use cases to focus on. Today, we are excited to share the first: sentiment analysis. Developers are looking for a pre-built pipeline for end-to-end sentiment analysis that reduces overhead and streamlines data analytics.

Today we are delivering an end-to-end solution that combines the latest Deep Learning Reference Stack and the Data Analytics Reference Stack with cloud technologies, giving developers an enhanced ability to analyze text datasets.
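To illustrate the kind of text-classification step such a pipeline performs, here is a minimal, self-contained sketch of lexicon-based sentiment scoring. It is an illustrative stand-in only, not the stack’s actual pipeline; the word lists, function names, and scoring rule are invented for this example, and a production pipeline would instead use a trained model served through the stack’s components.

```python
# Minimal lexicon-based sentiment scorer -- an illustrative stand-in for
# the text-classification step of a sentiment analysis pipeline.
# The word lists and scoring rule here are invented for this example.
import re
from collections import Counter

POSITIVE = {"good", "great", "excellent", "fast", "love"}
NEGATIVE = {"bad", "poor", "slow", "broken", "hate"}

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def sentiment(text):
    """Classify text as 'positive', 'negative', or 'neutral' by word counts."""
    counts = Counter(tokenize(text))
    score = sum(counts[w] for w in POSITIVE) - sum(counts[w] for w in NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The inference speed is great and I love the new release"))
# -> positive
```

In a real deployment, the scoring function would be replaced by inference against a trained model, with the surrounding tokenization and serving handled by the stack’s components.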

If you are attending the Open Source Summit in San Diego, stop by the Intel booth for a sentiment analysis demo. We plan to unveil additional use cases targeting developer, Cloud Service Provider (CSP), and enterprise needs in the coming weeks.

Visit the Clear Linux* Stacks page to learn more and download the Data Analytics Reference Stack and the Deep Learning Reference Stack code, contribute feedback, or follow the #intel-verticals channel on IRC. As always, we welcome ideas for further enhancements through the stacks mailing list.


Mark Skarpness

Mark Skarpness is vice president in the Intel Architecture, Graphics and Software group and director of data-centric system stacks in System Software Products at Intel Corp. Skarpness leads software engineering for technologies including the Java* runtime, data center management, web platform technologies, and networking and storage. He also leads new software business models and integrated solutions, as well as strategic and co-engineering engagements with Linux Operating System Vendors and cloud service providers.

For more complete information about compiler optimizations, see our Optimization Notice.