Transforming the user experience with the Deep Learning Reference Stack

By Mark L Skarpness, Published: 08/21/2019, Last Updated: 12/23/2019

Intel understands the challenges of creating and deploying applications for deep learning workloads. That’s why we developed an integrated Deep Learning Reference Stack, optimized for Intel® Xeon® Scalable processors and announced at Intel Architecture Day last December. Since then, we’ve extended the Deep Learning Reference Stack’s capabilities and released the companion Data Analytics Reference Stack, enabling faster input, storage, and analysis of large data sets and, together, delivering higher performance for AI developers.

Today, we’re proud to announce the next Deep Learning Reference Stack release, incorporating customer feedback and delivering an enhanced user experience with support for expanded use cases.

This latest release provides inference and training capabilities based on second-generation Intel® Xeon® Scalable platforms and features Intel® Deep Learning Boost (Intel® DL Boost). With the stack, users can deploy a pre-trained neural network model to perform speech detection, image classification, object detection, and more.

As with previous releases, this stack is highly tuned for cloud native environments, enabling developers to quickly prototype and deploy deep learning workloads by reducing the complexity typical of deep learning components. We’ve also introduced the following enhancements while preserving developers’ ability to customize their solutions:

  • TensorFlow* 1.14, an end-to-end open source platform for machine learning (ML) that lets researchers push the state of the art and lets developers easily build and deploy ML-powered applications.
  • OpenVINO™ model server version 2019_R1.1, delivering improved neural network performance on a variety of Intel® processors, helping unlock cost-effective, real-time vision applications.
  • Kubeflow Seldon*, an open platform for deploying machine learning models on Kubernetes.
  • JupyterHub, a multi-user hub that spawns, manages, and proxies multiple instances of the single-user Jupyter* notebook server.
  • Deep Learning Compilers (TVM* 0.6), an end-to-end compiler stack for optimizing and deploying deep learning models.
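The stack is distributed as container images, so a quick way to try these components is to pull an image and confirm the bundled TensorFlow build. A minimal sketch follows; the image name is an assumption based on the Clear Linux* Docker Hub naming convention, so check the Clear Linux Stacks page for the current images and tags:

```shell
# Pull the Intel MKL-optimized flavor of the Deep Learning Reference Stack.
# (Image name and tag are assumptions; see the Clear Linux* Stacks page.)
docker pull clearlinux/stacks-dlrs-mkl

# Start a throwaway container and print the bundled TensorFlow version.
docker run --rm clearlinux/stacks-dlrs-mkl \
    python -c 'import tensorflow as tf; print(tf.__version__)'
```

Because everything ships inside the image, no host-side Python or framework installation is needed beyond a working Docker* setup.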

As we evolve the Deep Learning Reference Stack to improve the user experience, we have identified three real-world use cases to focus on. Today, we are excited to share the first: sentiment analysis. Specifically, developers want a pre-built pipeline for an end-to-end implementation of sentiment analysis that reduces overhead and streamlines data analytics.

Today we are delivering an end-to-end solution that combines the latest Deep Learning Reference Stack and the Data Analytics Reference Stack with cloud technologies, giving developers an enhanced ability to analyze text datasets.
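To make the shape of such a pipeline concrete, here is a framework-free sketch of the stages an end-to-end sentiment-analysis solution automates: tokenize, vectorize, train, predict. The Deep Learning Reference Stack would use its bundled TensorFlow* for the model; this example substitutes a tiny logistic-regression classifier on an illustrative toy dataset so it stays self-contained (the dataset, vocabulary, and hyperparameters are assumptions for illustration only):

```python
# Framework-free sentiment-analysis sketch: tokenize -> vectorize -> train -> predict.
# Toy data and a hand-rolled logistic regression stand in for the TensorFlow
# model the Deep Learning Reference Stack would actually provide.
import math
import re

TRAIN = [
    ("a wonderful and engaging film", 1),
    ("truly great performances throughout", 1),
    ("I loved every minute of it", 1),
    ("a dull and boring mess", 0),
    ("terrible pacing and a weak plot", 0),
    ("I hated the ending", 0),
]

def tokenize(text):
    # Lowercase and split on word characters.
    return re.findall(r"[a-z']+", text.lower())

# Build a vocabulary from the training texts.
vocab = sorted({tok for text, _ in TRAIN for tok in tokenize(text)})
index = {tok: i for i, tok in enumerate(vocab)}

def vectorize(text):
    # Bag-of-words term counts over the training vocabulary.
    vec = [0.0] * len(vocab)
    for tok in tokenize(text):
        if tok in index:
            vec[index[tok]] += 1.0
    return vec

# Train logistic regression with plain stochastic gradient descent.
weights = [0.0] * len(vocab)
bias = 0.0
lr = 0.5
for _ in range(200):
    for text, label in TRAIN:
        x = vectorize(text)
        z = sum(w * xi for w, xi in zip(weights, x)) + bias
        pred = 1.0 / (1.0 + math.exp(-z))
        err = pred - label
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

def predict(text):
    # Probability that the text is positive.
    x = vectorize(text)
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(predict("a wonderful film") > 0.5)   # positive review
print(predict("a boring mess") < 0.5)      # negative review
```

In the real stack, the vectorize and train steps would be replaced by a TensorFlow model served from the Deep Learning Reference Stack, with the Data Analytics Reference Stack handling ingestion of the text datasets.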

If you are attending the Open Source Summit in San Diego, stop by the Intel booth for a Sentiment Analysis demo. We plan to unveil additional use cases targeting developer, cloud service provider (CSP), and enterprise needs in the coming weeks.

Visit the Clear Linux* Stacks page to learn more and download the Data Analytics Reference Stack and the Deep Learning Reference Stack code, or contribute feedback. As always, we welcome ideas for further enhancements through the stacks mailing list.

Author

Mark Skarpness

Mark Skarpness is vice president in the Intel Architecture, Graphics and Software group and director of data-centric system stacks in System Software Products at Intel Corp. Skarpness leads software engineering for technologies including the Java* runtime, data center management, web platform technologies, and networking and storage. He also leads new software business models and integrated solutions, as well as strategic and co-engineering engagements with Linux* Operating System Vendors and cloud service providers.

Product and Performance Information

1

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804