Intel understands the challenges that come with creating and deploying applications for deep learning-based workloads. In response, we created an integrated, highly performant, open source Deep Learning Reference Stack, optimized for Intel® Xeon® Scalable processors, and announced the initial release at Intel Architecture Day in December.
Since then, we’ve continued exploring ways to improve stack deployment for AI developers. Today we are releasing an updated Deep Learning Reference Stack that addresses community feedback and offers support for new use cases and workloads. One example comes from hybrid cloud infrastructure solution provider One Convergence, Inc., whose DKube* deep learning platform enables data scientists to focus on their primary tasks without the need for extensive IT expertise.
"DKube delivers a production-grade platform with an intuitive workflow and UI, enables a heterogeneous set of GPU and CPU servers, and supports seamless horizontal scale-out capabilities while providing performance benefits at outstanding cost points. Intel’s Deep Learning Reference Stack allows the DKube platform to maximize performance of data science operations," says One Convergence CEO Prasad Vellanki.
As with the initial Deep Learning Reference Stack release, this version is highly tuned and built for cloud native environments. The update further enables developers to quickly prototype and deploy deep learning workloads to production by reducing the complexity typical of deep learning components. We’ve also introduced the following enhancements, all while maintaining the flexibility for developers to customize their solutions:
To meet the need for increased compute performance, this update adds support for an Intel® platform feature: Intel® Advanced Vector Extensions 512 (Intel® AVX-512). Intel AVX-512 provides instructions that accelerate performance for workloads such as scientific simulations, financial analytics, artificial intelligence, deep learning, 3D modeling and analysis, image and audio/video processing, cryptography, and data compression.
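Because AVX-512 support varies by processor, developers may want to confirm that the underlying hardware exposes these instructions before expecting the accelerated paths to kick in. The sketch below is one illustrative way to do this on Linux by reading the CPU flags from `/proc/cpuinfo`; the helper names (`cpu_flags`, `has_avx512`) are our own for illustration, not part of the Deep Learning Reference Stack.

```python
def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported by the first core (Linux)."""
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    # The line looks like: "flags : fpu vme ... avx512f avx512dq ..."
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass  # Not on Linux, or /proc is unavailable
    return set()

def has_avx512(flags):
    """AVX-512 Foundation ("avx512f") is the baseline subset all AVX-512 CPUs expose."""
    return "avx512f" in flags

if __name__ == "__main__":
    flags = cpu_flags()
    print("AVX-512 available:", has_avx512(flags))
```

On a system without AVX-512, optimized frameworks in the stack typically fall back to older vector instruction sets (such as AVX2) rather than failing outright.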
To learn more, visit Clear Linux* Stacks, where you can download the Deep Learning Reference Stack code, contribute feedback, and join the Clear Linux community by signing up for our developer stacks mailing list.
We look forward to seeing how the community uses the Deep Learning Reference Stack to incorporate deep learning into new and existing applications. Continue to send us your ideas for further enhancements.
Mark Skarpness, vice president of System Software Products and director of Data-Centric System Stacks at Intel.
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804