Take advantage of the advanced memory architecture and Intel® Deep Learning Boost instructions to improve deep learning performance on the latest generation of Intel® Xeon® Scalable processors.
Use the Intel® Distribution of OpenVINO™ toolkit to streamline the deployment of high-performance deep learning inference, and enable Intel Deep Learning Boost for vision and deep learning applications.
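As a minimal sketch of that deployment flow, the snippet below loads a pretrained model into the CPU plugin with the legacy Inference Engine Python API and runs one inference. The file paths, input name, and zero-filled input are placeholders; a real application would pass preprocessed image data, and newer OpenVINO releases expose a different (`openvino.runtime`) API.

```python
# Hedged sketch of OpenVINO CPU inference (legacy Inference Engine API).
# "model.xml"/"model.bin" are placeholder paths to an IR produced by the
# Model Optimizer; input name and shape depend on your network.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# Feed a dummy tensor matching the network's declared input shape.
input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape
result = exec_net.infer(inputs={input_name: np.zeros(shape, dtype=np.float32)})
print({name: blob.shape for name, blob in result.items()})
```

On Int8-capable Xeon processors, the CPU plugin can dispatch quantized kernels that use the Intel Deep Learning Boost (VNNI) instructions automatically.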
Learn more about Intel® Math Kernel Library for Deep Neural Networks—the library at the heart of the deep learning optimizations in the major AI frameworks.
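A quick way to see the library at work: MKL-DNN has a verbose mode that logs every primitive it executes, which confirms that a framework build is actually dispatching to it (`your_script.py` below is a placeholder for any MKL-DNN-backed workload).

```shell
# Enable MKL-DNN's built-in verbose logging, then run any workload.
export MKLDNN_VERBOSE=1
python your_script.py   # placeholder: any script using an MKL-DNN-backed framework
# Lines beginning with "mkldnn_verbose," report the primitive type,
# data layout, and execution time of each MKL-DNN call.
```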
Set up and run simplified bridge code that links TensorFlow*-based projects to preoptimized nGraph back ends for significantly better performance.
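A sketch of that setup, using the package and module names from the nGraph-TensorFlow bridge project (pin versions against the project's compatibility table for your TensorFlow release):

```shell
# Install the prebuilt bridge wheel alongside an existing TensorFlow install.
pip install ngraph-tensorflow-bridge
# Importing the bridge module is enough to reroute supported TensorFlow
# graphs through the nGraph back end -- no other code changes are required.
python -c "import tensorflow as tf; import ngraph_bridge; print(ngraph_bridge.__version__)"
```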
Learn how to apply reduced-precision computation in BigDL and how BigDL uses Intel® Math Kernel Library for Deep Neural Networks to accelerate performance.
Find out how to accelerate MXNet* with Intel® Math Kernel Library for Deep Neural Networks by installing a CPU-optimized build and testing it with basic examples.
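The install-and-test flow can be sketched as below, assuming the `mxnet-mkl` wheel (the MKL-DNN-enabled MXNet build published on PyPI); the timing figure will vary by machine, so no specific speedup is implied:

```shell
# Replace the stock MXNet wheel with the MKL-DNN-enabled build.
pip uninstall -y mxnet
pip install mxnet-mkl
# Basic smoke test: time one convolution. On Intel CPUs the MKL-DNN
# build should be noticeably faster than the stock wheel.
python -c "
import time, mxnet as mx
x = mx.nd.random.uniform(shape=(1, 3, 224, 224))
w = mx.nd.random.uniform(shape=(64, 3, 3, 3))
mx.nd.waitall()                      # finish async init before timing
t = time.time()
y = mx.nd.Convolution(x, w, kernel=(3, 3), num_filter=64, no_bias=True)
mx.nd.waitall()                      # block until the kernel completes
print('conv time: %.4f s' % (time.time() - t))
"
```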