Second Generation Intel® Xeon® Scalable Processor with Built-In AI Acceleration

Take advantage of the advanced memory architecture and Intel® Deep Learning Boost instructions to improve deep learning performance on the latest generation of Intel® Xeon® Scalable processors.

Technical Overview

More Efficient Deep Learning Inference

Use the Intel® Distribution of OpenVINO™ toolkit to streamline the development and deployment of high-performance deep learning inference. Enable Intel Deep Learning Boost for computer vision and deep learning applications.
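
As a rough orientation, a minimal inference sketch with the classic OpenVINO Inference Engine Python API might look like the following. The model paths, the input blob name "data", and the dummy batch are placeholders, and exact API names vary across OpenVINO releases.

    import numpy as np
    from openvino.inference_engine import IECore

    # Read an IR model produced by the Model Optimizer (paths are placeholders).
    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")

    # Compile the network for the CPU plugin; on 2nd Gen Intel Xeon Scalable
    # processors, int8 kernels can use Intel Deep Learning Boost (VNNI).
    exec_net = ie.load_network(network=net, device_name="CPU")

    # Run inference on a dummy batch; the input blob name is model-dependent.
    batch = np.zeros((1, 3, 224, 224), dtype=np.float32)
    result = exec_net.infer(inputs={"data": batch})
    print(list(result.keys()))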

Optimize Training

Learn more about the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN), the library at the heart of the deep learning optimizations in the main AI frameworks.
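
As one illustration (not taken from the linked article), frameworks built against Intel MKL-DNN expose threading settings that matter on Xeon processors. The sketch below shows common OpenMP environment variables and the TensorFlow 1.x session options; the thread counts are purely illustrative and should match your core count.

    import os

    # OpenMP/MKL-DNN threading hints commonly recommended for Xeon processors;
    # the values below are illustrative, not tuned recommendations.
    os.environ["OMP_NUM_THREADS"] = "28"
    os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"
    os.environ["KMP_BLOCKTIME"] = "1"

    import tensorflow as tf  # assumes an Intel-optimized TensorFlow 1.x build

    # Pin intra-op parallelism to physical cores and keep inter-op small.
    config = tf.ConfigProto(intra_op_parallelism_threads=28,
                            inter_op_parallelism_threads=2)
    with tf.Session(config=config) as sess:
        pass  # build and train your model here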

Featured Stories

High-Performance TensorFlow* on Intel® Xeon® Processors Using nGraph

Set up and run simplified bridge code that links TensorFlow*-based projects to pre-optimized nGraph back ends for significantly better performance.
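
A minimal sketch of that bridge usage, assuming a TensorFlow 1.x build and the ngraph-tensorflow-bridge package (pip install ngraph-tensorflow-bridge): importing ngraph_bridge is what routes the graph to the nGraph CPU back end.

    import numpy as np
    import tensorflow as tf
    import ngraph_bridge  # activates the nGraph back end for the graph below

    # A tiny graph, y = xW + b, executed through the nGraph CPU back end.
    x = tf.placeholder(tf.float32, shape=(1, 4))
    w = tf.constant(np.random.rand(4, 2).astype(np.float32))
    b = tf.constant(np.random.rand(2).astype(np.float32))
    y = tf.matmul(x, w) + b

    with tf.Session() as sess:
        print(sess.run(y, feed_dict={x: np.ones((1, 4), dtype=np.float32)}))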

BigDL Model Inference with Intel® Deep Learning Boost

Learn how to change model precision in BigDL and how BigDL uses the Intel® Math Kernel Library for Deep Neural Networks to accelerate inference.
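
For orientation only, a hedged sketch of low-precision inference with BigDL's Python API might look like the following. The model paths and input shape are placeholders, and the quantize() call, which converts supported layers to int8, may differ by BigDL release.

    import numpy as np
    from pyspark import SparkContext
    from bigdl.util.common import init_engine, create_spark_conf, Sample
    from bigdl.nn.layer import Model

    # Start Spark with BigDL's recommended configuration and initialize the engine.
    sc = SparkContext(conf=create_spark_conf())
    init_engine()

    # Load a pretrained BigDL model (paths are placeholders).
    model = Model.loadModel("model.bigdl", "model.bin")

    # Convert supported layers to int8 so inference can use MKL-DNN int8 kernels
    # and, on 2nd Gen Intel Xeon Scalable processors, Intel Deep Learning Boost.
    quantized_model = model.quantize()

    # Run inference on an RDD of Samples (shape and label are placeholders).
    rdd = sc.parallelize([Sample.from_ndarray(np.random.rand(3, 224, 224),
                                              np.array([0.0]))])
    predictions = quantized_model.predict(rdd)
    print(predictions.take(1))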

Get Started with Intel® Optimization for Apache MXNet*

Find out how to accelerate MXNet* with the Intel® Math Kernel Library for Deep Neural Networks by installing a CPU-optimized build and testing it with basic examples.
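
As a quick illustration, the MKL-DNN-enabled CPU build can be installed with pip install mxnet-mkl and smoke-tested as shown below; the feature check through mxnet.runtime is available in recent MXNet 1.x releases.

    # Install the CPU-optimized build first:  pip install mxnet-mkl
    import mxnet as mx
    from mxnet import nd
    from mxnet.runtime import Features

    # Confirm the build was compiled with MKL-DNN support.
    print("MKLDNN enabled:", Features().is_enabled("MKLDNN"))

    # A basic example: one convolution on a random NCHW batch, run on the CPU.
    data = nd.random.uniform(shape=(1, 3, 224, 224))
    conv = mx.gluon.nn.Conv2D(channels=64, kernel_size=3)
    conv.initialize()
    out = conv(data)
    print(out.shape)  # (1, 64, 222, 222)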