Intel® oneAPI AI Analytics Toolkit

Achieve End-to-End Performance for AI Workloads

In the News

CERN Uses Intel® Deep Learning Boost & oneAPI to Juice Inference without Accuracy Loss

Researchers at CERN and Intel showcase promising results with low-precision optimizations that exploit heterogeneous operations on CPUs for convolutional generative adversarial networks (GANs).

Learn More

LAIKA Studios & Intel Join Forces to Expand the Possibilities in Stop-Motion Film Making

See how LAIKA Studios and Intel’s Applied Machine Learning team used tools from the Intel oneAPI AI Analytics Toolkit to realize the limitless scope of stop-motion animation.

Learn More

Accelerate PyTorch* with oneAPI Libraries

Harnessing Intel® Deep Learning Boost and oneAPI libraries, Intel and Facebook collaboratively improved PyTorch CPU performance across multiple training and inference workloads.

PyTorch with oneDNN

PyTorch with oneCCL

MLPerf Results for Deep Learning Training and Inference

Reflecting the broad range of AI workloads, Intel submitted training and inference results for MLPerf v0.7. The results in each use case demonstrated Intel's continued progress in establishing Intel® Xeon® Scalable processors as a universal platform for CPU-based machine learning training and inference.

MLPerf Training | MLPerf Inference

An Open Road to Swift DataFrame Scaling

This podcast looks at the challenges of data preprocessing, especially time-consuming data-wrangling tasks. It discusses how Intel and OmniSci are collaborating to provide integrated solutions that improve DataFrame scaling.


Accelerate Kaggle Challenges Using Intel oneAPI AI Analytics Toolkit

Intel's optimized version of scikit-learn improves the performance of real-world workloads with little to no code modification, as demonstrated in notebooks published on Kaggle. Most of the workloads show promising double-digit speedups, with some as high as 227 times over the baseline.
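The "little to no code modification" typically amounts to patching scikit-learn before importing any estimators. A minimal sketch, assuming the scikit-learn-intelex package is installed (the dataset and KMeans parameters here are illustrative, not taken from the Kaggle notebooks):

```python
# Enable Intel's scikit-learn optimizations by patching before importing
# estimators; with the try/except, the sketch falls back to stock
# scikit-learn when scikit-learn-intelex is not installed.
import numpy as np

try:
    from sklearnex import patch_sklearn
    patch_sklearn()  # swaps in optimized implementations of supported estimators
except ImportError:
    pass  # stock scikit-learn is used instead

from sklearn.cluster import KMeans  # import estimators *after* patching

X = np.random.rand(1000, 2)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
```

The rest of the script is unchanged scikit-learn code, which is why existing workloads can benefit without a rewrite.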

Learn More


Optimize Performance of Gradient Boost Algorithms

Intel has been continually improving training and inference performance for XGBoost. The following blogs compare XGBoost 1.1 training performance on CPUs against third-party GPUs, and show how to speed up inference with minimal code changes and no loss of quality.

Training | Inference

Intel AI-Based Solution Helps Accelerate Diagnosis of Lung Diseases

ACCRAD developed an AI-powered solution called CheXRad to rapidly detect COVID-19 and 14 other thoracic diseases in clinics and hospitals across Africa. With Intel's help, they were able to train, optimize, and deploy their models in less time and at a lower operational cost than available alternatives.

Learn More

Product and Performance Information


Performance varies by use, configuration and other factors. Learn more at