Deep Learning Inference

After a neural network is trained, it is deployed to run inference: to classify, recognize, and process new inputs. Intel® platforms let you develop and deploy applications quickly with low, deterministic latency for real-time performance, and simplify the acceleration of convolutional neural networks (CNNs) for applications in the data center and at the edge.

Model Quantization for Production with Intel® Deep Learning Boost

Discover how the combination of model quantization and Vector Neural Network Instructions (VNNI) can be used in production AI inference applications. A brief code sketch follows.
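As an illustration only, the sketch below uses PyTorch's post-training dynamic quantization to convert a model's linear layers to INT8; on CPUs with Intel® Deep Learning Boost, the underlying INT8 kernels can dispatch to VNNI. The model, shapes, and names here are placeholders, not the production workflow referenced above.

    import torch
    import torch.nn as nn

    # Placeholder model; in practice this is your trained network.
    model = nn.Sequential(
        nn.Linear(128, 256),
        nn.ReLU(),
        nn.Linear(256, 10),
    )
    model.eval()

    # Post-training dynamic quantization: weights become INT8, and
    # activations are quantized on the fly at inference time.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    # Run inference with the quantized model.
    x = torch.randn(1, 128)
    with torch.no_grad():
        print(quantized(x).shape)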


Intel® FPGAs for the Data Center and at the Edge

Field programmable gate arrays (FPGAs) are customizable integrated circuits containing logic elements, DSP blocks, on-die memory, and flexible I/O. These building blocks enable developers to implement any number of functions directly in hardware.


Machine Learning on Intel® FPGAs

Take advantage of the flexibility of FPGAs to add in-line machine learning capability to any custom interface, with low, deterministic latency for real-time inference.

Accelerate Deep Learning Development at the Edge

Free your machine learning projects from the cloud using the Intel® Neural Compute Stick 2 (Intel® NCS2). Discover how to profile, tune, compile, and deploy your neural networks with the OpenVINO™ toolkit.
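As a rough sketch (not the tutorial referenced above), the following uses the OpenVINO™ Runtime Python API to load an Intermediate Representation and run it on an Intel® NCS2 via the MYRIAD device plugin. The model path and input shape are placeholders, and API names and device support vary across OpenVINO versions.

    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")   # placeholder path to an IR model

    # "MYRIAD" targets the Intel NCS2; use "CPU" to test without the stick.
    compiled = core.compile_model(model, device_name="MYRIAD")

    # Dummy input; the shape is a placeholder for the model's real input.
    frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

    # Synchronous inference; read the first output tensor.
    result = compiled([frame])
    print(result[compiled.output(0)].shape)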



Frameworks in the Data Center

Design and deploy models on familiar Intel-based architecture that offers competitive performance and cost efficiency for most AI frameworks.

Inference at the Edge

OpenVINO™ Toolkit

This toolkit includes the Deep Learning Deployment Toolkit Beta, which contains everything you need to optimize your models for inference and heterogeneous execution across Intel® accelerators.
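A minimal, hypothetical sketch of heterogeneous execution with the OpenVINO™ Runtime Python API: the HETERO virtual device splits a network across accelerators, falling back to the next device in the priority list for layers the first device does not support. The model path and device list are placeholders.

    from openvino.runtime import Core

    core = Core()
    print(core.available_devices)           # devices OpenVINO detects on this machine

    model = core.read_model("model.xml")    # placeholder path to an IR model

    # HETERO splits the graph: layers unsupported by the first device fall
    # back to the next device in the priority list.
    compiled = core.compile_model(model, device_name="HETERO:GPU,CPU")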