DEEP LEARNING INFERENCE

After a neural network is trained, it is deployed to run inference—to classify, recognize, and process new inputs.

Develop and deploy your application quickly, with low, deterministic latency, on a real-time performance platform. Simplify the acceleration of convolutional neural networks (CNNs) for applications in the data center and at the edge.

Accelerate Deep Learning Development at the Edge

Free your machine learning projects from the cloud using the Movidius™ Neural Compute Stick (NCS). Learn how to profile, tune, compile, and deploy your neural networks with the Movidius™ Neural Compute SDK.
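
As a rough sketch of that workflow: assuming a Caffe or TensorFlow model has already been compiled to a binary graph file with the SDK's mvNCCompile tool, the NCSDK v1 Python API (mvnc) can run inference on the stick along these lines. The graph file name and the input array below are placeholders.

    import numpy as np
    from mvnc import mvncapi as mvnc

    # Find and open the first attached Neural Compute Stick.
    devices = mvnc.EnumerateDevices()
    if not devices:
        raise RuntimeError('No Movidius NCS found')
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    # Load a binary graph produced by the SDK's mvNCCompile tool
    # ('graph' is a placeholder file name).
    with open('graph', 'rb') as f:
        graph = device.AllocateGraph(f.read())

    # The stick computes in half precision; 'image' stands in for a
    # preprocessed input tensor matching the network's input shape.
    image = np.zeros((224, 224, 3), dtype=np.float32)
    graph.LoadTensor(image.astype(np.float16), 'user object')
    output, _ = graph.GetResult()
    print('Top class:', int(output.argmax()))

    # Release the graph and close the device.
    graph.DeallocateGraph()
    device.CloseDevice()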

Intel® FPGAs for the Data Center and at the Edge

Field programmable gate arrays (FPGAs) are customizable integrated circuits containing logic elements, DSP blocks, on-die memory, and flexible I/O. These building blocks enable the developer to implement any number of functions directly in the hardware.

Machine Learning on Intel® FPGAs

Take advantage of the flexibility of FPGAs to add in-line machine learning capability to any custom interface for the lowest deterministic latency and real-time inference.
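
One way this surfaces in software (a sketch, not the only route): the OpenVINO™ toolkit's Inference Engine, described below, exposes Intel FPGAs through a heterogeneous plugin that runs supported layers on the FPGA and falls back to the CPU for the rest. The IR file names here are placeholders for Model Optimizer output, and the API shown is the 2018-era Python interface.

    from openvino.inference_engine import IENetwork, IEPlugin

    # Placeholder IR files produced by the Model Optimizer.
    net = IENetwork(model='model.xml', weights='model.bin')

    # HETERO dispatches supported layers to the FPGA bitstream and
    # falls back to the CPU plugin for everything else.
    plugin = IEPlugin(device='HETERO:FPGA,CPU')
    exec_net = plugin.load(network=net)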

Frameworks in the Data Center

Design and deploy models on familiar Intel-based architecture that offers competitive performance and cost efficiency for most AI frameworks.
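
For example, a minimal CPU inference pass in PyTorch (one framework among many; the pretrained model and dummy batch are just stand-ins):

    import torch
    import torchvision.models as models

    # Load a pretrained CNN and switch to inference mode.
    model = models.resnet50(pretrained=True)
    model.eval()

    # Placeholder batch standing in for preprocessed images (N, C, H, W).
    batch = torch.randn(1, 3, 224, 224)

    # Disable gradient tracking for inference.
    with torch.no_grad():
        logits = model(batch)
    print(logits.argmax(dim=1))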

Inference at the Edge

OpenVINO™ Toolkit

This toolkit includes the Deep Learning Deployment Toolkit Beta, which contains everything you need to optimize your models for inference and heterogeneous execution across Intel® accelerators.
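
A minimal sketch of that flow, assuming a model already converted to IR with the Model Optimizer (for example, mo.py --input_model frozen_model.pb) and using the 2018-era Inference Engine Python API; file names, shapes, and input data are placeholders.

    import numpy as np
    from openvino.inference_engine import IENetwork, IEPlugin

    # Load the Model Optimizer's IR output (placeholder file names).
    net = IENetwork(model='model.xml', weights='model.bin')
    input_blob = next(iter(net.inputs))
    output_blob = next(iter(net.outputs))

    # Target the CPU plugin; other Intel accelerators use the same API
    # with a different device string.
    plugin = IEPlugin(device='CPU')
    exec_net = plugin.load(network=net)

    # Placeholder input preprocessed to the network's input shape.
    image = np.zeros((1, 3, 224, 224), dtype=np.float32)
    result = exec_net.infer(inputs={input_blob: image})
    print(result[output_blob].argmax())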

Accelerate Deep Learning Inference

Intel® Processor Graphics provides a good solution for accelerating deep learning workloads. Learn about the Deep Learning Deployment Toolkit Beta, which is available to help developers deliver AI-enabled products to market.
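
Under the same assumed API as the sketch above, retargeting inference to Intel® Processor Graphics is a one-line change to the plugin's device string:

    # Same Inference Engine flow as above, dispatched to the
    # integrated GPU instead of the CPU.
    plugin = IEPlugin(device='GPU')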