Deep Learning Inference

Accelerate and deploy neural network models across Intel® platforms with a built-in model optimizer for pretrained models and an inference engine runtime for hardware-specific acceleration.

Intel® Deep Learning Deployment Toolkit

This toolkit allows developers to deploy pretrained deep learning models through a high-level C++ or Python* inference engine API integrated with application logic. It supports multiple Intel® platforms and is included in the Intel® Distribution of OpenVINO™ toolkit.
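The deployment flow this describes is: convert a pretrained model into OpenVINO's IR format with the Model Optimizer, then load the IR through the Inference Engine API. A minimal sketch using the classic `openvino.inference_engine` Python API; the model paths and input image are placeholders, not part of this document:

```python
# Sketch of the classic Inference Engine flow (IECore API).
# Paths below are placeholders for an IR model produced by the Model Optimizer.

def load_and_infer(model_xml, weights_bin, image, device="CPU"):
    """Read an IR model, load it onto a device plugin, run one inference."""
    # Import deferred so the sketch can be defined without OpenVINO installed.
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model=model_xml, weights=weights_bin)
    input_name = next(iter(net.input_info))          # first (often only) input
    exec_net = ie.load_network(network=net, device_name=device)
    return exec_net.infer(inputs={input_name: image})
```

The `device` string selects the hardware plugin (for example "CPU", "GPU", or "MYRIAD"); the same application code runs unchanged across Intel® platforms.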

How It Works

A typical deep learning computer vision pipeline consists of conventional vision functions (such as image preprocessing) and a convolutional neural network (CNN). The CNN graphs are accelerated on an FPGA add-on card or Intel® Movidius™ Neural Compute Stick (NCS), while the rest of the vision pipeline runs on the host processor.
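In code, the split described above comes down to which device string the CNN is loaded onto. The plugin names below ("CPU", "MYRIAD", "HETERO:FPGA,CPU") are real Inference Engine device identifiers; the selection logic itself is an illustrative sketch, not part of the toolkit:

```python
# Sketch: picking the Inference Engine device string for the CNN stage,
# while preprocessing stays on the host CPU regardless.

def target_device(accelerator=None):
    """Map an available accelerator to an Inference Engine device string."""
    if accelerator == "fpga":
        # HETERO falls back to the CPU for layers the FPGA plugin cannot run
        return "HETERO:FPGA,CPU"
    if accelerator == "ncs":
        # Intel Movidius NCS / NCS 2 are driven through the MYRIAD plugin
        return "MYRIAD"
    return "CPU"
```

The resulting string is what an application would pass as the device name when loading the network, so the same pipeline code can retarget hardware with a one-line change.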

Inference Engine Developer Guide

Intel® FPGA Support

Intel Movidius Neural Compute Stick Quick Start Guide

Intel® Neural Compute Stick 2 Quick Start Guide

Deep Learning Workbench
This web-based graphical environment lets users simulate and visualize the performance of deep learning models and datasets on various Intel® architecture configurations (CPU, GPU, VPU). It reports key performance metrics such as latency, throughput, and per-layer performance counters for the selected neural network. The tool also simplifies configuring inference experiments to find optimal performance settings.

  • Run single versus multiple inferences. 
  • Calibrate to reduce precision of certain model layers from FP32 to Int8.
  • Automatically determine the optimized algorithm based on convolution layer parameters and hardware configuration with the Winograd Algorithmic Tuner.
  • Run experiments on known data sets and determine accuracy of the model after parameter tuning or calibration using the accuracy checker.
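To make the FP32-to-Int8 calibration bullet concrete, the core idea is to map a layer's observed floating-point range onto the 8-bit integer range via a scale factor. A purely illustrative sketch; the Workbench's actual calibration is accuracy-aware and per-layer, not this simple symmetric scheme:

```python
# Sketch of symmetric FP32 -> Int8 quantization: the largest observed
# magnitude maps to 127, everything else scales proportionally.

def int8_scale(values):
    """Scale factor so the peak magnitude lands on 127."""
    peak = max(abs(v) for v in values)
    return peak / 127.0 if peak else 1.0

def quantize(values, scale):
    """FP32 values -> clamped Int8 codes."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize(qvalues, scale):
    """Int8 codes back to approximate FP32 values."""
    return [q * scale for q in qvalues]
```

The gap between `values` and `dequantize(quantize(values, s), s)` is the quantization error that the accuracy checker mentioned above would measure after calibration.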

Deep Learning Workbench Developer Guide

Discover the Capabilities

Traditional Computer Vision

Develop and optimize classic computer vision applications built with the OpenCV library and other industry tools.

Hardware Acceleration

Harness the performance of Intel®-based accelerators: CPUs, iGPUs, FPGAs, VPUs, Intel® Gaussian & Neural Accelerators, and IPUs.

Product and Performance Information

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804