Deep Learning Inference

Accelerate and deploy neural network models across Intel® platforms with a built-in model optimizer for pretrained models and an inference engine runtime for hardware-specific acceleration.

Intel® Deep Learning Deployment Toolkit

This toolkit allows developers to deploy pretrained deep learning models through a high-level C++ or Python* inference engine API integrated with application logic. It supports multiple Intel® platforms and is included in the Intel® Distribution of OpenVINO™ toolkit.
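As a rough illustration of that flow, the sketch below wraps the OpenVINO Inference Engine Python API (the `openvino.inference_engine.IECore` interface from 2020-era releases) in a small helper. The model paths, device name, and input array are hypothetical placeholders; consult the API reference for your toolkit version, as the Python API has changed across releases.

```python
def run_inference(model_xml, model_bin, image, device="CPU"):
    """Load an IR model and run one synchronous inference (sketch).

    model_xml / model_bin: paths to a model converted by the Model
    Optimizer (hypothetical placeholders). image: a preprocessed NCHW
    float32 array matching the network's input shape.
    """
    # Imported inside the function so the sketch can be read (and the
    # helper defined) without OpenVINO installed.
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)
    input_blob = next(iter(net.input_info))
    output_blob = next(iter(net.outputs))

    # Compile the network for the target device (e.g. "CPU", "GPU", "MYRIAD").
    exec_net = ie.load_network(network=net, device_name=device)
    result = exec_net.infer(inputs={input_blob: image})
    return result[output_blob]
```

The same calls have direct C++ equivalents (`InferenceEngine::Core::ReadNetwork`, `LoadNetwork`, and a synchronous `Infer`), so application logic can stay identical across both language bindings.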

How It Works

A typical computer vision pipeline with deep learning may consist of regular vision functions (like image preprocessing) and a convolutional neural network (CNN). The CNN graphs are accelerated on the FPGA add-on card or Intel® Movidius™ Neural Compute Sticks (NCS), while the rest of the vision pipeline runs on a host processor.
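The host-side "regular vision functions" stage can be sketched with plain NumPy: resize a frame to the network's input resolution, normalize it, and reorder it into the NCHW layout most CNNs expect before handing it to the accelerator. The 224x224 size, mean, and scale values below are illustrative assumptions, not values mandated by the toolkit.

```python
import numpy as np

def preprocess(frame, size=(224, 224), mean=127.5, scale=1 / 127.5):
    """Host-side preprocessing before CNN offload (illustrative values)."""
    h, w = frame.shape[:2]
    # Nearest-neighbour resize to the network's input resolution.
    ys = np.arange(size[0]) * h // size[0]
    xs = np.arange(size[1]) * w // size[1]
    resized = frame[ys][:, xs]
    # Normalize pixels, then convert HWC -> NCHW and add a batch dimension.
    blob = ((resized - mean) * scale).transpose(2, 0, 1)[None]
    return blob.astype(np.float32)
```

Everything above runs on the host CPU; only the resulting blob is sent to the CNN running on the FPGA or NCS.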

Deep Learning Workbench
This web-based graphical environment allows users to visualize the simulated performance of deep learning models and datasets on various Intel® architecture configurations (CPU, GPU, VPU). It provides key performance metrics such as latency, throughput, and per-layer performance counters for the selected neural network. The tool also simplifies the configuration of inference experiments to detect optimal performance settings.

  • Run single versus multiple inferences.
  • Calibrate models to reduce the precision of certain layers from FP32 to INT8.
  • Automatically determine the optimized algorithm based on convolution layer parameters and hardware configuration with the Winograd Algorithmic Tuner.
  • Run experiments on known datasets and determine model accuracy after parameter tuning or calibration using the Accuracy Checker.
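The FP32-to-INT8 calibration mentioned above rests on linear quantization: map the observed floating-point range of a tensor onto the 8-bit integer range and keep a per-tensor scale for dequantization. The toolkit's actual calibration is more sophisticated (it uses a calibration dataset and accuracy constraints), but the core arithmetic can be sketched in a few lines of symmetric quantization:

```python
def quantize_int8(values):
    """Symmetric linear quantization of FP32 values to the range [-127, 127].

    Returns the quantized integers and the scale needed to dequantize.
    A simplified sketch; real calibration chooses ranges per layer from
    statistics gathered on a calibration dataset.
    """
    scale = max(abs(v) for v in values) / 127.0
    quantized = [max(-127, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate FP32 values from INT8 codes."""
    return [q * scale for q in quantized]
```

Layers whose accuracy degrades too much under this mapping can be left in FP32, which is why the Workbench calibrates "certain model layers" rather than the whole network.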

Deep Learning Workbench Developer Guide

Discover the Capabilities

Traditional Computer Vision

Develop and optimize classic computer vision applications built with the OpenCV library and other industry tools.

Hardware Acceleration

Harness the performance of Intel®-based accelerators: CPUs, iGPUs, FPGAs, VPUs, Intel® Gaussian & Neural Accelerators, and IPUs.

Product and Performance Information

1. Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804