Deep Learning For Computer Vision

Optimize deep learning solutions across multiple Intel® platforms—CPU, GPU, FPGA, and VPU—and accelerate convolutional neural network (CNN) workloads.

Intel® Deep Learning Deployment Toolkit

This toolkit allows developers to deploy pretrained deep learning models through a high-level C++ inference engine API integrated with application logic. It supports multiple Intel® platforms and is included in the Intel® Distribution of OpenVINO™ toolkit.

This toolkit comprises the following two components:

Model Optimizer

This Python*-based command line tool imports trained models from popular deep learning frameworks such as Caffe*, TensorFlow*, and Apache MXNet*, as well as the Open Neural Network Exchange (ONNX) format.

  • Run on both Windows* and Linux*
  • Perform analysis and adjustments for optimal execution on endpoint target devices using static, trained models
  • Serialize the adjusted model into Intel's intermediate representation (IR) format
  • Support over 100 public models for Caffe, TensorFlow, MXNet, and ONNX
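As a sketch, converting a frozen TensorFlow model to IR might look like the following. The model filename and output directory are placeholders, and exact flags can vary between toolkit releases:

```shell
# Convert a frozen TensorFlow* model into IR files (model.xml + model.bin).
# "frozen_model.pb" and "ir_output/" are placeholder names.
python3 mo.py \
    --input_model frozen_model.pb \
    --data_type FP16 \
    --output_dir ir_output/
```

The resulting .xml file describes the network topology and the .bin file holds the weights; both are consumed by the Inference Engine described below.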

Standard frameworks are not required when generating IR files for models consisting of standard layers. For custom layers in original models, the Model Optimizer provides a flexible extension mechanism.

Inference Engine

This execution engine uses a common API to deliver inference solutions on the platform of your choice: CPU, GPU, VPU, or FPGA.

  • Execute different layers on different targets (for example, a GPU and selected layers on a CPU)
  • Implement custom layers on a CPU while executing the remaining topology on a GPU—without having to rewrite the custom layers
  • Optimize execution (computational graph analysis, scheduling, and model compression) for target hardware with an embedded-friendly scoring solution
  • Take advantage of new asynchronous execution to improve frame-rate performance while limiting wasted cycles
  • Use a convenient C++ API to work on IR files and optimize inference
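A minimal sketch of the flow above, assuming the classic Inference Engine C++ API; the file names are placeholders, and the "HETERO:GPU,CPU" device string is one way to illustrate splitting layers across a GPU with CPU fallback:

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;

    // Read the IR produced by the Model Optimizer (placeholder file names).
    auto network = core.ReadNetwork("model.xml", "model.bin");

    // Load onto a heterogeneous target: supported layers run on the GPU,
    // with the remaining layers falling back to the CPU.
    auto executable = core.LoadNetwork(network, "HETERO:GPU,CPU");

    // Asynchronous inference: start the request, overlap other work,
    // then block until the result is ready.
    auto request = executable.CreateInferRequest();
    request.StartAsync();
    request.Wait(InferenceEngine::InferRequest::WaitMode::RESULT_READY);

    return 0;
}
```

Swapping the device string (for example, "CPU", "GPU", or "MYRIAD") retargets the same code without changes to the application logic.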

Inference Support

In addition to supporting processors with and without integrated graphics, this toolkit enables acceleration on an Intel® Programmable Acceleration Card and an Intel® Movidius™ Vision Processing Unit. For the best experience, use the toolkit with the Intel® Movidius™ Neural Compute Stick and cards based on the Intel® Arria® 10 GX FPGA.

Discover the Capabilities

Hardware Acceleration

Harness the performance of Intel®-based accelerators: CPUs, GPUs, FPGAs, VPUs, and IPUs.