Discover the Capabilities

High Performance, Deep Learning

Convert and optimize models to achieve high performance for deep-learning inference applications.

Streamlined Development

Facilitate a smoother development process using the included inference tools for low-precision optimization and media processing, along with computer vision libraries and preoptimized kernels.

Write Once, Deploy Anywhere

Deploy the same application across combinations of host processors and accelerators (CPUs, GPUs, VPUs, and FPGAs) and environments: on-premises, on-device, in the browser, or in the cloud.
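
For example, you can ask the runtime which devices it sees on a given host before picking a target. A minimal sketch in Python, assuming the 2021.x OpenVINO Python API (the openvino package) is installed; the device list in the comment is illustrative.

    # List the inference devices visible to the OpenVINO runtime on this host.
    from openvino.inference_engine import IECore

    ie = IECore()
    print(ie.available_devices)  # e.g. ['CPU', 'GPU', 'MYRIAD'], depending on hardware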

How It Works

1. BUILD

Use the Open Model Zoo to find open-source, pretrained, and preoptimized models ready for inference, or use your own deep-learning model.

Open Model Zoo
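
To make the BUILD step concrete, here is a minimal sketch that fetches a pretrained model from the Open Model Zoo. It assumes the zoo's downloader tool is on PATH (named omz_downloader in recent openvino-dev wheels; earlier releases ship it as the downloader.py script), and the model name is just an illustrative public example.

    # Download a pretrained Open Model Zoo model into ./models.
    # The tool name and model name below are assumptions, not fixed requirements.
    import subprocess

    subprocess.run(
        ["omz_downloader", "--name", "face-detection-retail-0004", "--output_dir", "models"],
        check=True,
    )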

2. OPTIMIZE

Run the trained model through the Model Optimizer to convert it to an Intermediate Representation (IR), represented as a pair of files (.xml and .bin): the .xml file describes the network topology, and the .bin file contains the model's weights and biases as binary data.

Model Optimizer Developer Guide
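
As an illustration of the OPTIMIZE step, the conversion can be driven from Python. A minimal sketch, assuming the mo entry point installed by the openvino-dev wheel and a trained model.onnx file as a placeholder input; it writes model.xml and model.bin into the output directory.

    # Convert a trained model to OpenVINO Intermediate Representation (IR).
    # model.onnx is a placeholder; the result is ir/model.xml (topology)
    # and ir/model.bin (weights and biases).
    import subprocess

    subprocess.run(["mo", "--input_model", "model.onnx", "--output_dir", "ir"], check=True)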

3. DEPLOY

Use the Inference Engine to run inference and output results on multiple processors, accelerators, and environments with write-once, deploy-anywhere efficiency.

Inference Engine Developer Guide
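
A minimal sketch of the DEPLOY step using the 2021.x Inference Engine Python API, assuming the IR pair produced above; the zero-filled input stands in for real, preprocessed data. Changing device_name (for example to "GPU" or "MYRIAD") retargets the same code without any other changes.

    # Load the IR and run one inference.
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="ir/model.xml", weights="ir/model.bin")
    input_blob = next(iter(net.input_info))
    output_blob = next(iter(net.outputs))

    # Swap "CPU" for "GPU", "MYRIAD", etc. to retarget without code changes.
    exec_net = ie.load_network(network=net, device_name="CPU")

    data = np.zeros(net.input_info[input_blob].input_data.shape, dtype=np.float32)
    result = exec_net.infer(inputs={input_blob: data})
    print(result[output_blob].shape)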

2021.3

What's New in the 2021.3 Release

  • Introduces Conditional Compilation, which enables a significant reduction in the binary footprint of the runtime components for particular models (available only as open source).
  • Introduces support for the 3rd generation Intel® Xeon® Scalable platform (code-named Ice Lake), which delivers advanced performance, security, efficiency, and built-in AI acceleration to handle unique and more demanding AI workloads.
  • Adds new pretrained models and support for public models to streamline development.
    • Public models include aclnet-int8 (sound_classification), deblurgan-v2 (image_processing), fastseg-small and fastseg-large (semantic_segmentation), and more.
  • Developer tools are now available as Python wheel packages via pip install openvino-dev on Windows, macOS, and Linux, making package installation, upgrade, and management more efficient; a quick verification sketch follows this list.
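
A quick way to confirm the wheel installed correctly is to import the runtime and print its version. A minimal sketch, assuming pip install openvino-dev has completed in the active Python environment.

    # Verify the installation by reporting the Inference Engine version.
    from openvino.inference_engine import get_version

    print(get_version())  # e.g. a 2021.3 build string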

2020.3.1 LTS

What's New in the 2020.3.1 LTS Release

Long-Term Support (LTS) is a new annual release type that provides longer-term maintenance and support with a focus on stability and compatibility. This allows you to deploy applications powered by the Intel Distribution of OpenVINO toolkit with more confidence. To get the latest features and leading performance, standard releases will continue to be made available three to four times a year.

  • Provides bug fixes for the previous 2020.3 LTS release. Read more about the support details.
  • Includes security and functionality bug fixes, and minor capability changes.
  • Includes improved support for 11th generation Intel® Core™ processors (formerly code-named Tiger Lake), which include Intel® Iris® Xe graphics and Intel® DL Boost instructions.

Intel Distribution of OpenVINO toolkit 2020.3.X LTS releases will continue to support Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA.

For questions about next-generation programmable deep-learning solutions based on FPGAs or to get the latest FPGA updates, talk to your sales representative or contact us.

Release Notes

LTS Documentation

Awarded by the Embedded Vision Alliance*

Powered by oneAPI

The productive, smart path that frees accelerated computing from the economic and technical burdens of proprietary alternatives.

Product and Performance Information

1. Performance varies by use, configuration, and other factors. Learn more at www.Intel.com/PerformanceIndex.