Overview of Intel® Distribution of OpenVINO™ Toolkit
AI inference applies the capabilities learned during training of a neural network to yield results. The Intel® Distribution of OpenVINO™ toolkit enables you to optimize, tune, and run comprehensive AI inference using its included model optimizer, runtime, and development tools.
Run the trained model through the Model Optimizer to convert it to an Intermediate Representation (IR), a pair of files (.xml and .bin) that describe the network topology and contain the model's weights and biases as binary data.
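As an illustrative sketch, a conversion with the Model Optimizer might look like the following. The model file name and output directory here are hypothetical; the `mo` entry point is available after installing the openvino-dev package:

```shell
# Convert a trained model (hypothetical ONNX file) to OpenVINO IR.
# This produces model.xml (network topology) and model.bin (weights and biases)
# in the specified output directory.
mo --input_model model.onnx --output_dir ir_output
```

The resulting .xml/.bin pair can then be loaded by the Inference Engine runtime for deployment.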
See how developers use the Intel Distribution of OpenVINO toolkit on multiple Intel® architectures to enable new and enhanced use cases across industries, including manufacturing, health and life sciences, retail, security, and more.
Introduces Conditional Compilation, which enables a significant reduction in the binary footprint of the runtime components for particular models (available only in the open-source distribution).
Introduces support for the 3rd generation Intel® Xeon® Scalable platform (code-named Ice Lake), which delivers advanced performance, security, efficiency, and built-in AI acceleration to handle unique workloads and more powerful AI.
Adds new pretrained models and support for public models to streamline development.
Public models include aclnet-int8 (sound_classification), deblurgan-v2 (image_processing), fastseg-small and fastseg-large (semantic_segmentation), and more.
Developer tools are now available as Python wheel packages using pip install openvino-dev for Windows, macOS, and Linux for more efficient package installation, upgrade, and management.
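The wheel-based installation described above can be performed as follows (a minimal sketch; a Python virtual environment is assumed but not required):

```shell
# Install the OpenVINO developer tools as a Python wheel package
# (supported on Windows, macOS, and Linux).
pip install openvino-dev

# Upgrade to a newer release later with:
pip install --upgrade openvino-dev
```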
Long-Term Support (LTS) is a new annual release type that provides longer-term maintenance and support with a focus on stability and compatibility. This release type allows you to deploy applications powered by the Intel Distribution of OpenVINO toolkit with more confidence. To get the latest features and leading performance, standard releases will continue to be made available three to four times a year.
Provides bug fixes for the previous 2020.3.1 LTS release. Read more about the support details.
Includes security and functionality bug fixes, and minor capability changes.
Includes bug fixes for the Inference Engine MYRIAD, HDDL, and FPGA plugins, and for the Deep Learning Workbench.
Note: Intel Distribution of OpenVINO toolkit 2020.3.X LTS releases will continue to support Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA.