Overview of Intel® Distribution of OpenVINO™ Toolkit
AI inference applies the capabilities learned during training of a neural network to yield results on new data. The Intel® Distribution of OpenVINO™ toolkit enables you to optimize, tune, and run comprehensive AI inference using the included Model Optimizer and the runtime and development tools.
Run the trained model through the Model Optimizer to convert the model to an Intermediate Representation (IR), which is represented in a pair of files (.xml and .bin). These files describe the network topology and contain the weights and biases binary data of the model.
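To illustrate the split between the two IR files, the sketch below parses a hypothetical, heavily simplified IR-style .xml fragment (hand-written for this example; real files produced by the Model Optimizer contain many more attributes and layer parameters). The .xml carries the topology as layers and edges, while the weights and biases referenced by the layers live in the companion .bin file:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified IR-style .xml fragment for illustration only.
ir_xml = """
<net name="tiny_net" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter"/>
    <layer id="1" name="conv1" type="Convolution"/>
    <layer id="2" name="output" type="Result"/>
  </layers>
  <edges>
    <edge from-layer="0" from-port="0" to-layer="1" to-port="0"/>
    <edge from-layer="1" from-port="1" to-layer="2" to-port="0"/>
  </edges>
</net>
"""

net = ET.fromstring(ir_xml)
# The topology (layers and how they connect) lives in the .xml;
# the binary weight and bias data live separately in the .bin file.
layers = [(layer.get("name"), layer.get("type")) for layer in net.iter("layer")]
print(layers)  # [('input', 'Parameter'), ('conv1', 'Convolution'), ('output', 'Result')]
```

Because the .xml is plain XML, the topology can be inspected with any XML parser, independently of the weights.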
See how developers use the Intel Distribution of OpenVINO toolkit on multiple Intel® architectures to enable new and enhanced use cases across industries, including manufacturing, health and life sciences, retail, security, and more.
This release provides functional bug fixes and minor capability changes for the previous 2021.4 Long-Term Support (LTS) release, enabling developers to deploy applications powered by the Intel Distribution of OpenVINO toolkit with confidence. To learn more about long-term support and maintenance, see the Long-Term Support (LTS) policy.
Note: A new LTS version is released every year and is supported for two years (one year of bug fixes and two years of security patches). The LTS version is intended for developers taking the OpenVINO toolkit to production. For developers who prefer the latest features and leading performance, standard releases are recommended; they continue to be available three to four times a year.
Learn more about included components in the Release Notes. This update includes:
Specific fixes to known issues with the Model Optimizer, the Inference Engine (the Python* API and the CPU, GPU, Intel® Movidius™ Myriad™ VPU, HDDL, and Intel® Gaussian & Neural Accelerator plug-ins), the Deep Learning (DL) Streamer, and the Post-Training Optimization Tool
Minor capability changes and bug fixes to the Open Model Zoo
New Jupyter* Notebook tutorials simplify how to get started: