Overview of Intel® Distribution of OpenVINO™ Toolkit
AI inference applies the capabilities learned during training of a neural network to yield results on new data. The Intel® Distribution of OpenVINO™ toolkit enables you to optimize, tune, and run AI inference using its included Model Optimizer, runtime, and development tools.
Run the trained model through the Model Optimizer to convert it to an Intermediate Representation (IR), a pair of files (.xml and .bin): the .xml file describes the network topology, and the .bin file contains the binary weights and biases data of the model.
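To make the two-file split concrete, the sketch below parses a hand-written fragment shaped like an IR topology .xml file using only the Python standard library. The fragment and layer names are illustrative assumptions, not actual Model Optimizer output; a real IR .xml is far larger and is paired with a .bin file holding the weights.

```python
import xml.etree.ElementTree as ET

# Hand-written, minimal fragment in the general shape of an IR .xml
# topology (illustrative only; real files come from the Model Optimizer
# and pair with a .bin file that holds the weights and biases).
IR_XML = """\
<net name="toy_net" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter"/>
    <layer id="1" name="conv1" type="Convolution"/>
    <layer id="2" name="output" type="Result"/>
  </layers>
  <edges>
    <edge from-layer="0" from-port="0" to-layer="1" to-port="0"/>
    <edge from-layer="1" from-port="1" to-layer="2" to-port="0"/>
  </edges>
</net>
"""

def list_layers(xml_text):
    """Return (name, type) pairs for every layer in the topology file."""
    root = ET.fromstring(xml_text)
    return [(layer.get("name"), layer.get("type"))
            for layer in root.find("layers")]

print(list_layers(IR_XML))
# The topology alone is enough to inspect the graph structure; the
# numeric weight data lives separately in the companion .bin file.
```

Because the topology is plain XML, the network structure can be inspected or validated without loading the (typically much larger) binary weights.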
See how developers use the Intel Distribution of OpenVINO toolkit on multiple Intel® architectures to enable new and enhanced use cases across industries, including manufacturing, health and life sciences, retail, security, and more.
The Long-Term Support (LTS) release fixes bugs and provides longer-term maintenance and support, with a focus on stability and compatibility, so that developers can deploy applications powered by the Intel® Distribution of OpenVINO™ toolkit with confidence. A new LTS version is released every year and supported for two years. For developers who prefer the latest features and leading performance, standard releases will continue to be made available three to four times a year.