Overview of Intel® Distribution of OpenVINO™ Toolkit
AI inference applies the capabilities a neural network learns during training to produce results on new data. The Intel® Distribution of OpenVINO™ toolkit enables you to optimize, tune, and run comprehensive AI inference using its included Model Optimizer, inference runtime, and development tools.

Discover the Capabilities
High-Performance Deep Learning
Convert and optimize trained models to achieve high performance for deep-learning inference applications.
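For example, model conversion is a single Model Optimizer invocation. The sketch below is a minimal, hypothetical example: it assumes the toolkit's mo entry point is on your PATH and that a trained ONNX model exists at model.onnx (both names are illustrative).

```python
# A minimal conversion sketch. Assumes the Model Optimizer entry point "mo"
# is on PATH and a trained ONNX model exists at model.onnx (hypothetical).
import subprocess

subprocess.run(
    [
        "mo",
        "--input_model", "model.onnx",  # trained model to convert
        "--data_type", "FP16",          # store weights in half precision
        "--output_dir", "ir",           # writes model.xml + model.bin (the IR)
    ],
    check=True,  # raise if conversion fails
)
```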
Streamlined Development
Streamline development with the included tools for low-precision optimization and media processing, computer vision libraries, and pre-optimized kernels.
Write Once, Deploy Anywhere
Deploy the same application across combinations of host processors and accelerators (CPUs, GPUs, VPUs, and FPGAs) and environments (on premises, on device, in the browser, or in the cloud).
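In practice, the device target is a plain string, so retargeting an application is a one-line change. The following is a minimal sketch, assuming an IR pair model.xml/model.bin (hypothetical filenames) already produced by the Model Optimizer; it uses the Inference Engine Python API that ships with the 2021.x releases.

```python
# A minimal device-targeting sketch using the Inference Engine Python API
# (2021.x). model.xml/model.bin are hypothetical IR filenames.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# Only this string changes per target: "CPU", "GPU", "MYRIAD" (VPU),
# "HETERO:FPGA,CPU", or "MULTI:GPU,CPU" to balance across devices.
exec_net = ie.load_network(network=net, device_name="CPU")
```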
How It Works
1. BUILD: Start from a model trained in a framework of your choice, or pick a pretrained model.
2. OPTIMIZE: Convert the model with the Model Optimizer into the toolkit's Intermediate Representation (IR).
3. DEPLOY: Load the IR with the inference runtime and run it on your target hardware; a minimal end-to-end sketch follows this list.
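The sketch below walks the optimize-and-deploy steps end to end under stated assumptions: the Model Optimizer has already produced model.xml/model.bin (hypothetical names), and the network takes a single NCHW image input. It uses the 2021.x Inference Engine Python API and random data in place of a real image.

```python
# A minimal end-to-end sketch of the optimize/deploy flow, assuming the
# Model Optimizer already produced model.xml/model.bin (hypothetical names)
# and the network takes a single NCHW image input.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()                                                  # runtime core
net = ie.read_network(model="model.xml", weights="model.bin")  # load the IR
exec_net = ie.load_network(network=net, device_name="CPU")     # compile for a device

# Discover the input blob's name and shape, then feed data shaped to match.
input_blob = next(iter(net.input_info))
n, c, h, w = net.input_info[input_blob].input_data.shape
image = np.random.rand(n, c, h, w).astype(np.float32)  # stand-in for a real image

# Synchronous inference; results is a dict keyed by output blob name.
results = exec_net.infer(inputs={input_blob: image})
```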
What You Can Do
See how developers use the Intel Distribution of OpenVINO toolkit on multiple Intel® architectures to enable new and enhanced use cases across industries, including manufacturing, health and life sciences, retail, security, and more.
Community and Support
Explore different ways to get involved and stay up to date with the latest announcements.
Awarded by the Embedded Vision Alliance*
A productive, smart path to freedom from the economic and technical burdens of proprietary alternatives for accelerated computing.
Product and Performance Information
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.