Your AI Inferencing Apps... Now Faster

Develop applications and solutions that use deep learning intelligence with the Intel® Distribution of OpenVINO™ toolkit. Based on convolutional neural networks (CNNs), the toolkit extends workloads across Intel® hardware (including accelerators) and maximizes performance.

Discover the Capabilities

High-Performance Deep Learning

Accelerate deep neural network workloads across multiple platforms to achieve faster, more accurate results for AI inference.

Streamlined Development

Enable a streamlined, end-to-end development and deployment workflow.

Write Once, Deploy Anywhere

The Intel® Distribution of OpenVINO™ toolkit supports heterogeneous execution across target environments with a high-level common API.
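As a sketch of what that common API looks like in practice, the snippet below loads one model and runs it unchanged on whichever device is named at load time. It assumes the Inference Engine Python API from the 2021.1 release; the model paths, input data, and device string are placeholders, not prescriptions from the toolkit's documentation.

    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()

    # Read a model in OpenVINO IR format (placeholder paths).
    net = ie.read_network(model="model.xml", weights="model.bin")
    input_name = next(iter(net.input_info))
    shape = net.input_info[input_name].input_data.shape

    # The same network targets different hardware by changing only the
    # device string: "CPU", "GPU", "MYRIAD", or a heterogeneous plugin
    # such as "HETERO:GPU,CPU" that splits the graph across devices.
    exec_net = ie.load_network(network=net, device_name="CPU")

    # Synchronous inference on a dummy input shaped to the model's input.
    result = exec_net.infer(inputs={input_name: np.zeros(shape, dtype=np.float32)})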

Intel offers a powerful portfolio of scalable hardware and software solutions, powered by the Intel Distribution of OpenVINO toolkit, to meet the various performance, power, and price requirements of any use case. See how the toolkit can boost your inference applications across multiple deep neural networks with high throughput and efficiency. 
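One concrete way the toolkit pursues high throughput is asynchronous inference with several requests in flight at once. A minimal sketch under the same assumptions as above (2021-era Inference Engine Python API, placeholder model paths, and an arbitrary request count):

    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")
    input_name = next(iter(net.input_info))
    shape = net.input_info[input_name].input_data.shape

    # Create a pool of inference requests so the device pipeline stays
    # busy; num_requests=4 is an illustrative value, not a recommendation.
    exec_net = ie.load_network(network=net, device_name="CPU", num_requests=4)

    # Launch every request without blocking...
    for i in range(len(exec_net.requests)):
        exec_net.start_async(request_id=i,
                             inputs={input_name: np.zeros(shape, dtype=np.float32)})

    # ...then wait for each to finish; wait(-1) blocks until completion.
    for request in exec_net.requests:
        request.wait(-1)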

Available versions: 2021.1 and 2020.3 LTS

Awarded by the Embedded Vision Alliance*

Ready to Get Started?

