Write Once, Deploy Anywhere
The Intel® Distribution of OpenVINO™ toolkit supports heterogeneous execution across Intel® hardware targets, so the same application code can run on CPUs, GPUs, and dedicated accelerators.
Inference Tools
Inference Engine
This high-level inference API exposes an interface implemented as dynamically loaded plugins, one per target hardware type. The Inference Engine delivers optimal performance on each device without requiring you to implement and maintain multiple code paths.
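As a minimal sketch of this plugin model (assuming OpenVINO's Python API, `openvino.runtime.Core`; the model filename and the device-preference order are hypothetical), the same inference code can target any plugin just by changing the device string:

```python
# Sketch: selecting an Inference Engine device plugin by name.
# Device strings such as "CPU", "GPU", and "GNA" name plugins that
# the runtime resolves to dynamically loaded libraries, so one code
# path serves every target.

def pick_device(available, preferred=("GPU", "MYRIAD", "CPU")):
    """Return the first preferred device reported as available,
    falling back to CPU, which every installation provides."""
    for device in preferred:
        if device in available:
            return device
    return "CPU"

if __name__ == "__main__":
    try:
        from openvino.runtime import Core  # OpenVINO 2022.1+ Python API
        core = Core()
        device = pick_device(core.available_devices)
        # The same model compiles for any plugin; only the string changes.
        model = core.read_model("model.xml")  # hypothetical model file
        compiled = core.compile_model(model, device)
    except ImportError:
        # OpenVINO not installed; the selection logic still runs standalone.
        print(pick_device(["CPU"]))
```

The fallback to `"CPU"` mirrors the idea in the text: because each plugin implements the same interface, device choice is a runtime decision rather than a separate code path.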
Optimization Tools
Take advantage of a portfolio of scalable hardware solutions, powered by the Intel Distribution of OpenVINO toolkit, to meet the performance, power, and price requirements of your use case. In addition, use the Deployment Manager to generate a minimal, optimized runtime package for deployment.
Test in Remote Environments
Get access to developer sandboxes to run code remotely on Intel® hardware, and discover its capabilities by learning and building with Jupyter Notebook* tutorials.
CPUs
Intel® CPUs offer the most universal option for computer vision tasks. With multiple product lines to choose from, you can find a price and performance point that meets your application and budget needs.
Integrated GPUs
Many Intel® processors contain integrated graphics, including Intel® HD Graphics and Intel® UHD Graphics. The GPUs have a range of general-use and fixed-function capabilities (including Intel® Quick Sync Video) that can be used to accelerate media, inference, and general computer vision operations.
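A hedged sketch of targeting the integrated GPU (again assuming OpenVINO's Python API; the model filename is hypothetical): the `HETERO` device string lets layers the GPU plugin does not support fall back to the CPU automatically, which is one form of the heterogeneous execution mentioned above.

```python
# Sketch: compiling a model for the integrated GPU with CPU fallback.

def hetero_device(primary="GPU", fallback="CPU"):
    """Build a HETERO device string: operations unsupported on the
    primary device fall through to the fallback device."""
    return f"HETERO:{primary},{fallback}"

try:
    from openvino.runtime import Core  # OpenVINO 2022.1+ Python API
    core = Core()
    model = core.read_model("model.xml")  # hypothetical model file
    compiled = core.compile_model(model, hetero_device())
except ImportError:
    pass  # OpenVINO not installed; the string form alone shows the idea
```

Passing plain `"GPU"` instead of the `HETERO` string runs the whole network on the integrated GPU when every layer is supported there.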
Intel® FPGAs
Gain cost savings and revenue growth from reprogrammable integrated circuits that retrieve and classify data in real time. Use these accelerators as a low-latency AI inference solution for safer, more interactive experiences in autonomous vehicles, robotics, IoT, and data centers.
Intel® Movidius™ Vision Processing Unit (VPU)
This unit enables visual intelligence at high compute per watt. It supports camera processing, computer vision, and deep learning inference.
Intel® Vision Accelerator Design
Deploy power-efficient deep neural network inference for fast, accurate video analytics and computer vision applications.
Intel® Gaussian & Neural Accelerator
This accelerator is a low-power neural coprocessor for continuous inference at the edge. It is designed to offload continuous inference workloads, such as noise reduction and speech recognition, to save power and free CPU resources.