Develop Multiplatform Computer Vision Solutions

Explore the Intel® Distribution of OpenVINO™ toolkit

Make your vision a reality on Intel® platforms—from smart cameras and video surveillance to robotics, transportation, and more.

Your Computer Vision Apps...Now Faster

Develop applications and solutions that emulate human vision with the Intel® Distribution of OpenVINO™ toolkit. Built around convolutional neural networks (CNNs), the toolkit extends computer vision workloads across Intel® hardware (including accelerators) and maximizes performance.

  • Enables deep learning inference at the edge
  • Supports heterogeneous execution across computer vision accelerators—CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA—using a common API
  • Speeds up time to market via a library of functions and preoptimized kernels
  • Includes optimized calls for OpenCV and OpenVX*
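The "common API" idea in the list above can be illustrated with a small Python sketch. This is not the OpenVINO toolkit API; the class and method names here are hypothetical, and the point is only the design: application code calls one entry point, and device-specific plug-ins supply the implementation, so the same program targets a CPU, GPU, or other accelerator unchanged.

```python
# Toy illustration of one inference API shared across device plug-ins.
# All names here are hypothetical, not part of the OpenVINO toolkit.

class DevicePlugin:
    """Interface every device backend implements."""
    name = "GENERIC"

    def infer(self, inputs):
        raise NotImplementedError

class CPUPlugin(DevicePlugin):
    name = "CPU"
    def infer(self, inputs):
        return [x * 2 for x in inputs]  # stand-in for real inference

class GPUPlugin(DevicePlugin):
    name = "GPU"
    def infer(self, inputs):
        return [x * 2 for x in inputs]  # same result, different backend

class Core:
    """Single entry point: application code is identical per device."""
    def __init__(self):
        self.plugins = {p.name: p for p in (CPUPlugin(), GPUPlugin())}

    def infer(self, device, inputs):
        return self.plugins[device].infer(inputs)

core = Core()
print(core.infer("CPU", [1, 2, 3]))  # [2, 4, 6]
print(core.infer("GPU", [1, 2, 3]))  # same call, different hardware path
```

Only the device name changes between the two calls, which is the benefit the bullet describes: heterogeneous execution without per-device application code.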

Get Started

Discover the Capabilities

Deep Learning for Computer Vision

Accelerate and deploy neural network models on Intel® platforms with the Deep Learning Deployment Toolkit (DLDT) that's available in the Intel Distribution of OpenVINO toolkit.

Hardware Acceleration

Harness the performance of Intel®-based accelerators: CPUs, GPUs, FPGAs, VPUs, and IPUs.

Who Needs This Product

Software developers and data scientists who:

  • Work on computer vision, neural network inference, and deep learning deployment capabilities
  • Want to accelerate their solutions across multiple platforms, including CPU, GPU, VPU, and FPGA


Medical Imaging Powered by AI

Intel teamed up with Philips to deliver high-performance, efficient deep learning inference on X-rays and computed tomography (CT) scans without the need for accelerators. The solution runs on servers powered by Intel® Xeon® Scalable processors and was optimized with the Intel® Distribution of OpenVINO™ toolkit.

What's New in the 2019 R2 Release

  • Adds a new Deep Learning Workbench profiler as a preview feature. It provides visualization of key performance metrics (latency, throughput, and performance counters) for neural network topologies and their layers. This tool includes simple configuration of inference experiments, including INT8 calibration, accuracy checking, and automatic detection of optimal performance settings.
    For more information, see Workbench Overview.
  • Supports multidevice inference with automatic load balancing across available devices to achieve higher throughput when using multiple platforms simultaneously.
  • Provides new Inference Engine Core APIs that automate direct mapping to available devices and save time by eliminating the manual loading of individual plug-ins. Includes a query API that reports device configuration and metrics to help determine the best platforms for deploying deep learning applications.
  • Supports a serialized FP16 intermediate representation (IR) to work uniformly across supported platforms. This helps reduce model size by two times when compared to FP32 and improves the use of available device memory and model portability. (For CPUs, the inference remains at FP32.)
  • Enables new use cases for machine translation, natural language processing, and speech processing and recognition with support for popular nonvision topologies that include:
    • GNMT
    • BERT
    • TDNN (NNet3)
    • ESPNet
  • Provides new binary distribution methods that enable quick installation with minimal to no overhead in setting up the development environment:
    • Binary files from package managers such as YUM* and APT*
    • Docker* images on Docker Hub*
    • .zip and .tgz files through GitHub*
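The 2x size reduction claimed for the FP16 intermediate representation follows directly from the storage format: half-precision values occupy two bytes each versus four for single precision. A minimal standard-library Python sketch (this does not use the toolkit itself; the weight values are made up for illustration):

```python
import struct

# Hypothetical weight tensor: 1,000 float values.
weights = [0.5] * 1000

# Serialize as FP32 ('f', 4 bytes per value) vs. FP16 ('e', 2 bytes per value).
fp32_blob = struct.pack(f"{len(weights)}f", *weights)
fp16_blob = struct.pack(f"{len(weights)}e", *weights)

print(len(fp32_blob))  # 4000 bytes
print(len(fp16_blob))  # 2000 bytes: half the serialized model size
```

The same halving applies to a serialized IR's weight file, which is why FP16 improves both model portability and the use of available device memory; as noted above, CPU inference still runs at FP32 after the weights are loaded.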

Release Notes

Product Brief

System Requirements

Case Studies

Intel and GE* bring the power of AI to clinical diagnostic scanning and other healthcare workflows.

GeoVision sped up its facial recognition solution using Intel® System Studio and the Intel Distribution of OpenVINO toolkit.

This toolkit is the centerpiece of Agent Vi*, which provides next-generation vision technology.

NexCOBOT offers a flexible, modular robotics solution that integrates AI with machine vision using tools from Intel.

Open Source Software

The OpenVINO™ toolkit is an open source product. It contains the Deep Learning Deployment Toolkit (DLDT) with support for Intel® processors (CPUs) and Intel® Processor Graphics (GPUs), plus heterogeneous execution. It also includes the Open Model Zoo with pretrained models, samples, and demos.

OpenVINO™ Toolkit

GitHub* for DLDT

GitHub for Open Model Zoo