Develop Multiplatform Computer Vision Solutions


Explore the OpenVINO™ toolkit (formerly the Intel® Computer Vision SDK)

Make your vision a reality on Intel® platforms—from smart cameras and video surveillance to robotics, transportation, and more.

Your Computer Vision Apps...Now Faster

Develop applications and solutions that emulate human vision with the Open Visual Inference & Neural Network Optimization (OpenVINO™) toolkit. Based on convolutional neural networks (CNN), the toolkit extends workloads across Intel® hardware and maximizes performance.

  • Enables CNN-based deep learning inference on the edge
  • Supports heterogeneous execution across computer vision accelerators—CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA—using a common API
  • Speeds time to market via a library of functions and preoptimized kernels
  • Includes optimized calls for OpenCV and OpenVX*
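The workloads described above center on convolutional neural network (CNN) inference. As a rough illustration of the kind of operation the toolkit's preoptimized kernels accelerate, here is a minimal NumPy sketch of a single 2D convolution (valid mode, cross-correlation convention); the input and kernel values are illustrative only, and a real deployment would run such layers through the toolkit's inference engine rather than plain Python loops.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation
    repeated millions of times during CNN inference."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Slide the kernel over the image and accumulate.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 4x4 input (a linear ramp) and a 3x3 Laplacian-style kernel.
image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[ 0., -1.,  0.],
                   [-1.,  4., -1.],
                   [ 0., -1.,  0.]])
result = conv2d(image, kernel)
print(result)  # Laplacian of a linear ramp is zero everywhere
```

Production kernels vectorize and tile this loop per target device (CPU, GPU, VPU, FPGA), which is exactly the heterogeneity the common API hides from the application.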

Get Started

Discover the Capabilities

Deep Learning for Computer Vision

Accelerate and deploy CNNs on Intel® platforms with the Intel® Deep Learning Deployment Toolkit, which is included in the OpenVINO toolkit and also available as a stand-alone download.

Hardware Acceleration

Harness the performance of Intel®-based accelerators: CPUs, GPUs, FPGAs, VPUs, and IPUs.

Who Needs This Product

Software developers and data scientists who:

  • Work on computer vision, neural network inference, and deep learning deployment capabilities
  • Want to accelerate their solutions across multiple platforms, including CPU, GPU, VPU, and FPGA

What's New?

  • Gold release of the Intel® FPGA Deep Learning Acceleration Suite accelerates AI inference workloads on Intel® FPGAs, optimized for performance, power, and cost
  • Windows* support for the Intel® Movidius™ Neural Compute Stick
  • Python* API preview that supports the inference engine
  • Open Neural Network Exchange (ONNX) Model Zoo provides initial support for eight models
  • Model Optimizer supports the Kaldi* framework for importing speech models
  • Support for the Gaussian mixture model and neural network accelerator (GNA) enables offloading neural network inference for speech-related models
  • OpenCV 3.4.2 features an accelerated deep neural network (DNN) module

Recent Updates

Release Notes

OpenVINO™ Toolkit

Key Specifications

Not all OpenVINO toolkit features run on all combinations of processor systems. For more information, see System Requirements.

Development Platform

Processors:

6th to 8th generation Intel® Core™ and Intel® Xeon® processors

Operating systems:

  • Ubuntu* 16.04.3 LTS (64 bit)
  • CentOS* 7.4 (64 bit)
  • Windows* 10 (64 bit)

Target System Platforms

CPU:

  • 6th to 8th generation Intel Core and Intel Xeon processors
  • Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics

Graphics:

  • 6th to 8th generation Intel Core processor with Iris® Pro graphics and Intel HD Graphics
  • 6th to 8th generation Intel Xeon processor with Iris Pro graphics and Intel HD Graphics (excluding the E5 product family, which does not have graphics)

FPGA & VPU:

  • Intel® Arria® 10 GX FPGA development kit
  • Intel Movidius Neural Compute Stick

IPU:

  • Intel® Atom™ processor E3900

Operating systems:

  • Yocto Project* version Poky Jethro v2.0.3 (64 bit)
  • Ubuntu* 16.04.3 LTS (64 bit)
  • CentOS* 7.4 (64 bit)
  • Windows* 10 (64 bit)