Accelerate Computer Vision & Deep Learning with OpenVINO™ toolkit

[Figure: OpenVINO toolkit, from edge to cloud]

New Toolkit Helps Developers Streamline Deep Learning Inference & Deployment

Just released Aug. 22, 2018 - OpenVINO™ toolkit 2018 R3

Free - Download Now
Deep Learning Enhancements
  • Support for all public models in the Open Neural Network Exchange (ONNX) Model Zoo*.
  • Three more pre-built algorithms in the Computer Vision Algorithms component: Emotions Recognition, Person Re-identification, and Crossroad Object Detection.
  • Four new Intel pre-trained models targeting smart classroom use cases (a total of 24 pre-trained models are included in the download package).
  • Support for TensorFlow Model Zoo* object detection models in the Intel® Deep Learning Deployment Toolkit. Deploy them with the OpenVINO toolkit for increased performance.
  • Model Optimizer support is extended to include graph freezing and graph summarization, and introduces general command-line support for dynamic input freezing, which helps deploy models like FaceNet*.
  • Inference Engine includes a preview of image pre-processing capability (resize and crop) and static shape inferencing.
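The dynamic input freezing mentioned above can be sketched as a Model Optimizer command line. This is a hypothetical invocation, not an official recipe: the model file, input shape, and output directory are placeholders, the `phase_train` placeholder name comes from FaceNet*-style graphs, and exact flag names may differ between toolkit releases.

```shell
# Convert a frozen TensorFlow* FaceNet-style graph to intermediate
# representation, freezing the boolean phase_train placeholder so an
# inference-only graph can be built. Paths and shapes are illustrative.
python mo_tf.py \
    --input_model frozen_facenet.pb \
    --input_shape "[1,160,160,3]" \
    --freeze_placeholder_with_value "phase_train->False" \
    --data_type FP16 \
    --output_dir ./ir
```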
Traditional Computer Vision

OpenCV* 3.4.3 brings improved performance, with initial Intel® Advanced Vector Extensions 2 (Intel® AVX2) support via universal intrinsics and GStreamer* support as a multimedia backend on Linux*.

Hardware Support

Adds multi-card support for Intel® FPGAs: boost deep learning performance further by adding multiple FPGAs to your solution.

About OpenVINO toolkit

Part of Intel's Vision Products lineup, the new OpenVINO™ toolkit (Open Visual Inference and Neural Network Optimization; formerly the Intel® Computer Vision SDK) helps developers bring vision intelligence into their applications from edge to cloud.

The toolkit is a free download that helps fast-track development of high-performance computer vision and deep learning inference solutions, and delivers fast, efficient deep learning workloads across multiple types of Intel® platforms: CPUs, CPUs with integrated graphics (Intel® Processor Graphics/GPU), FPGAs, and Intel® Movidius™ vision processing units (VPUs). Vision systems hold incredible promise to change the world and help us solve problems—whether they’re making homes safer or discovering new medical cures—affording great opportunities for developers.

FREE Download

What's Inside the OpenVINO™ toolkit 

The toolkit has a common API and is based on common development standards such as OpenCL™, OpenCV, and OpenVX.

[Figure: What's inside the OpenVINO toolkit]

It includes:

Intel® Deep Learning Deployment Toolkit, which has:

  • A Model Optimizer to import trained models from various frameworks (like Caffe*, TensorFlow*, and MXNet*), optimize topologies, and convert them to a unified intermediate representation (IR) file
  • An Inference Engine, a simple and unified API for inference across many types of Intel® processors (CPU, GPU (CPUs with integrated graphics/Intel® Processor Graphics), FPGA, and VPU (Intel® Movidius™ Neural Compute Stick)), providing easy heterogeneous processing and asynchronous execution to save developers time
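The asynchronous execution the Inference Engine offers can be illustrated conceptually with Python's standard library. This is a sketch of the pattern only, not the OpenVINO API: `fake_infer` is an invented stand-in for an inference request, and the point is merely that submitting several requests and collecting results later hides per-request latency.

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual stand-in for one inference request; NOT the OpenVINO API.
def fake_infer(frame):
    # A real engine would run the network here; we just tag the frame.
    return {"frame": frame, "label": "person"}

frames = list(range(4))

# Async pattern: submit several requests up front, then collect results,
# instead of blocking on each inference in turn.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(fake_infer, f) for f in frames]
    results = [fut.result() for fut in futures]

print([r["frame"] for r in results])  # [0, 1, 2, 3]
```

A real pipeline would replace `fake_infer` with the engine's inference call; the overlap between requests is what delivers the throughput gain on heterogeneous hardware.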

Optimized computer vision libraries (OpenCV, OpenVX, and Photography Vision) for CPUs and Intel® Processor Graphics.

Components to increase performance of Intel® Processor Graphics for Linux*, including the open source version of the Intel® Media SDK and OpenCL™ graphics drivers and runtimes.

FPGA Runtime Environment (RTE) (from the Intel® FPGA SDK for OpenCL™) and FPGA bitstreams for Linux.

The OpenVINO toolkit comes in three versions:

  • Linux* (supports Ubuntu*, CentOS*, and Yocto Project*)
  • Linux for FPGA (an Intel® Arria® 10 FPGA GX development kit or Intel® Programmable Acceleration Card with Intel® Arria® 10 FPGA GX is required for deep learning acceleration)
  • Windows*

Key Capabilities

[Figure: OpenVINO toolkit key capabilities]

Gain Significant Performance for Deep Learning Workloads

[Figure: OpenVINO FPGA benchmarks, June 2018]

1. Depending on workload, quality/resolution for FP16 may be marginally impacted. A performance/quality tradeoff from FP32 to FP16 can affect accuracy; customers are encouraged to experiment to find what works best for their situation. Performance results are based on testing as of June 13, 2018 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.

Configuration: Testing by Intel as of June 13, 2018. Intel® Core™ i7-6700K CPU @ 2.90GHz fixed, GPU GT2 @ 1.00GHz fixed Internal ONLY testing, Test v3.15.21 – Ubuntu* 16.04, OpenVINO 2018 RC4, Intel® Arria® 10 FPGA 1150GX. Tests were based on various parameters such as model used (these are public), batch size, and other factors. Different models can be accelerated with different Intel hardware solutions, yet use the same Intel software tools.
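The FP32-to-FP16 tradeoff the footnote describes can be quantified with a small standard-library sketch: Python's struct module can round-trip values through IEEE 754 half precision. The sample weights below are arbitrary illustration values, not taken from any real model.

```python
import struct

def to_fp16(x):
    # Round-trip a double through IEEE 754 half precision ('e' format).
    return struct.unpack('e', struct.pack('e', x))[0]

weights = [0.1234567, 3.1415926, 1e-5, 65504.0]  # arbitrary sample values
halves = [to_fp16(w) for w in weights]
rel_errors = [abs(w - h) / abs(w) for w, h in zip(weights, halves)]

# For values inside FP16's range the relative error stays well under 1%,
# which is why FP16 inference usually costs little accuracy while roughly
# halving memory traffic versus FP32.
print(max(rel_errors))
```

Note that 65504.0 is the largest normal FP16 value and survives the round trip exactly; values far outside that range would overflow, which is one reason accuracy impact is workload-dependent.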

Intel also has development kits that work with the OpenVINO toolkit.
 

Learn More

Innovate computer vision today!

FREE Download

Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice revision #20110804

OpenVX and the OpenVX logo are trademarks of the Khronos Group Inc.

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission of Khronos.

For more complete information about compiler optimizations, see our Optimization Notice.