Release Notes for Intel® Distribution of OpenVINO™ Toolkit 2020.3 LTS

By Andrey Zaytsev


The Intel® Distribution of OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNNs), the toolkit extends CV workloads across Intel® hardware, maximizing performance. It accelerates applications with high-performance AI and deep learning inference deployed from edge to cloud.

The Intel Distribution of OpenVINO toolkit:

  • Enables deep learning inference from the edge to cloud.
  • Supports heterogeneous execution across Intel accelerators, using a common API for the Intel® CPU, Intel® Integrated graphics, Intel® Gaussian & Neural Accelerator (Intel® GNA), Intel® Movidius™ Neural Compute Stick (NCS), Intel® Neural Compute Stick 2 (Intel® NCS2), Intel® Vision Accelerator Design with Intel® Movidius™ Vision Processing Unit (VPU), Intel® Vision Accelerator Design with Intel® Arria® 10 FPGA Speed Grade 2.
  • Speeds time-to-market through an easy-to-use library of CV functions and pre-optimized kernels.
  • Includes optimized calls for CV standards, including OpenCV and OpenCL™.

New and Changed in Release 2020.3 LTS

Executive Summary

  • Introducing Long-Term Support (LTS), a new release type that provides longer-term maintenance and support with a focus on stability and compatibility. Read more: Long Term Support Release
  • These release notes were introduced to support the initial version of the LTS release. All updates for this release will be published on this page.
  • Intel Distribution of OpenVINO toolkit v.2020.3 LTS is based on Intel Distribution of OpenVINO toolkit v.2020.2 and includes security and functionality bug fixes and minor capability changes.
  • Learn more about what components are included in the LTS release in the Included in This Release section.
  • See the list of deprecated APIs and API changes: API Changes
  • IRv7 is deprecated, and support for this version may be removed as early as the v.2021.1 release this year.

Model Optimizer

  • Included an upgrade notice to enable users to easily identify if a newer version of Intel Distribution of OpenVINO toolkit is available for download.

Inference Engine

Inference Engine Developer Guide

Common changes

  • Switched to the latest and official version of Threading Building Blocks 2020 Update 2. Added scalable equivalent of memory allocator that makes it possible to automatically replace all calls to standard functions for dynamic memory allocation. These changes can improve application performance and decrease application memory footprint.
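
The allocator substitution described above is the standard TBB malloc-proxy mechanism; the following is a sketch of how an application can opt in on Linux without recompiling, assuming TBB's proxy library is installed and `my_inference_app` is a placeholder name for your own binary:

```shell
# Preload TBB's scalable allocator proxy so that malloc/free (and new/delete)
# in the application and its shared libraries resolve to the scalable
# equivalents, without any source or build changes:
LD_PRELOAD=libtbbmalloc_proxy.so.2 ./my_inference_app
```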

CPU Plugin

  • Bug fixes:
    • 29082 Fixed possible IE pipeline crash if it is running in parallel threads
    • 28224 Fixed TF deeplab_v3 performance deviation on CPU in INT8 mode
    • 26373 Fixed TF GNMT performance drop compared to v.2019 R3
    • 25895 Fixed performance degradation for model 'googlenet-v4' IE INT8 when comparing against IE INT8 with streams
    • 29040 Fixed CAFFE yolo_v1_tiny performance deviation CPU INT8

GPU Plugin

  • Bug fixes:
    • 25657 Fixed possible memory leaks in the GPU plugin in case of multiple network loading and unloading cycles
    • 25087 Fixed performance degradations in the GPU plugin on MobileNet* models and similar models.
    • 29414 Fixed asl-recognition-0004 accuracy degradation

MYRIAD Plugin

  • Aligned VPU firmware with the Intel® Movidius™ Myriad™ X Development Kit (MDK) R11 release.
    • To rebuild the firmware with MDK R11, change the MDK FathomKey project makefile and add the "-falign-functions=64" option to the MVCCOPT variable. Other than this build option change, the 2020.3 release firmware binary is identical to the MDK R11 source. Without this build option, the rebuilt firmware will be identical to the OpenVINO 2020.2 release binary.
  • The Intel Movidius Neural Compute Stick (NCS) is supported in this LTS release. According to the LTS policy, NCS support will be stopped in the next release (2020.4), but will continue to be available in 2020.3 LTS release updates.
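
The makefile change described above amounts to a one-line edit; this is a sketch assuming the MDK R11 FathomKey project layout (the exact file location may differ by install):

```makefile
# Add 64-byte function alignment to the Movidius compiler options so the
# rebuilt firmware matches the OpenVINO 2020.3 release firmware binary:
MVCCOPT += -falign-functions=64
```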

HDDL Plugin

  • Included security bug fixes. Users should update to this version.

FPGA Plugin

  • Introduced support for Windows* OS platform. Intel Vision Accelerator Design with an Intel Arria 10 FPGA (Mustang-F100-A10) Speed Grade 2 and Intel® Programmable Acceleration Card (Intel® PAC) with Intel® Arria® 10 GX FPGA (Intel® PAC with Intel® Arria® 10 GX FPGA) are now supported.
  • The environment variable, CL_CONTEXT_COMPILER_MODE_INTELFPGA, is no longer required. It should not be set by the user.

OpenCV*

  • Updated to OpenCV 4.3.0, including bug fixes.

Examples and Tutorials

  • Enabled users to run the end-to-end speech demo (which was previously excluded in the v.2020.2 release).

Open Model Zoo

  • Introduced a streamlined process that enables users to quantize public models to a lower precision for improved performance. Quantization of several public classification models trained on the ImageNet dataset is enabled through an OMZ script. The script calls the OpenVINO Post-training Optimization Toolkit with the necessary parameters to produce a quantized IR. The prerequisite is the ImageNet dataset. See the OMZ documentation for details on how to use the OMZ quantizer script.
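
The quantization flow above maps float weights to a lower-precision representation. As a conceptual illustration only (this is not the Post-training Optimization Toolkit API, just a self-contained sketch of the arithmetic the technique relies on), symmetric INT8 quantization picks one scale from the tensor's dynamic range, rounds to signed bytes, and dequantizes at inference time:

```python
# Conceptual sketch of symmetric post-training INT8 quantization.
# All names here are illustrative, not part of any OpenVINO API.

def quantize_int8(values):
    """Map float values to int8 using a single symmetric scale."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 or 1.0  # guard against an all-zero tensor
    quantized = [max(-128, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from int8 codes."""
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each weight now occupies one signed byte; `restored` approximates the
# originals to within half a quantization step (scale / 2).
```

The real Post-training Optimization Toolkit additionally calibrates activation ranges on a dataset (hence the ImageNet prerequisite), but the storage-versus-accuracy trade-off is the same one shown here.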

Deep Learning Streamer

  • Included security bug fixes. Users should update to this version.

Known Issues

JIRA ID | Description | Component | Workaround
25358 | Some performance degradations are possible in the GPU plugin on GT3e/GT4e/ICL NUC platforms. | IE GPU Plugin | N/A
24709 | A retrained TensorFlow* Object Detection API RFCN model has significant accuracy degradation; only the pretrained model produces correct inference results. | All | Use Faster-RCNN models instead of the RFCN model if retraining is required.
23705 | Inference may hang when running the heterogeneous plugin on GNA with fallback on CPU. | IE GNA Plugin | Do not use the async API in CPU/GNA heterogeneous mode.
26129 | The TF YOLO-v3 model fails on the AI Edge Computing Board with Intel Movidius Myriad X C0 VPU, MYDX x 1. | IE MyriadX Plugin | Use other versions of the YOLO network or a USB-connected device (Intel Neural Compute Stick 2).
22108 | Stopping the application during firmware boot might hang the Intel Neural Compute Stick 2 (Intel NCS2). | IE MyriadX Plugin | Do not press Ctrl+C while the device is booting.
28747 | The CPU plugin does not work on Windows systems whose CPUs lack the AVX2 instruction set (for example, Intel Atom® processors). | IE CPU Plugin | Manually rebuild the CPU plugin from the sources available in the public repository with the CMake feature flags ENABLE_AVX2=OFF and ENABLE_AVX512=OFF.
N/A | The nGraph Python* API has been removed from this release because its incompleteness does not meet public release quality standards. It will be added back once it does. The nGraph C++ API is not affected and may still be used. | IE Python API | Use the C++ API.
28970 | The TF faster-rcnn and faster-resnet101 topologies show accuracy deviation on MYRIAD. | IE MyriadX Plugin, IE HDDL Plugin | For accurate inference on these topologies, either use other hardware (e.g., CPU/GPU) or use the previous release of the Intel Distribution of OpenVINO toolkit on Intel NCS2.
25723 | TF rfcn_resnet101_coco shows low accuracy on the dataset. | IE MyriadX Plugin, IE HDDL Plugin | For accurate inference on this topology, either use other hardware (e.g., CPU/GPU) or use the previous release of the Intel Distribution of OpenVINO toolkit on Intel NCS2.
32036 | TBlob shared-pointer issues, such as double-free, for huge models. | All | N/A
30569 | A Multiply layer with a non-zero offset is not handled properly. | IE GNA Plugin | N/A
31719 | Support for multiple outputs results in the creation of many activation layers. | IE GNA Plugin | N/A
31720 | Cascade concatenation with non-functional layers between the concats is not supported. | IE GNA Plugin | N/A

Included in This Release

The Intel Distribution of OpenVINO toolkit is available in these versions:

  • Intel Distribution of OpenVINO toolkit for Windows
  • [New!] Intel Distribution of OpenVINO toolkit for Windows with FPGA Support
  • Intel Distribution of OpenVINO toolkit for Linux*
  • Intel Distribution of OpenVINO toolkit for Linux* with FPGA Support
  • Intel Distribution of OpenVINO toolkit for macOS*
Each component entry below lists: License, Location, availability in each package (Windows / Windows for FPGA / Linux / Linux for FPGA / macOS), and whether the component is covered by the LTS policy.

Deep Learning Model Optimizer

Model optimization tool for your trained models.

Apache 2.0 <install_root>/deployment_tools/model_optimizer/* YES YES YES YES YES YES

Deep Learning Inference Engine

Unified API to integrate inference with application logic. Includes the Inference Engine headers.

Apache 2.0 YES YES YES YES YES YES, except the Inference Engine FPGA plugin

OpenCV library

OpenCV Community version compiled for Intel hardware

BSD <install_root>/opencv/ YES YES YES YES YES NO

Intel® Media SDK libraries (open source version)

Eases the integration between the Intel Distribution of OpenVINO toolkit and the Intel Media SDK.

MIT <install_root>/../mediasdk/* NO NO YES YES NO NO

Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver

Improves usability.

<install_root>/install_dependencies/ helps to install the OpenCL Runtime (default location: /usr/local/lib/):

  • intel-opencl_*.deb - driver for Ubuntu*
  • intel-opencl_*.rpm - driver for CentOS*
  • intel-* - driver dependencies


Intel® FPGA Deep Learning Acceleration Suite (Intel® FPGA DL Acceleration Suite), including pre-compiled bitstreams

Implementations of the most common CNN topologies to enable image classification and ease the adoption of FPGAs for AI developers.

Includes pre-compiled bitstream samples for the Intel® Programmable Acceleration Card with Intel Arria 10 GX FPGA and Intel Vision Accelerator Design with an Intel Arria 10 FPGA (Mustang-F100-A10) Speed Grade 1 and Speed Grade 2

Intel® FPGA SDK for OpenCL™ software technology

The Intel FPGA RTE for OpenCL provides utilities, host runtime libraries, drivers, and RTE-specific libraries and files.

Intel Distribution of OpenVINO toolkit documentation

Developer guides and other documentation.

  Available from the Intel Distribution of OpenVINO™ toolkit product site, not part of the installer packages. NO NO NO NO NO NO

Open Model Zoo

Documentation for models. Models in binary form can be downloaded using the Model Downloader.

Apache 2.0 <install_root>/deployment_tools/open_model_zoo/* YES YES YES YES YES NO

Inference Engine Samples

Samples that illustrate Inference Engine API usage, and demos that demonstrate how to use features of the Intel Distribution of OpenVINO toolkit in your application.

Apache 2.0 <install_root>/deployment_tools/inference_engine/samples/* YES YES YES YES YES NO

Deep Learning Workbench

Tool that helps developers run deep learning models through the toolkit's Model Optimizer, convert them to INT8, fine-tune them, run inference, and measure accuracy.

EULA <install_root>/deployment_tools/tools/workbench/* YES YES YES NO YES YES

nGraph

Open-source C++ library, compiler, and runtime for deep learning.

Apache 2.0 <install_root>/deployment_tools/ngraph/* YES YES YES YES NO YES

Post-Training Optimization Tool

Designed to convert a model into a more hardware-friendly representation by applying specific methods that do not require retraining, for example, post-training quantization.


<install_root>/deployment_tools/tools/post_training_optimization_toolkit/* YES YES YES YES YES YES
Speech Libraries and End-to-End Speech Demos

GNA Software License Agreement <install_root>/data_processing/audio/speech_recognition/* YES YES YES YES NO NO

DL Streamer

EULA <install_root>/data_processing/dl_streamer/* NO NO YES YES NO NO

Where to Download This Release

Choose the Best Option

System Requirements

Intel CPU processors with corresponding operating systems

Intel Atom processor with Intel SSE4.1 support

Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics

6th - 10th generation Intel® Core™ processors

Intel® Xeon® processor E3, E5, and E7 family (formerly Sandy Bridge, Ivy Bridge, Haswell, and Broadwell)

Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)

Operating Systems:

  • Ubuntu 16.04 long-term support (LTS), 64-bit
  • Ubuntu 18.04 long-term support (LTS), 64-bit
  • Windows® 10, 64-bit
  • macOS 10.14, 64-bit

Intel® Processor Graphics with corresponding operating systems (GEN Graphics)

Intel HD Graphics

Intel® UHD Graphics

Intel® Iris® Pro Graphics

Operating Systems:

  • Ubuntu 18.04 long-term support (LTS), 64-bit
  • Windows 10, 64-bit
  • Yocto 3.0, 64-bit

Note This installation requires drivers that are not included in the Intel Distribution of OpenVINO toolkit package.

Note A chipset that supports processor graphics is required for Intel Xeon processors. Processor graphics are not included in all processors. See Product Specifications for information about your processor.

Intel® Gaussian & Neural Accelerator (Intel® GNA)

Operating Systems:

  • Ubuntu 18.04 long-term support (LTS), 64-bit
  • Windows 10, 64-bit

FPGA processors with corresponding operating systems

Operating Systems:

  • Ubuntu 18.04 long-term support (LTS), 64-bit
  • Windows 10, 64-bit

VPU processors with corresponding operating systems

Intel Vision Accelerator Design with Intel Movidius™ Vision Processing Units (VPU) with corresponding operating systems

Operating Systems:

  • Ubuntu 18.04 long-term support (LTS), 64-bit (Linux Kernel 5.2 and below)
  • Windows 10, 64-bit
  • CentOS 7.4, 64-bit

Intel Movidius Neural Compute Stick (Intel® NCS) and Intel® Neural Compute Stick 2 (Intel® NCS2) with corresponding operating systems

Operating Systems:

  • Ubuntu 18.04 long-term support (LTS), 64-bit
  • CentOS 7.4, 64-bit
  • Windows 10, 64-bit
  • Raspbian* (target only)

AI Edge Computing Board with Intel Movidius Myriad X C0 VPU, MYDX x 1 with corresponding operating systems

Operating Systems:

  • Windows 10, 64-bit

Components Used in Validation

Operating systems used in validation:

  • Ubuntu 16.04.6 with Linux kernel 4.15
  • Ubuntu 18.04.3 with Linux kernel 5.3
  • CentOS 7.4 with Linux kernel 5.3
  • Windows 10 version 1809 (known as Redstone 5)
  • macOS* 10.14
  • Raspbian 9

DL frameworks used for validation:

  • TensorFlow 1.14.0 and 1.15.2
  • Apache MxNet* 1.5.1

Helpful Links


Featured Documentation

All Documentation, Guides, and Resources

Community Forum

Legal Information

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer.

No computer system can be absolutely secure.

Intel, Arria, Core, Movidius, Xeon, OpenVINO, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos

*Other names and brands may be claimed as the property of others.

Copyright © 2020, Intel Corporation. All rights reserved.

Product and Performance Information


Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804