This document pertains to the Intel® Distribution of OpenVINO™ toolkit and neural compute devices based on Intel® Movidius™ Myriad™ X such as the Intel® Neural Compute Stick 2 (Intel® NCS 2).
The Neural Compute Engine (NCE) is an on-chip hardware block available in neural compute devices based on Intel® Movidius™ Myriad™ X. It is designed to run deep neural networks in hardware at much higher speeds than previous generations of the Myriad VPU could achieve, while maintaining low power consumption and without compromising accuracy. With two NCEs, the Intel® Movidius™ Myriad™ X architecture is capable of 1 TOPS (1 trillion operations per second) of compute performance on deep neural network inferences.
The Model Optimizer in the OpenVINO™ toolkit automatically optimizes networks so that the device can process the appropriate layers to take advantage of the onboard NCEs.
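As an illustration, a frozen TensorFlow model might be converted to the Intermediate Representation with a command along the following lines. The file names and output directory are placeholders; FP16 is the data type expected by MYRIAD devices, and the exact script location depends on your toolkit installation.

```shell
# Hypothetical paths; mo.py ships with the OpenVINO toolkit installation.
python3 mo.py \
    --input_model frozen_model.pb \
    --data_type FP16 \
    --output_dir ./ir
```

The resulting .xml and .bin files can then be loaded by the Inference Engine on the MYRIAD device.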
Networks that use only the following supported features can be compiled to run as hardware networks on the NCEs. If your network includes features not supported in hardware, it can still run partially in hardware on the NCE.
For the list of networks validated to compile and run as hardware networks in this release, refer to the Release Notes.
No application changes are required to use OpenVINO™ toolkit with hardware networks.
Hardware acceleration, controlled by the network configuration key HW_STAGES_OPTIMIZATION, is enabled by default and can be turned off or back on. The Inference Engine supports different layers on different hardware targets; for the list of supported devices and layers, refer to the Inference Engine Guide.
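A minimal sketch of toggling this setting, assuming the 2019-era Inference Engine Python API (`openvino.inference_engine`) and the MYRIAD plugin's `VPU_HW_STAGES_OPTIMIZATION` config key; the exact key name and API surface may differ between toolkit releases.

```python
def myriad_config(hw_acceleration=True):
    """Build a MYRIAD plugin config dict.

    Hardware acceleration (HW_STAGES_OPTIMIZATION) is on by default,
    so an empty config would behave the same as hw_acceleration=True.
    Assumed key name: VPU_HW_STAGES_OPTIMIZATION (may vary by release).
    """
    return {"VPU_HW_STAGES_OPTIMIZATION": "YES" if hw_acceleration else "NO"}


# Usage sketch (requires an attached Intel NCS 2; shown for illustration only):
# from openvino.inference_engine import IECore
# ie = IECore()
# net = ie.read_network(model="model.xml", weights="model.bin")
# exec_net = ie.load_network(net, "MYRIAD", config=myriad_config(False))
```

Passing the config at load time keeps the application code otherwise unchanged, matching the "no application changes required" behavior described above.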