Optimize Networks for the Intel® Neural Compute Stick 2 (Intel® NCS 2)

By Neal P Smith, Published: 11/14/2018, Last Updated: 11/14/2018

This document pertains to the Intel® Distribution of OpenVINO™ toolkit and neural compute devices based on Intel® Movidius™ Myriad™ X, such as the Intel® Neural Compute Stick 2 (Intel® NCS 2).

Overview

The Neural Compute Engine (NCE) is an on-chip hardware block available in neural compute devices based on Intel® Movidius™ Myriad™ X. It is designed to run deep neural networks in hardware at much higher speeds than were possible with previous generations of the Myriad™ VPU, while maintaining low power consumption and without compromising accuracy. With two NCEs, the Intel® Movidius™ Myriad™ X architecture is capable of 1 TOPS (1 trillion operations per second) of compute performance on deep neural network inferences.

The Model Optimizer in the Intel® Distribution of OpenVINO™ toolkit automatically optimizes networks so that the device can execute the appropriate layers on the onboard NCEs.
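For example, because Myriad™-based devices operate on FP16 data, a model is typically converted to Intermediate Representation (IR) with the Model Optimizer using the FP16 data type. The following is a minimal sketch; the model file name is a placeholder, and the exact path to mo.py depends on your installation:

    # Convert a Caffe model to OpenVINO IR (.xml/.bin) in FP16 for Myriad devices.
    # squeezenet1.1.caffemodel is a placeholder; substitute your own model.
    python3 mo.py --input_model squeezenet1.1.caffemodel --data_type FP16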

Supported Hardware Features

Networks that use the following supported features can be compiled to run as hardware networks on the NCEs. If your network also uses features that are not hardware-supported, it can still run partially in hardware on the NCEs.

  • Multichannel convolution
    • Matrix-Matrix Multiply/Accumulate
    • Optional non-overlapping Max and Avg pooling
  • Pooling
    • Overlapping Max and Avg pooling
  • Fully connected
    • Vector-Matrix Multiply/Accumulate
  • Post-processing
    • Bias, Scale, ReLU-x, PReLU

Supported Hardware Networks

For the list of networks that have been validated to compile and run as hardware networks in this release, refer to the Release Notes.

Using the Intel® Distribution of OpenVINO™ toolkit Inference Engine API with Hardware Networks

No application changes are required to use OpenVINO™ toolkit with hardware networks.

Hardware acceleration is controlled by the network configuration key HW_STAGES_OPTIMIZATION, which is on by default and can be turned off or back on per network. The Inference Engine supports different layers for different hardware targets; for a list of supported devices and layers, refer to the Inference Engine Guide.
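As an illustration, the following minimal sketch uses the Inference Engine Python API to load a network on a Myriad device and set the key explicitly. The IR file names are placeholders, and the key is assumed to be exposed as VPU_HW_STAGES_OPTIMIZATION in this release; the exact name may vary between releases, so check the VPU plugin configuration header for your version.

    # Minimal sketch: load a network on a Myriad device with the
    # hardware-stages optimization set explicitly (it defaults to "YES").
    # model.xml / model.bin are placeholder IR file names.
    from openvino.inference_engine import IENetwork, IEPlugin

    plugin = IEPlugin(device="MYRIAD")
    net = IENetwork(model="model.xml", weights="model.bin")

    # Pass "NO" to disable hardware stages and run the network entirely in
    # software on the SHAVE cores.
    exec_net = plugin.load(network=net,
                           config={"VPU_HW_STAGES_OPTIMIZATION": "YES"})

Turning the key off can be useful, for example, to check whether a numerical difference in results originates in the hardware stages.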
