Accelerate Deep Learning Development At The Edge

Develop for High-Performance, Low-Power Devices


Intel® Movidius™ Neural Compute Stick

What is the Intel® Movidius™ Neural Compute Stick?

This tiny, fanless, deep learning device allows you to learn AI programming at the edge. It is powered by the same high-performance Intel® Movidius™ Vision Processing Unit (VPU) found in millions of smart security cameras, gesture-controlled drones, industrial machine vision equipment, and more.


Learn What You Can Do with a Neural Compute Stick

The Neural Compute Stick enables rapid prototyping, validation, and deployment of deep neural network (DNN) inference applications at the edge. Its low-power VPU architecture enables an entirely new segment of AI applications that do not rely on a connection to the cloud.

Combined with the Intel® Movidius™ Software Development Kit (SDK), the stick lets deep learning developers profile, tune, and deploy convolutional neural networks (CNNs) in low-power applications that require real-time inference.
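The deploy step above can be sketched with the SDK's Python API (`mvnc`). This is a minimal, hedged illustration assuming NCSDK v1 naming; the graph filename, the 224×224 input shape, and the placeholder tensor are assumptions for the example, and the device calls are guarded so the script degrades gracefully when no stick is attached.

```python
import numpy as np


def top_classes(probs, k=5):
    """Return (index, probability) pairs for the k best classes, best first."""
    order = np.argsort(probs)[::-1][:k]
    return [(int(i), float(probs[i])) for i in order]


def run_inference(graph_path="graph"):
    """Run one inference on the Neural Compute Stick, if one is attached."""
    try:
        from mvnc import mvncapi as mvnc  # NCSDK v1 Python API
    except ImportError:
        print("NCSDK not installed; skipping device inference.")
        return None

    devices = mvnc.EnumerateDevices()
    if not devices:
        print("No Neural Compute Stick found.")
        return None

    device = mvnc.Device(devices[0])
    device.OpenDevice()
    with open(graph_path, "rb") as f:  # graph file compiled from a trained model
        graph = device.AllocateGraph(f.read())

    # Placeholder input: the NCS consumes FP16 tensors.
    tensor = np.zeros((224, 224, 3), dtype=np.float16)
    graph.LoadTensor(tensor, "user object")
    output, _ = graph.GetResult()  # FP16 probability vector

    graph.DeallocateGraph()
    device.CloseDevice()
    return top_classes(output)


if __name__ == "__main__":
    print(run_inference())
```

The pattern is open device, allocate a compiled graph, load a tensor, read the result; the `top_classes` helper then turns the raw probability vector into a ranked list of class indices.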

Deep Learning with Intel: Siraj Raval

In this video, Siraj discusses how the Intel® Movidius™ Neural Compute Stick works. He demonstrates image classification in Python* using the miniature deep learning hardware development platform.
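Classification demos like this one typically resize each frame to the network's input size and convert it to FP16 before handing it to the stick. A minimal numpy-only sketch; the 224×224 size and the per-channel mean values are illustrative assumptions (typical of GoogLeNet-style models), not values taken from the video.

```python
import numpy as np


def preprocess(frame, size=224, mean=(104.0, 117.0, 123.0), scale=1.0):
    """Nearest-neighbour resize plus mean subtraction; returns an FP16 HWC tensor."""
    h, w, _ = frame.shape
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = frame[rows][:, cols].astype(np.float32)
    resized = (resized - np.array(mean, dtype=np.float32)) * scale
    return resized.astype(np.float16)
```

For example, a 640×480 camera frame passed through `preprocess` comes out as a 224×224×3 float16 array ready for the stick.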

Technical Specifications

Intel Movidius Neural Compute Stick
  • Processor: Intel® Movidius™ Myriad™ 2 Vision Processing Unit (VPU)
  • Supported frameworks: TensorFlow*, Caffe*
  • Connectivity: USB 3.0 Type-A
  • USB stick dimensions: 2.85 in. x 1.06 in. x 0.55 in. (72.5 mm x 27 mm x 14 mm)
  • Operating temperature: 0°C to 40°C
  • Minimum system requirements:
    • x86_64 computer running Ubuntu* 16.04, a Raspberry Pi* 3 Model B running Raspbian* Stretch desktop, or an Ubuntu 16.04 VirtualBox* instance
    • USB 2.0 Type-A port (USB 3.0 recommended)
    • 1 GB RAM
    • 4 GB free storage space

Additional Software Tools To Speed Up Your Development


OpenVINO™ Toolkit

This toolkit is designed to expedite development of high-performance computer vision solutions and deliver fast, efficient deep learning workloads across Intel® platforms.

Android* Neural Networks API (NNAPI)

This Android* C API is designed for running computationally intensive machine learning operations on mobile devices. It provides a base layer of functionality for higher-level machine learning frameworks, such as TensorFlow Lite* and Caffe2*, which use it to accelerate on-device inference.

Neural Network Optimization Explained

Learn how to use the Intel® Distribution for OpenVINO™ toolkit to develop computer vision applications and convolutional neural networks across Intel® platforms.