1. Robots and ASTRO
To help grow deep learning inference applications at the edge, Intel developed the energy-efficient, low-cost Intel® Movidius™ Neural Compute Stick: a tiny, fanless deep learning device powered by the Intel® Movidius™ Vision Processing Unit (VPU).
This article provides guidance for transitioning from the Intel® Movidius™ Neural Compute SDK (NCSDK) to the Intel® Distribution of OpenVINO™ toolkit.
The TensorFlow* image classification samples below describe a step-by-step approach to modifying the code to scale deep learning training across multiple nodes of an HPC data center.
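Multi-node TensorFlow training of this kind is commonly launched with an MPI-based runner such as Horovod. The sketch below is a hypothetical launch command, not taken from the article: the hostnames, slot counts, and script name are placeholders.

```shell
# Hypothetical two-node Horovod launch (hostnames, slots, and the
# training script are placeholder assumptions, not from the article).
NODES="node01:4,node02:4"      # two hosts, 4 worker slots each
NUM_WORKERS=8                  # total MPI ranks across both nodes
LAUNCH_CMD="mpirun -np ${NUM_WORKERS} -H ${NODES} \
  -bind-to none -map-by slot \
  python train_imagenet.py"
echo "${LAUNCH_CMD}"
```

One rank per physical socket or per group of cores is a typical starting point; the slot counts above would be tuned to the actual node topology.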
Learn how to deploy a computer vision application on a CPU, and then accelerate the deep learning inference on an FPGA.
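In the OpenVINO™ toolkit, the target hardware is selected with a device plugin string, so the same application can run on the CPU first and then offload supported layers to the FPGA via the HETERO plugin. A minimal sketch (the variable name and echo are illustrative):

```shell
# Device selection is just a plugin string passed to the application.
DEVICE="CPU"                 # step 1: validate the pipeline on the CPU
# DEVICE="HETERO:FPGA,CPU"   # step 2: offload supported layers to the FPGA,
                             # falling back to the CPU for the rest
echo "Running inference on: ${DEVICE}"
```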
This article describes performance considerations for CPU inference using Intel® Optimization for TensorFlow*.
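CPU inference tuning with Intel® Optimization for TensorFlow* typically starts with the OpenMP* environment variables below. The variables themselves are the standard knobs; the specific values shown are workload-dependent assumptions, not recommendations from the article.

```shell
# Common starting points for Intel-optimized TensorFlow CPU inference;
# tune the values to the machine's physical core count and workload.
export OMP_NUM_THREADS=4            # OpenMP threads per process
export KMP_BLOCKTIME=1              # ms a thread spins after work before sleeping
export KMP_AFFINITY=granularity=fine,compact,1,0   # pin threads to cores
echo "OMP_NUM_THREADS=${OMP_NUM_THREADS} KMP_BLOCKTIME=${KMP_BLOCKTIME}"
```

Low `KMP_BLOCKTIME` values tend to suit inference workloads, where threads should yield quickly between short bursts of work.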