Inference of Caffe* and TensorFlow* Trained Models with Intel’s Deep Learning Deployment Toolkit Beta 2017R3
Installing the Deployment Toolkit
Learn how to run computer vision inference faster on Intel® architecture using the Intel® Computer Vision SDK Beta R3. This tutorial walks you through generating the files needed by the Inference Engine from a Caffe model and running the Inference Engine in a C++ application. The source code for this tutorial is available on GitHub.
The release notes for the Intel® Media SDK include important information such as system requirements, what's new, a feature table, and known issues since the previous release.
This paper introduces Intel® software tools recently made available to accelerate deep learning inference in edge devices (such as smart cameras, robotics, and autonomous vehicles) incorporating Intel® Processor Graphics solutions across the spectrum of Intel® SoCs.
Learn core concepts of developing OpenCL™ applications with Intel® SDK for OpenCL™ Applications 2019.
This article provides guidance for transitioning from the Intel® Movidius™ Neural Compute SDK (NCSDK) to the Intel® Distribution of OpenVINO™ toolkit.