37 Search Results


Inference Engine Developer Guide

Deploying deep learning networks from the training environment to embedded platforms for inference is a complex task. The Inference Engine deployment process converts a trained model to an Intermediate Representation.
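The conversion step mentioned above can be sketched as a single Model Optimizer invocation that emits the Intermediate Representation (an .xml topology file plus a .bin weights file). The install path and model file name below are illustrative assumptions, not taken from this page:

```shell
# Sketch: convert a trained model to Intermediate Representation (IR).
# The toolkit path and model name are hypothetical; adjust to your installation.
MO=/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py

python3 "$MO" \
    --input_model my_model.caffemodel \
    --output_dir ./ir    # writes my_model.xml and my_model.bin to ./ir
```

The Inference Engine then loads the resulting .xml/.bin pair on the target device.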

Inference Engine Samples

Image Classification Sample Description

This topic demonstrates how to run the Image Classification sample application, which performs inference using image classification networks such as AlexNet* and GoogLeNet*.

How It Works

Upon the start-...

Model Optimizer Developer Guide

Introduction

Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end-...

Release Notes for Intel® Distribution of OpenVINO™ toolkit

OpenVINO™ 2018 R3 Release - Gold release of the Intel® FPGA Deep Learning Acceleration Suite, which accelerates AI inferencing workloads using Intel® FPGAs optimized for performance, power, and cost; Windows* support for the Intel® Movidius™ Neural Compute Stick; a Python* API preview that supports...

Using the Model Optimizer to Convert MXNet* Models

Introduction

The Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end...

Using the Model Optimizer to Convert Caffe* Models

Introduction

The Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end...

Using the Model Optimizer to Convert TensorFlow* Models

Introduction

The Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end...

Install the Intel® Distribution of OpenVINO™ toolkit for Linux with FPGA Support

NOTES:
- The Intel® Distribution of OpenVINO™ toolkit was formerly known as the Intel® Computer Vision SDK.
- These steps apply to Ubuntu*, CentOS*, and Yocto*. If you are using Intel® Distribution of...

Intel Vision Accelerator Design with an Intel Arria 10 FPGA Installation Guide

Last updated: December 26, 2018

This document introduces the functions and features of the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA, and then provides instructions to use the sample applications included with this vision accelerator.

This document...

Install Intel® Distribution of OpenVINO™ toolkit for Windows* 10

This guide applies to Microsoft Windows* 10 64-bit. For Linux* OS information and instructions, see the Installation Guide for Linux. 

Introduction

Important:
- All steps in this guide...

Install the Intel® Distribution of OpenVINO™ toolkit for Linux*

These steps apply to Ubuntu*, CentOS*, and Yocto* and include the following components: Model Optimizer, Inference Engine, Drivers and runtimes for OpenCL™ version 2.1, Intel® Media SDK, OpenCV* version 3.4.2, OpenVX* version 1.1, Pre-trained models, and Sample Applications.

Color Copy OpenVX* Sample

Last updated: November 28, 2018

This sample shows the implementation of a "Color Copy" pipeline (specific to the Printing and Imaging domain) using OpenVX*. The "Color Copy" pipeline is a workload that would typically be used within a multi-function printer (...
