35 Search Results


Release Notes for Intel® Distribution of OpenVINO™ toolkit

OpenVINO™ 2018 R3 Release - Gold release of the Intel® FPGA Deep Learning Acceleration Suite accelerates AI inferencing workloads using Intel® FPGAs that are optimized for performance, power, and cost, Windows* support for the Intel® Movidius™ Neural Compute Stick, Python* API preview that supports...

Install the Intel® Distribution of OpenVINO™ toolkit with FPGA Support

NOTES:
- The Intel® Distribution of OpenVINO™ toolkit was formerly known as the Intel® Computer Vision SDK.
- These steps apply to Ubuntu*, CentOS*, and Yocto*. If you are using Intel® Distribution of...

Model Optimizer Developer Guide

Introduction

Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end-...

Using the Model Optimizer to Convert TensorFlow* Models


Using the Model Optimizer to Convert Caffe* Models


Using the Model Optimizer to Convert MXNet* Models


Install the Intel® Distribution of OpenVINO™ toolkit for Linux*

These steps apply to Ubuntu*, CentOS*, and Yocto* and include the following components:
- Model Optimizer
- Inference Engine
- Drivers and runtimes for OpenCL™ version 2.1
- Intel® Media SDK
- OpenCV* version 3.4.2
- OpenVX* version 1.1
- Pre-trained models
- Sample Applications

Intel® Distribution of OpenVINO™ toolkit for Windows* 10

This guide applies to Microsoft Windows* 10 64-bit. For Linux* OS information and instructions, see the Installation Guide for Linux.

Introduction

Important:
- All steps in this guide...

Inference Engine Samples

Image Classification Sample

Description

This topic demonstrates how to build and run the Image Classification sample application, which performs inference using image classification networks such as AlexNet* and GoogLeNet*.

How it works

Upon the...

Inference Engine Developer Guide

Deploying deep learning networks from the training environment to embedded platforms for inference is a complex task. The Inference Engine deployment process converts a trained model to an Intermediate Representation.

Legacy OpenCL™ Runtimes for Intel® Processors

Before accessing this legacy content, see the new portal for OpenCL™ deployments.

What to Download

Installation has two parts:

- Intel® SDK for OpenCL™ Applications package
- Driver and library (runtime) packages...

6th Generation Intel® Core™ Processor Example

OpenCL™ Runtimes for Intel® Processors

Deploy OpenCL™ Runtimes

Obtain runtimes to execute OpenCL™ applications on Intel® Processors

- Intel® Graphics Technology (Intel® GEN Compute Architectures only)
- Intel® Xeon® Processor or Intel® Core™...
