Search Results

Inference Engine Developer Guide

Deploying deep learning networks from the training environment to embedded platforms for inference is a complex task. In the deployment process, the Model Optimizer converts a trained model to an Intermediate Representation (IR), which the Inference Engine then loads and executes.
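
As an illustration, here is a minimal Python sketch of that flow, based on the 2018-era Python API preview (the stable API at the time was C++; class and method names varied between releases, and the file names below are placeholders, so treat the details as assumptions):

```python
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin  # 2018-era preview API

# Load the Intermediate Representation produced by the Model Optimizer:
# an .xml topology file plus a .bin weights file (placeholder paths).
net = IENetwork(model="model.xml", weights="model.bin")

# Pick a target device and load the network onto it.
plugin = IEPlugin(device="CPU")
exec_net = plugin.load(network=net)

# Run inference on a dummy input; (1, 3, 224, 224) assumes a 224x224
# RGB classification input in NCHW layout.
input_blob = next(iter(net.inputs))
image = np.zeros((1, 3, 224, 224), dtype=np.float32)
result = exec_net.infer(inputs={input_blob: image})
```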

Installing the OpenVINO™ Toolkit for Linux*

These steps apply to Ubuntu*, CentOS*, and Yocto*. The installation includes the following components: Model Optimizer, Inference Engine, Drivers and runtimes for OpenCL™ version 2.1, Intel® Media SDK, OpenCV* version 3.4.2, OpenVX* version 1.1, Pre-trained models, and Sample Applications.

Installing the OpenVINO™ Toolkit for Linux* with FPGA Support

NOTE: The OpenVINO™ toolkit was formerly known as the Intel® Computer Vision SDK.
These steps apply to Ubuntu*, CentOS*, and Yocto*. If you are using the OpenVINO™ toolkit on Windows*, see the Installation Guide...

OpenVINO™ Toolkit Release Notes

OpenVINO™ 2018 R3 Release: Gold release of the Intel® FPGA Deep Learning Acceleration Suite, which accelerates AI inferencing workloads using Intel® FPGAs optimized for performance, power, and cost; Windows* support for the Intel® Movidius™ Neural Compute Stick; a Python* API preview that supports...

Inference Engine Samples

Image Classification Sample Description

The Image Classification sample application performs inference using image classification networks such as AlexNet* and GoogLeNet*. The sample application reads command-line parameters and loads a network and...
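
To make the sample's core logic concrete, here is a rough Python sketch of its post-processing step for a 1000-class network; the shipped sample is a C++ application, so the code below is illustrative rather than the sample's actual source.

```python
import numpy as np

def top_k(probs: np.ndarray, k: int = 5):
    """Return (class_index, probability) pairs for the k most likely classes."""
    probs = probs.flatten()
    best = np.argsort(probs)[::-1][:k]  # indices sorted by descending probability
    return [(int(i), float(probs[i])) for i in best]

# Stand-in for a real output blob: a 1000-class network such as
# AlexNet* produces an output of shape (1, 1000).
probs = np.random.rand(1, 1000)
for idx, p in top_k(probs):
    print(f"class {idx}: {p:.4f}")
```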

Model Optimizer Developer Guide

Introduction

Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-...
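
As a sketch of how the tool is typically driven from a script (the flags shown existed in the 2018-era Model Optimizer, but the script location and model paths below are placeholders):

```python
import subprocess

# Convert a trained model to an Intermediate Representation.
# mo.py lives in the Model Optimizer install directory.
subprocess.run(
    [
        "python3", "mo.py",
        "--input_model", "frozen_model.pb",  # trained model to convert
        "--output_dir", "ir/",               # receives the .xml/.bin IR files
        "--data_type", "FP16",               # weight precision for the IR
    ],
    check=True,  # raise if the conversion fails
)
```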

Accelerate Computer Vision and Deep Learning from Edge to Cloud

Last updated: July 24, 2018

OpenVINO™, the Open Visual Inference and Neural Network Optimization toolkit, is free software that helps developers and data scientists speed up computer vision workloads, streamline deep learning inference and deployment, and enable easy, heterogeneous execution across Intel® platforms from edge to cloud.
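
The heterogeneous-execution claim comes down to a device string: the same IR can be retargeted per platform, and the HETERO plugin splits a graph across devices with fallback. A minimal sketch using the 2018-era Python API preview (device naming is an assumption based on that release):

```python
from openvino.inference_engine import IEPlugin

# Pick a target by name; the IR itself does not change.
# "HETERO:FPGA,CPU" runs supported layers on the FPGA and falls
# back to the CPU for the rest.
device = "CPU"  # or "GPU", "MYRIAD", "HETERO:FPGA,CPU"
plugin = IEPlugin(device=device)
# exec_net = plugin.load(network=net)  # 'net' as in the earlier sketch
```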

6th Generation Intel® Core™ Processor Example

OpenCL™ Runtimes for Intel® Processors

What is your goal with OpenCL™ applications: deploy or develop?

View the Legacy OpenCL™ Deployment Webpage or the Intel® FPGA enabling software page.

...

Installing the OpenVINO™ Toolkit for Windows* 10

NOTE: The OpenVINO™ toolkit was formerly known as the Intel® Computer Vision SDK.
These steps apply to Windows* 10. For Linux* instructions, see the Linux installation guide.

Introduction

The OpenVINO...

Legacy OpenCL™ Runtimes for Intel® Processors

Please see the new portal for OpenCL™ deployments prior to accessing this legacy content.

What to Download

Installation has two parts: the Intel® SDK for OpenCL™ Applications package, and the driver and library (runtime) packages...

