Intel® Computer Vision SDK Beta R3

Accelerate your computer vision solutions

  • Easily harness the performance of computer vision accelerators from Intel
  • Quickly deploy computer vision algorithms with deep learning support using the included Deep Learning (DL) Deployment Toolkit Beta (also available in a stand-alone version)
  • Add your own custom kernels into your workload pipeline

How to Choose the Right Tool

OpenCV* is easy to use and a good starting point for general computer vision flows. It offers a large number of primitives that are easy to customize and provide a reasonable performance baseline.
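
For example, a basic OpenCV flow takes only a few lines of C++. The sketch below reads an image, smooths it, and runs edge detection; the file names are placeholders and error handling is minimal.

    #include <opencv2/opencv.hpp>

    int main() {
        // "input.jpg" is a placeholder; substitute your own image path.
        cv::Mat image = cv::imread("input.jpg");
        if (image.empty()) return 1;

        cv::Mat gray, edges;
        cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);      // convert to grayscale
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);  // smooth before edge detection
        cv::Canny(gray, edges, 50, 150);                    // detect edges

        cv::imwrite("edges.png", edges);
        return 0;
    }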

If further acceleration is needed, choose an API based on where the performance-critical path of your pipeline lies:

  • For inference acceleration, use the Deep Learning Inference Engine, which provides the best CNN primitive coverage and performance on Intel® platforms.
  • If your pipeline features classic computer vision, consider using OpenVX*, which provides execution scheduling, heterogeneous performance, and other low-level optimization capabilities for image processing.

Accelerate and Deploy Convolutional Neural Networks (CNN)

In addition to being computationally intensive, deploying CNNs can be a complex task. Using the Deep Learning Inference Engine is the fastest way to accelerate CNN inference on Intel platforms. The SDK offers dedicated tools that automatically and seamlessly take trained models and optimize them for a target configuration. The tools support popular frameworks, such as Caffe*, TensorFlow*, and now MXNet*, making integration into your existing tool flow easier for image classification, image segmentation, and object detection.
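
As an illustration of the deployment flow, the sketch below loads a model already converted by the Model Optimizer and selects a device plugin through the classic Inference Engine C++ API. This is a minimal sketch: the class and function names shown (CNNNetReader, PluginDispatcher, getSuitablePlugin) follow that API as commonly documented and may differ slightly in this Beta's headers, and the file names are placeholders.

    #include <inference_engine.hpp>

    using namespace InferenceEngine;

    int main() {
        // Read an Intermediate Representation produced by the Model Optimizer from a
        // Caffe/TensorFlow/MXNet model ("model.xml"/"model.bin" are placeholder names).
        CNNNetReader reader;
        reader.ReadNetwork("model.xml");   // network topology
        reader.ReadWeights("model.bin");   // binary weights
        CNNNetwork network = reader.getNetwork();

        // Select the plugin for the target device (CPU here); retargeting to GPU or
        // FPGA is a matter of requesting a different plugin.
        InferenceEnginePluginPtr plugin =
            PluginDispatcher({""}).getSuitablePlugin(TargetDevice::eCPU);

        // The network is then loaded into the plugin and inference is run on input
        // blobs; see the Inference Engine samples shipped with the SDK for the
        // complete flow.
        return 0;
    }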

You can also create custom kernels that integrate into your workload graphs using C, C++ (for a CPU), and OpenCL™ (for GPU-assisted execution).

Introducing FPGA Support for CNN

The Intel® Computer Vision SDK (Intel® CV SDK) Beta supports CNN workload acceleration on the Intel® Arria® FPGA. Specific CNN nodes can be accelerated on the FPGA add-on card, while the rest of the vision pipeline is executed on the Intel®-based host processor.

Standard topologies are supported and the list is growing to keep pace with industry developments and trends. For a detailed list of topologies, contact your Intel representative or send us an email.

Sample Workloads

Get started by using inference samples for:

  • Classification
  • Object detection
  • Segmentation

Advanced topics include Inference Engine extensibility examples, handling multiple outputs from the network, and more.

Try OpenVX* for Accelerating Classic Computer Vision

OpenVX* is a standard API for production deployment of accelerated computer vision applications. It offers a higher level of abstraction that enables portable, cross-platform performance by expressing workloads as connected dataflow graphs.

Computer vision tasks are naturally expressed as graphs because they consist of a sequence of operations that repeatedly process data from an input stream. OpenVX optimizes a graph by executing each workflow stage on the most suitable node of the heterogeneous platform, and its runtime uses Intel® Threading Building Blocks for enhanced scheduling and heterogeneous performance (see Intel Threading Building Blocks). You can also create custom kernels that integrate into your workload graphs using C, C++ (for a CPU), and OpenCL (for GPU-assisted execution).
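
To make the graph model concrete, here is a minimal sketch of a two-node OpenVX graph (a Gaussian blur feeding a Sobel filter), using the standard OpenVX C API. The image size is arbitrary, the input image is left unpopulated, and error checking is omitted for brevity.

    #include <VX/vx.h>

    int main() {
        // Create the context and an empty graph to hold the pipeline.
        vx_context context = vxCreateContext();
        vx_graph   graph   = vxCreateGraph(context);

        vx_uint32 width = 640, height = 480;  // example frame size
        vx_image input   = vxCreateImage(context, width, height, VX_DF_IMAGE_U8);
        vx_image blurred = vxCreateVirtualImage(graph, width, height, VX_DF_IMAGE_U8);
        vx_image grad_x  = vxCreateImage(context, width, height, VX_DF_IMAGE_S16);
        vx_image grad_y  = vxCreateImage(context, width, height, VX_DF_IMAGE_S16);

        // Nodes are connected through their data objects; the runtime sees the
        // whole graph and can schedule each node on the most suitable device.
        vxGaussian3x3Node(graph, input, blurred);
        vxSobel3x3Node(graph, blurred, grad_x, grad_y);

        if (vxVerifyGraph(graph) == VX_SUCCESS) {
            vxProcessGraph(graph);   // execute once per input frame in a real pipeline
        }

        vxReleaseGraph(&graph);
        vxReleaseContext(&context);
        return 0;
    }

Because the runtime verifies and owns the whole graph before execution, it can schedule and optimize the stages across the platform without changes to the application code.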

For more information, see OpenVX from Khronos* Group. Refer to the Sample Workloads page for code examples.

Optimized Computer Vision Kernels for OpenVX

When building a workload graph for classic computer vision, choose from a variety of optimized building blocks, including built-in kernels from OpenVX and numerous extensions from Intel.

Vision Algorithm Designer

Included as part of the Intel CV SDK, this tool makes development of graph-based computer vision algorithms easier.

  • Create OpenVX graphs with an intuitive user interface
  • Generate OpenVX code for graphs that streamlines integration into existing code
  • Develop using the Eclipse* plug-in for a fully integrated OpenVX development system
  • Trace, debug, profile, analyze, and visualize your applications with built-in capabilities

Additional Tools

Intel® Media Server Studio
Create media applications and solutions for the data center, cloud, and networks.

Intel® Media SDK
Deliver fast video and image processing for IoT, embedded, mobile, and client applications and devices.

Intel® SDK for OpenCL™ Applications
A comprehensive OpenCL development environment for platforms from Intel.