Overview | Intel® Computer Vision SDK Beta

Intel® Computer Vision SDK Beta R2

Accelerate your computer vision solutions

  • Easily harness the performance of computer vision accelerators from Intel
  • Quickly deploy computer vision algorithms with deep learning support using the included Deep Learning (DL) Deployment Toolkit Beta (also available in a stand-alone version)
  • Add your own custom kernels into your workload pipeline

What is the Intel® Computer Vision SDK Beta R2?

This comprehensive toolkit can be used for developing and deploying computer vision solutions on Intel® platforms, including autonomous vehicles, digital surveillance cameras, robotics, and mixed-reality headsets.

To learn more about new features that include enhanced deep learning, additional operating systems, and improved performance, see Beta R2 Overview.

What's Inside

Intel® CV SDK features three technologies for developing and deploying vision-oriented solutions:

Deep Learning Inference Engine

  • C++ library for inference acceleration.
  • Out-of-the-box solution for deep learning performance with the highest level of abstraction.

OpenCV*

A well-known, open-source computer vision and machine learning library.

OpenVX*

A low-level, graph-based API with heterogeneity support, optimized for Intel® technology.

If the functionality you need is not already available in the supplied library, you can create custom kernels in C, C++, or OpenCL™. Custom kernels can be easily incorporated into your overall workload graph using the included Vision Algorithm Designer, which allows developers to:

  • Create workloads using drag-and-drop dataflow graphs that combine supplied and custom kernels
  • Export code for integration into complete, functional applications
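To make the idea concrete, here is a deliberately simplified, self-contained sketch of a workload pipeline that mixes a "supplied" kernel with a "custom" one. The types and names here are illustrative only; in the actual SDK, kernels are wired into an OpenVX graph through the Vision Algorithm Designer rather than a plain function list.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Illustrative stand-ins, not the SDK API.
using Image  = std::vector<uint8_t>;
using Kernel = std::function<Image(const Image&)>;

// A "supplied" kernel: binary threshold at 128.
inline Image threshold128(const Image& in) {
    Image out(in.size());
    for (size_t i = 0; i < in.size(); ++i) out[i] = in[i] >= 128 ? 255 : 0;
    return out;
}

// A "custom" kernel the developer adds: invert each pixel.
inline Image invert(const Image& in) {
    Image out(in.size());
    for (size_t i = 0; i < in.size(); ++i) out[i] = 255 - in[i];
    return out;
}

// A linear pipeline combining supplied and custom kernels, run in order.
inline Image runPipeline(const std::vector<Kernel>& nodes, Image img) {
    for (const auto& k : nodes) img = k(std::move(img));
    return img;
}
```

The point of the sketch is that a custom kernel satisfies exactly the same interface as a supplied one, so the pipeline does not need to treat it specially.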

Sample workloads are provided to help you quickly get started creating your own computer vision solutions. These samples demonstrate how to use the Intel® Computer Vision SDK (Intel® CV SDK) in scenarios ranging from common tasks, such as working in a heterogeneous compute environment and processing video streams, to more domain-specific workloads, such as those found in automotive applications.

Integrate Deep Learning Capabilities

With Intel CV SDK features and support, efficiently deploy trained deep learning networks using the Deep Learning Deployment Toolkit through:

  • High-level C++ inference engine APIs
  • Convolutional neural networks (CNN) integrated into an OpenVX* pipeline

The deployment process uses two components:

  1. Model optimizer
    • What it is: A command-line tool that imports trained models from popular deep learning frameworks, performs static model analysis and adjustments for optimal execution on end-point target devices, and serializes the adjusted model into Intel's intermediate representation (IR) file.
    • What it does: Converts a model trained in TensorFlow* or Caffe* and optimizes it offline into a framework-independent model (IR file).
  2. Inference engine
    • What it is: Optimized inference execution engine that delivers small-footprint inference solutions on embedded inference platforms. Enables seamless integration with application logic and eases transition between Intel® platforms through supporting the same API across a variety of platforms.
    • What it does: Consumes the IR file and provides an optimized C++ execution object, enabling efficient inference in your application.

To integrate CNNs into your OpenVX compute pipeline, the model optimizer generates code snippets that build the CNN graph as an OpenVX graph, which you can then integrate into your OpenVX application.

Advantages Over Traditional Programming Approaches

The Intel CV SDK is designed to make the heterogeneous capabilities of processors from Intel more accessible, thereby allowing developers to deliver the hardware's full potential. Its OpenVX approach includes setting up function pipelines as directed acyclic graphs that allow additional optimization possibilities by fully describing the data flow through a set of algorithms. Approaches like automatic tiling are possible, which can allow multiple steps of an algorithm to work on the same local data without rewriting each function.
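The payoff of describing data flow as a graph can be shown with a hand-written, self-contained sketch (assumed names, not SDK code): two pipeline stages run either as separate full passes with an intermediate buffer, or fused so both stages touch each element while it is still in local storage. A graph compiler can derive the fused form automatically once the whole data flow is known.

```cpp
#include <vector>

// Two pipeline stages: add a constant, then double.
// Unfused: each stage makes a full pass over the buffer.
inline std::vector<int> unfused(std::vector<int> v) {
    for (auto& x : v) x += 3;  // pass 1 over the whole buffer
    for (auto& x : v) x *= 2;  // pass 2 over the whole buffer
    return v;
}

// Fused ("tiled"): both stages applied in one traversal, so the
// intermediate result never leaves local storage.
inline std::vector<int> fused(const std::vector<int>& v) {
    std::vector<int> out;
    out.reserve(v.size());
    for (int x : v) out.push_back((x + 3) * 2);
    return out;
}
```

Both versions compute the same result; the fused form simply halves the memory traffic, which is exactly the kind of optimization a fully described graph makes possible without rewriting each function.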

Pre-optimized functions for CPU and GPU are available in the SDK, so you can find building blocks to minimize the amount of custom code you need to write and maintain. Implementations of frequently used functions for multiple hardware types with the same interface means that you can easily move work to different compute engines within a processor—even at runtime—without having to recompile.
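The "same interface, different engine" idea can be sketched in a few lines of self-contained C++ (hypothetical names, not the SDK API): each target registers an implementation of one function signature, and the caller selects a target by name at runtime without recompiling.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// One shared signature: y[i] += a * x[i] (a SAXPY-style operation).
using Saxpy = std::function<void(float, const std::vector<float>&,
                                 std::vector<float>&)>;

// A registry mapping target names to implementations of that signature.
inline std::map<std::string, Saxpy>& registry() {
    static std::map<std::string, Saxpy> r{
        {"cpu", [](float a, const std::vector<float>& x, std::vector<float>& y) {
             for (size_t i = 0; i < y.size(); ++i) y[i] += a * x[i];
         }},
        // A real "gpu" entry would dispatch to an offloaded kernel; this
        // one shares the scalar code purely for illustration.
        {"gpu", [](float a, const std::vector<float>& x, std::vector<float>& y) {
             for (size_t i = 0; i < y.size(); ++i) y[i] += a * x[i];
         }}};
    return r;
}

// Callers pick the compute engine at runtime by name.
inline void runOn(const std::string& target, float a,
                  const std::vector<float>& x, std::vector<float>& y) {
    registry().at(target)(a, x, y);
}
```

Because every target implements the identical signature, moving work between engines is a one-string change at runtime rather than a code change.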

The Intel CV SDK also includes the ability to extend (create or customize) kernels. These kernels can wrap lower-level heterogeneous approaches like OpenCL to access multiple hardware types.

Technical Specs

Development System

  Platform: 6th Generation Intel® Core™ processor with Intel® Iris® Pro graphics and Intel® HD Graphics
  OS:
    • Ubuntu* 16.04.2 long-term support (LTS), 64 bit
    • CentOS* 7.2, 64 bit
    • (New) Windows® 10, 64 bit

Target Systems

  Platform: 6th Generation Intel® Core™ processor with Intel® Iris® Pro graphics and Intel® HD Graphics
  OS:
    • Ubuntu* 16.04.2 long-term support (LTS), 64 bit
    • CentOS* 7.2, 64 bit
    • (New) Windows® 10, 64 bit

  Platform: Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
  OS:
    • Yocto Project* MR3, 64 bit

OpenVX and the OpenVX logo are trademarks of the Khronos Group Inc.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos