Intel® Computer Vision SDK Beta R3

Accelerate your computer vision solutions

  • Easily harness the performance of computer vision accelerators from Intel
  • Quickly deploy computer vision algorithms with deep learning support using the included Deep Learning (DL) Deployment Toolkit Beta (also available in a stand-alone version)
  • Add your own custom kernels into your workload pipeline

What is the Intel® Computer Vision SDK Beta R3?

This comprehensive toolkit can be used for developing and deploying computer vision solutions on Intel® platforms, including autonomous vehicles, digital surveillance cameras, robotics, and mixed-reality headsets.

To learn more about new features that include enhanced deep learning, additional operating systems, and improved performance, see Beta R3 Overview.

What's Inside

Integrate Deep Learning Capabilities

The Intel® Computer Vision SDK (Intel® CV SDK) lets you efficiently deploy trained deep learning networks using the Deep Learning Deployment Toolkit, through its high-level C++ Inference Engine APIs.


The deployment process uses two components:

  1. Model optimizer
    • What it is: A command-line tool that imports trained models from popular deep learning frameworks (Caffe*, TensorFlow*, or MXNet*).
    • What it does: Performs static analysis of the trained model, adjusts it for optimal execution on end-point target devices, and serializes the result into Intel's intermediate representation (IR) file and a binary (weights) file.
  2. Inference engine
    • What it is: An execution engine that delivers inference solutions on embedded platforms. It provides an embedded-friendly scoring solution that optimizes execution (computational graph analysis, scheduling, model compression) for target hardware.
    • What it does: Consumes the IR and weights files and provides an optimized C++ execution object, enabling efficient inference in your application.
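As a rough illustration of step 2, the sketch below consumes an IR with the Inference Engine's C++ API. Note an assumption: the class and method names shown (`Core`, `ReadNetwork`, `LoadNetwork`, and so on) come from later OpenVINO-era releases, not necessarily this Beta, and the model path is a placeholder — treat this as a conceptual sketch rather than code for this exact release.

```cpp
#include <inference_engine.hpp>  // Inference Engine header (install path varies by release)

int main() {
    using namespace InferenceEngine;

    Core ie;  // entry point to the Inference Engine

    // Load the IR produced by the Model Optimizer; the weights file
    // ("model.bin") is located automatically next to "model.xml".
    CNNNetwork network = ie.ReadNetwork("model.xml");

    // Compile the network for a target device ("CPU", "GPU", "FPGA", ...).
    ExecutableNetwork executable = ie.LoadNetwork(network, "CPU");

    // Create a request, fill its input blob, and run synchronous inference.
    InferRequest request = executable.CreateInferRequest();
    request.Infer();

    // Retrieve the first output blob by name.
    Blob::Ptr output = request.GetBlob(network.getOutputsInfo().begin()->first);
    return 0;
}
```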

Alternatively, it is possible to integrate Convolutional Neural Networks (CNN) into an OpenVX pipeline. The Model Optimizer generates code snippets that can help build a CNN graph as an OpenVX graph, which you can integrate into your OpenVX application.
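For orientation, here is a minimal skeleton of the OpenVX graph lifecycle that such generated snippets would plug into, using standard Khronos OpenVX 1.x C API calls; the CNN node-building code itself (which the Model Optimizer would generate) is elided.

```cpp
#include <VX/vx.h>  // Khronos OpenVX API

int main() {
    vx_context context = vxCreateContext();
    vx_graph graph = vxCreateGraph(context);

    // Model Optimizer-generated snippets would insert CNN layer nodes here,
    // alongside any other vision nodes in the application's pipeline.

    if (vxVerifyGraph(graph) == VX_SUCCESS) {   // validate the graph once...
        vxProcessGraph(graph);                  // ...then execute it
    }

    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```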

Advantages Over Traditional Programming Approaches

The Intel CV SDK is designed to make the heterogeneous capabilities of Intel® processors more accessible, allowing developers to deliver the hardware's full potential. For example, the Inference Engine builds function pipelines as directed acyclic graphs, which open up additional optimization possibilities (such as automatic fusion). This allows multiple steps of an algorithm to work on the same local data without rewriting each function.
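The fusion idea can be sketched in self-contained C++ (an illustration of the concept, not SDK code): fusing a chain of elementwise stages into a single function means one pass over the buffer applies every step while each element is still local, instead of one full pass per stage.

```cpp
#include <functional>
#include <vector>

// An elementwise pipeline stage (illustrative, not the SDK's representation).
using Stage = std::function<float(float)>;

// Fuse a chain of elementwise stages into a single stage, so one traversal
// of the buffer applies every step while the element is still in cache.
Stage fuse(const std::vector<Stage>& stages) {
    return [stages](float x) {
        for (const auto& s : stages) x = s(x);
        return x;
    };
}

// Apply a stage across a buffer in a single pass.
std::vector<float> run(const Stage& op, std::vector<float> data) {
    for (auto& v : data) v = op(v);
    return data;
}
```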

Pre-optimized functions for CPU, GPU, and now FPGA are available in the SDK, so you can find building blocks to minimize the amount of custom code you need to write and maintain. Implementations of frequently used functions for multiple hardware types with the same interface means that you can easily move work to different compute engines within a processor—even at runtime—without having to recompile.

The Intel CV SDK also includes the ability to extend (create or customize) kernels. These kernels can wrap lower-level heterogeneous approaches like OpenCL to access multiple hardware types.
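Conceptually (this is an illustrative sketch, not the SDK's actual extension API), a custom kernel can sit behind a uniform interface with one implementation registered per device, much as an extended kernel might wrap an OpenCL implementation for the GPU and plain C++ for the CPU:

```cpp
#include <functional>
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// A kernel transforms an input buffer into an output buffer (illustrative).
using Kernel = std::function<std::vector<float>(const std::vector<float>&)>;

// Registry mapping a target device name to an implementation of the same
// kernel interface; an OpenCL-backed version could be registered for "GPU".
class KernelRegistry {
public:
    void add(const std::string& device, Kernel k) { impls_[device] = std::move(k); }

    std::vector<float> run(const std::string& device,
                           const std::vector<float>& in) const {
        auto it = impls_.find(device);
        if (it == impls_.end()) throw std::runtime_error("no kernel for " + device);
        return it->second(in);  // dispatch to the device-specific implementation
    }

private:
    std::map<std::string, Kernel> impls_;
};
```

Because every implementation shares one interface, the caller can retarget work to a different device at runtime simply by changing the device name, without recompiling.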

Introducing FPGA Support

The Intel CV SDK Beta enables Convolutional Neural Network (CNN) workload acceleration on platforms with Intel® Arria® FPGAs. The Deep Learning Deployment Toolkit and OpenVX functions have been extended to deliver inference on the FPGA. To learn more about new features, including enhanced deep learning, supported topologies, and improved performance, see the Intel® Arria® FPGA Support Guide. For deeper technical details and access to collateral, contact your Intel representative or send us an email.

Technical Specs

Development System Platform
  6th Generation Intel® Core™ processor with Intel® Iris® Pro graphics and Intel® HD Graphics
  Operating systems:
    • Ubuntu* 16.04.2 long-term support (LTS), 64 bit
    • CentOS* 7.3, 64 bit
    • Windows® 10, 64 bit

Target System Platforms
  6th Generation Intel® Core™ processor with Intel® Iris® Pro graphics and Intel® HD Graphics, or Intel® Arria® 10 GX FPGA Development Kit
  Operating systems:
    • Ubuntu* 16.04.2 long-term support (LTS), 64 bit
    • CentOS* 7.3, 64 bit
    • Windows® 10, 64 bit (core only)

  Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
  Operating system:
    • Yocto Project* MR3, 64 bit

OpenVX and the OpenVX logo are trademarks of the Khronos Group Inc.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos