Improve Deep Learning Performance, Enable Inference on FPGAs with Intel® Computer Vision SDK Beta R3


The R3 Beta Release Provides New Deep Learning Capabilities, Expanded Framework Support, & Performance Improvements

Software developers and data scientists working on computer vision, neural network inference, and deep learning deployment for smart cameras, robotics, office automation, and autonomous vehicles can accelerate their solutions across multiple types of platforms: CPU, GPU, and now FPGA. The new Intel® Computer Vision SDK Beta R3 (Intel® CV SDK) delivers support on select Intel® Arria® FPGA platforms. This latest toolkit also improves other deep learning and traditional computer vision capabilities; expands support for custom layers, fp16, and topology-level tuning of Caffe* framework models; and adds a technical preview for importing TensorFlow* and MXNet* framework models.


Get more details about new and enhanced features below.

 

Introducing FPGA Support

The Intel® CV SDK Beta R3 release now supports Convolutional Neural Network (CNN) workload acceleration on target systems with an Intel® Arria® 10 GX FPGA Development Kit. Using the SDK's Deep Learning Deployment Toolkit together with OpenVX™ delivers inference on FPGAs.

A typical computer vision pipeline in a deep learning application may consist of vision functions (vision nodes) and CNN nodes. The Intel CV SDK software package includes the Model Optimizer utility, which accepts pre-trained models from popular deep learning frameworks such as Caffe, TensorFlow, and MXNet and generates Inference Engine-based CNN nodes as C code.

Developers can combine the Inference Engine-based CNN nodes with other vision functions to form a complete computer vision pipeline application. The CNN nodes are accelerated on the FPGA add-in card, while the rest of the vision pipeline executes on the host Intel® architecture processor, as sketched in the example below.
 

Computer Vision Pipeline Application
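
To make the node-level split concrete, here is a minimal C++ sketch of an OpenVX graph that pairs an ordinary vision node with a CNN node and assigns the CNN node to the FPGA target. The CNN node factory cnnInferenceNode() and the target string "intel.fpga" are hypothetical stand-ins for the Model Optimizer's generated code and the vendor target name; the image, array, Gaussian filter, and vxSetNodeTarget calls are standard OpenVX API.

    // Minimal sketch: an OpenVX graph mixing a standard vision node with a CNN node.
    // cnnInferenceNode() is a hypothetical stand-in for the Model Optimizer's generated code.
    #include <VX/vx.h>

    // Hypothetical factory exposed by the generated CNN node code.
    extern vx_node cnnInferenceNode(vx_graph graph, vx_image input, vx_array results);

    int main() {
        vx_context context = vxCreateContext();
        vx_graph   graph   = vxCreateGraph(context);

        // Camera frame (grayscale for simplicity) and a preprocessed copy.
        vx_image frame    = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
        vx_image filtered = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
        vx_array results  = vxCreateArray(context, VX_TYPE_FLOAT32, 1000);

        // Traditional vision node: runs on the host Intel architecture processor.
        vxGaussian3x3Node(graph, frame, filtered);

        // CNN node generated from the pre-trained model: offloaded to the FPGA.
        vx_node cnn = cnnInferenceNode(graph, filtered, results);
        vxSetNodeTarget(cnn, VX_TARGET_STRING, "intel.fpga");  // target name is illustrative

        if (vxVerifyGraph(graph) == VX_SUCCESS)
            vxProcessGraph(graph);  // vision on CPU, CNN inference on FPGA

        vxReleaseGraph(&graph);
        vxReleaseContext(&context);
        return 0;
    }

Because the graph is verified and executed as a whole, the OpenVX runtime can schedule each node on its assigned target while managing the data movement between the vision and CNN stages.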


To learn more about new features that include enhanced deep learning, supported topologies, and improved performance, see the Intel® Arria® FPGA Support Guide. For deeper technical details and access to collateral, contact your Intel representative or send us an email.

Share Your Insight on FPGA Topologies Needed

Intel is interested in connecting with customers who are using the new FPGA features and in learning which FPGA topologies are most needed. Connect with Intel at our public Computer Vision SDK community forum or by email.

 

Optimize Deep Learning

The Intel CV SDK Beta R3, which contains the Deep Learning Deployment Toolkit, also provides new capabilities and support for additional framework models, opening up more usages. It:

  • Supports custom layers, fp16, and topology-level tuning for Caffe framework models, giving the Inference Engine near-universal coverage of image and object recognition topologies with high performance and portability across multiple types of Intel platforms. (A minimal model-loading sketch follows this list.)
  • Technical preview: Adds new import capabilities for TensorFlow and MXNet framework models into the Deep Learning Deployment Toolkit Inference Engine. More details can be found in the documentation.
  • Adds new capabilities and code samples for Neural Style Transfer and Semantic Segmentation topologies.
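
As a rough illustration of how a converted model is consumed on the deployment side, the sketch below loads the Intermediate Representation (IR) produced by the Model Optimizer and runs a single synchronous inference through the Inference Engine C++ API. The class and method names follow the later-documented pre-2020 Inference Engine API (CNNNetReader, PluginDispatcher, InferRequest); the exact names shipped in Beta R3 may differ, and the IR file names and device string are placeholders.

    // Minimal sketch: load an IR produced by the Model Optimizer and run one inference.
    // Names follow the later-documented pre-2020 Inference Engine C++ API; the exact
    // API shipped in Beta R3 may differ. File and device names are placeholders.
    #include <inference_engine.hpp>
    #include <iostream>

    using namespace InferenceEngine;

    int main() {
        // Read the Intermediate Representation produced by the Model Optimizer.
        CNNNetReader reader;
        reader.ReadNetwork("model.xml");   // placeholder IR file names
        reader.ReadWeights("model.bin");
        CNNNetwork network = reader.getNetwork();

        // Pick the target device plugin: "CPU", "GPU", or the FPGA plugin.
        InferencePlugin plugin = PluginDispatcher({""}).getPluginByDevice("CPU");
        ExecutableNetwork executable = plugin.LoadNetwork(network, {});
        InferRequest request = executable.CreateInferRequest();

        // Input/output names come from the network description in the IR.
        std::string inputName  = network.getInputsInfo().begin()->first;
        std::string outputName = network.getOutputsInfo().begin()->first;

        // A real application would fill this blob with preprocessed image data.
        Blob::Ptr input = request.GetBlob(inputName);

        request.Infer();  // synchronous inference

        Blob::Ptr output = request.GetBlob(outputName);
        std::cout << "Output blob element count: " << output->size() << std::endl;
        return 0;
    }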

Deep learning enhancements also extend to functions running on Intel GPUs, along with significant performance improvements:

  • Provides an auto-tuning mechanism for choosing the best kernel/primitive implementation for a given Intel GPU.
  • Delivers performance improvements of up to 60 percent or more1 for select topologies and batch sizes (PVANET, ResNet-50, and GoogLeNet v3 at batch 32; SSD_VGG at batch 1) with new primitives. (Source: Intel Corporation.)

 

Traditional Computer Vision Enhancements

  • Reduces the memory footprint of OpenVX pipelines.
  • Supports the Khronos OpenVX Neural Network Extension 1.2 and is compatible with Ubuntu*, CentOS*, and Yocto* operating systems when deployed on an Intel CPU.
  • Makes OpenVX application development easier through a new Eclipse* plugin that provides a fully integrated development environment: create a new OpenVX project, add and edit graphs with the graph designer, generate code automatically when graphs are modified, and profile and debug graphs.

Download the new Intel CV SDK Beta R3 now.


 

1Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark & MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Benchmark Source: Intel Corporation. Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User & Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #20110804

OpenVX and the OpenVX logo are trademarks of the Khronos Group Inc.

For more complete information about compiler optimizations, see our Optimization Notice.