Deep Neural Networks

Performance & Network Primitives

Use this collection of performance primitives, optimized for Intel® architecture, for deep-learning applications. The library supports the most commonly used primitives for accelerating image recognition topologies, including AlexNet, VGG (Visual Geometry Group), GoogLeNet, and ResNet*. The primitives cover convolution, inner product, pooling, normalization, and activation operations, with support for both forward (scoring or inference) and backward (gradient propagation) passes. Typically, these functions are used to accelerate the compute-intensive parts of popular deep-learning frameworks such as Caffe*, Theano*, and Torch*.
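
For illustration, here is a minimal sketch of how one such primitive (a forward ReLU activation) is created and executed through the MKL-DNN C++ interface. The names follow the v1.x API (mkldnn.hpp, the mkldnn namespace, and the MKLDNN_ARG_* argument tags); the tensor shape and variable names are chosen for the example, and exact headers and signatures vary between releases, so treat this as an outline rather than a drop-in sample.

    // Sketch only: forward (inference) ReLU activation with the MKL-DNN C++ API.
    // Assumes the v1.x interface (mkldnn.hpp); details differ in other releases.
    #include <vector>
    #include "mkldnn.hpp"

    int main() {
        using namespace mkldnn;

        engine cpu_engine(engine::kind::cpu, 0);   // CPU engine, device index 0
        stream cpu_stream(cpu_engine);             // execution stream on that engine

        // Describe a 1x16x8x8 activation tensor in NCHW layout, f32 data.
        memory::dims dims = {1, 16, 8, 8};
        auto md = memory::desc(dims, memory::data_type::f32, memory::format_tag::nchw);

        // Wrap a user-managed source buffer and a library-allocated destination.
        std::vector<float> src_data(1 * 16 * 8 * 8, -1.0f);
        auto src_mem = memory(md, cpu_engine, src_data.data());
        auto dst_mem = memory(md, cpu_engine);

        // Create the forward-inference ReLU primitive (negative slope = 0).
        auto relu_desc = eltwise_forward::desc(prop_kind::forward_inference,
                                               algorithm::eltwise_relu, md, 0.0f);
        auto relu_pd   = eltwise_forward::primitive_desc(relu_desc, cpu_engine);
        auto relu      = eltwise_forward(relu_pd);

        // Execute the primitive and wait for the stream to finish.
        relu.execute(cpu_stream, {{MKLDNN_ARG_SRC, src_mem},
                                  {MKLDNN_ARG_DST, dst_mem}});
        cpu_stream.wait();
        return 0;
    }

Backward (gradient propagation) primitives follow the same describe-create-execute pattern, with the forward primitive descriptor supplied as a hint.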

In addition to being included in Intel® Math Kernel Library (Intel® MKL), an open-source version of the deep neural network (DNN) primitives is available: Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN). DNN primitive code changes are implemented first in the open-source project and then, once stable, added to the product version.

Download on GitHub*

See below for further notes and disclaimers.1


Training Performance Benchmark

These benchmarks show how training performance has improved over time with Intel MKL, helping you make informed decisions about which functions to use in your applications.


1 Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information, visit www.intel.com/benchmarks.

Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804