Deep Neural Networks
Performance & Network Primitives
Use this collection of performance primitives for deep-learning applications that are optimized for Intel® architecture. The library supports the most commonly used primitives for accelerating image recognition topologies, including AlexNet, Visual Geometry Group (VGG), GoogLeNet, and ResNet*. The primitives include convolution, inner product, pooling, normalization, and activation primitives, with support for forward (scoring or inference) and backward (gradient propagation) operations. Typically, these functions are used to accelerate the compute-intensive parts of popular deep-learning frameworks, including Caffe*, Theano*, and Torch*.
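To make the terminology concrete, the following is a minimal NumPy sketch of what two of the primitives named above (convolution and an activation) compute in the forward, or inference, direction. This is a conceptual illustration only; the library itself exposes heavily optimized C/C++ APIs, and none of the function names below come from it.

```python
import numpy as np

def conv2d_forward(x, w):
    """Naive forward (inference) 2D convolution primitive:
    slide the kernel over the input and take dot products.
    No padding, stride 1 -- a conceptual sketch, not the library API."""
    H, W = x.shape
    kH, kW = w.shape
    out = np.empty((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kH, j:j + kW] * w)
    return out

def relu(x):
    """Activation primitive: element-wise max(0, x)."""
    return np.maximum(x, 0.0)

# Tiny example: a 3x3 input and a 2x2 kernel.
x = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
w = np.array([[-1., 0.],
              [0., 1.]])
y = relu(conv2d_forward(x, w))  # forward pass: convolution, then activation
```

The optimized implementations perform the same mathematics but use vectorization, cache blocking, and multithreading; the backward (gradient propagation) variants of each primitive compute the corresponding derivatives for training.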
In addition to being included in Intel® Math Kernel Library (Intel® MKL), an open-source version of the deep neural network (DNN) primitives is available: Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN). Implement DNN primitive code changes in the open-source project first; once they are stable, add them to the product version.