Deep Neural Network Functions

Note

The Deep Neural Network (DNN) component in Intel MKL is deprecated and will be removed in a future release. You can continue to use optimized functions for deep neural networks through Intel Math Kernel Library for Deep Neural Networks.

Intel® Math Kernel Library (Intel® MKL) functions for Deep Neural Networks (DNN functions) are a collection of performance primitives for deep neural network applications, optimized for Intel® architecture. The implementation of DNN functions includes the set of primitives necessary to accelerate popular image recognition topologies, such as AlexNet, Visual Geometry Group (VGG), GoogLeNet, and Residual Networks (ResNet).

The primitives implement forward and backward passes for the following operations:

  • Convolution: direct batched convolution

  • Inner product

  • Pooling: maximum, minimum, and average

  • Normalization: local response normalization across channels (LRN) and batch normalization

  • Activation: rectified linear neuron activation (ReLU)

  • Data manipulation: multi-dimensional transposition (conversion), split, concat, sum, and scale

Intel MKL DNN primitives implement a plain C application programming interface (API) that can be used in the existing C/C++ DNN frameworks, as well as in custom DNN applications.

In addition to input and output arrays of DNN applications, the DNN primitives work with special opaque data types to represent the following:

  • DNN operations.

    This data type specifies the operation type (for example, convolution forward propagation or convolution backward filter propagation) and its parameters (for example, the filter size for convolution, or alpha and beta for normalization).

  • Layouts of processed data.

    This data type specifies the relative location in memory of the elements of processed arrays.

Input and output arrays of DNN operations are called resources. Each DNN operation requires its resources to have certain data layouts. The application can query a DNN operation for the required data layouts and check whether the actual layouts of its resources match them.
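As an illustrative sketch of this query-and-check workflow, the following helper uses the single-precision (`_F32`) variants of the layout functions; the helper name prepare_src and the abbreviated error handling are this example's own, not part of the library:

```c
#include <stddef.h>
#include "mkl_dnn.h"

/* Check whether the plain layout of a user array matches the layout a
 * primitive requires for its source resource; if not, create a conversion
 * and a temporary buffer in the required layout. */
static dnnError_t prepare_src(dnnPrimitive_t primitive,
                              dnnLayout_t user_layout,
                              dnnPrimitive_t *to_internal, /* out: conversion, or NULL */
                              void **internal_buf)         /* out: temp array, or NULL */
{
    dnnLayout_t required = NULL;
    dnnError_t err;

    /* Query the primitive for the layout it requires for its source. */
    err = dnnLayoutCreateFromPrimitive_F32(&required, primitive, dnnResourceSrc);
    if (err != E_SUCCESS) return err;

    *to_internal = NULL;
    *internal_buf = NULL;
    if (!dnnLayoutCompare_F32(user_layout, required)) {
        /* Layouts differ: a conversion and a temporary array are needed. */
        err = dnnConversionCreate_F32(to_internal, user_layout, required);
        if (err == E_SUCCESS)
            err = dnnAllocateBuffer_F32(internal_buf, required);
    }
    dnnLayoutDelete_F32(required);
    return err;
}
```

At execution time, when a conversion was created, the application would run dnnConversionExecute_F32 from the user array into the temporary buffer before passing the buffer to dnnExecute_F32.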

A typical application that calls Intel MKL DNN functions goes through the following stages:

  1. Setup

    Given a DNN topology, the application creates all the DNN operations necessary to implement scoring, training, or other application-specific computations. If the output layout of one DNN operation does not match the input layout of the next, the application also creates intermediate conversions and allocates temporary arrays to pass data between them.

  2. Execution

    This stage consists of calls to the DNN primitives that apply the DNN operations, including necessary conversions, to the input, output, and temporary arrays.
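The two stages above might look as follows for a single forward convolution. This is a hedged sketch: it uses the single-precision (`_F32`) variants, the array sizes are illustrative, error handling is abbreviated, and resources are allocated directly in the layouts the primitive requires so that no conversions are needed:

```c
#include <stddef.h>
#include "mkl_dnn.h"

int main(void)
{
    /* -------- Stage 1: Setup -------- */
    /* Describe a direct forward convolution; sizes are listed from the
     * innermost dimension outward: batch of 1, 3 -> 16 channels, 3x3 filter. */
    size_t src[4] = {227, 227, 3, 1};   /* W, H, C, N */
    size_t dst[4] = {225, 225, 16, 1};
    size_t fil[4] = {3, 3, 3, 16};      /* W, H, input C, output C */
    size_t str[2] = {1, 1};
    int    off[2] = {0, 0};

    dnnPrimitive_t conv = NULL;
    dnnError_t err = dnnConvolutionCreateForward_F32(
        &conv, NULL /* default attributes */, dnnAlgorithmConvolutionDirect,
        4, src, dst, fil, str, off, dnnBorderZeros);
    if (err != E_SUCCESS) return 1;

    /* Allocate each resource directly in the layout the primitive requires. */
    void *res[dnnResourceNumber] = {0};
    dnnResourceType_t types[] = {dnnResourceSrc, dnnResourceFilter, dnnResourceDst};
    for (int i = 0; i < 3; i++) {
        dnnLayout_t lt = NULL;
        dnnLayoutCreateFromPrimitive_F32(&lt, conv, types[i]);
        dnnAllocateBuffer_F32(&res[types[i]], lt);
        dnnLayoutDelete_F32(lt);
    }

    /* -------- Stage 2: Execution -------- */
    /* The application would fill the source and filter arrays here. */
    err = dnnExecute_F32(conv, res);

    /* Cleanup. */
    for (int i = 0; i < 3; i++) dnnReleaseBuffer_F32(res[types[i]]);
    dnnDelete_F32(conv);
    return err == E_SUCCESS ? 0 : 1;
}
```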

This section describes the Intel MKL DNN functions, the enumerated types they use, and the array layouts and attributes required to perform DNN operations.

The following table lists Intel MKL DNN functions grouped according to their purpose.

Intel MKL DNN Functions

Handling Array Layouts

  • dnnLayoutCreate: Creates a plain layout.

  • dnnLayoutCreateFromPrimitive: Creates a custom layout.

  • dnnLayoutGetMemorySize: Returns the size of the array specified by a layout.

  • dnnLayoutSerializationBufferSize: Returns the size required for layout serialization.

  • dnnLayoutSerialize: Serializes a layout to a buffer.

  • dnnLayoutDeserialize: Deserializes a layout from a buffer.

  • dnnLayoutCompare: Checks whether layouts are equal.

  • dnnLayoutDelete: Deletes a layout.

Handling Attributes of DNN Operations

  • dnnPrimitiveAttributesCreate: Creates an attribute container.

  • dnnPrimitiveAttributesDestroy: Destroys an attribute container.

  • dnnPrimitiveGetAttributes: Returns the container with attributes set for an instance of a primitive.

DNN Operations

  • dnnConvolutionCreate, dnnGroupsConvolutionCreate: Create propagation operations for convolution layers.

  • dnnInnerProductCreate: Creates propagation operations for inner product layers.

  • dnnReLUCreate: Creates propagation operations for rectified linear neuron activation layers.

  • dnnLRNCreate: Creates propagation operations for layers performing local response normalization across channels.

  • dnnPoolingCreate: Creates propagation operations for pooling layers.

  • dnnBatchNormalizationCreate: Creates propagation operations for batch normalization layers.

  • dnnBatchNormalizationCreate_v2: Creates propagation operations for batch normalization performed using the specified method.

  • dnnSplitCreate: Creates split layers.

  • dnnConcatCreate: Creates concatenation layers.

  • dnnSumCreate: Creates sum layers.

  • dnnScaleCreate: Creates scale layers.

  • dnnConversionCreate: Creates conversion operations.

  • dnnExecute: Performs DNN operations.

  • dnnConversionExecute: Performs a conversion operation.

  • dnnDelete: Deletes descriptions of DNN operations.

  • dnnAllocateBuffer: Allocates an array with a given layout.

  • dnnReleaseBuffer: Releases an array allocated by dnnAllocateBuffer.
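As a hedged sketch of the layout- and buffer-handling functions listed above (single-precision `_F32` variants; the array dimensions and strides are illustrative):

```c
#include <stddef.h>
#include "mkl_dnn.h"

int main(void)
{
    /* A plain layout for a 1x3x224x224 array: sizes and strides are
     * listed from the innermost (fastest-varying) dimension outward. */
    size_t size[4]    = {224, 224, 3, 1};
    size_t strides[4] = {1, 224, 224 * 224, 224 * 224 * 3};

    dnnLayout_t layout = NULL;
    if (dnnLayoutCreate_F32(&layout, 4, size, strides) != E_SUCCESS)
        return 1;

    /* dnnLayoutGetMemorySize reports the bytes the array occupies;
     * dnnAllocateBuffer returns suitably aligned memory for it. */
    size_t bytes = dnnLayoutGetMemorySize_F32(layout);
    void *buf = NULL;
    if (dnnAllocateBuffer_F32(&buf, layout) != E_SUCCESS) {
        dnnLayoutDelete_F32(layout);
        return 1;
    }

    /* ... fill and use the array (bytes / sizeof(float) elements) ... */
    (void)bytes;

    dnnReleaseBuffer_F32(buf);
    dnnLayoutDelete_F32(layout);
    return 0;
}
```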

Optimization Notice

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804

For more complete information about compiler optimizations, see our Optimization Notice.