Types of Layers Implemented
Intel DAAL provides the following types of layers:
- Fully-connected layers, which compute the inner product of all weighted inputs plus bias (see the usage sketch after this list).
- Activation layers, which apply a transform to the input data:
  - Absolute Value (Abs) Layers
  - Parametric Rectifier Linear Unit (pReLU) Layers
  - Rectifier Linear Unit (ReLU) Layers
  - Smooth Rectifier Linear Unit (SmoothReLU) Layers
  - Hyperbolic Tangent Layers
  - Exponential Linear Unit (ELU) Layers
- Normalization layers, which normalize the input data.
- Dropout layers, which prevent the neural network from overfitting.
- Pooling layers, which apply a form of non-linear downsampling to input data:
  - 1D Max Pooling Layers
  - 2D Max Pooling Layers
  - 3D Max Pooling Layers
  - 1D Average Pooling Layers
  - 2D Average Pooling Layers
  - 3D Average Pooling Layers
  - 2D Stochastic Pooling Layers
  - 2D Spatial Pyramid Pooling Layers
- Convolutional and locally-connected layers, which apply filters to input data.
- Service layers, which apply service operations to the input tensors.
- Softmax layers, which measure confidence of the output of the neural network.
- Loss layers, which measure the difference between the output of the neural network and ground truth.
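For illustration, here is a minimal C++ sketch of invoking one of these layers, a fully-connected forward layer computed in batch mode. The input sizes, the number of outputs m, and the all-ones contents are assumptions made for the example; error-status checks are omitted.

    #include "daal.h"

    using namespace daal;
    using namespace daal::algorithms::neural_networks;
    using namespace daal::data_management;
    using namespace daal::services;

    int main()
    {
        /* Hypothetical 4 (samples) x 3 (features) input tensor, all ones */
        const size_t nSamples = 4, nFeatures = 3;
        float inputArray[nSamples * nFeatures];
        for (size_t i = 0; i < nSamples * nFeatures; i++) inputArray[i] = 1.0f;

        Collection<size_t> dims;
        dims.push_back(nSamples);
        dims.push_back(nFeatures);
        SharedPtr<Tensor> inputTensor(new HomogenTensor<float>(dims, inputArray));

        /* Fully-connected forward layer producing m = 2 outputs per sample */
        const size_t m = 2;
        layers::fullyconnected::forward::Batch<float> fcLayer(m);
        fcLayer.input.set(layers::forward::data, inputTensor);

        /* Computes value = data x weights^T + biases; weights and biases
           are filled by the layer's default initializer */
        fcLayer.compute();
        SharedPtr<Tensor> value = fcLayer.getResult()->get(layers::forward::value);
        return 0;
    }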
When using Intel DAAL neural networks, be aware of the following assumptions:
- In Intel DAAL, each data sample is numbered by a single scalar index.
- For neural network layers, the first dimension of the input tensor represents the data samples.
- While the actual layout of the data can differ, the access methods of the tensor return the data in the assumed layout. Therefore, for a tensor that contains the input to the neural network, it is your responsibility to change the logical indexing of the tensor dimensions so that the first dimension represents the data samples. To do this, use the shuffleDimensions() method of the Tensor class, as in the sketch below.
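A minimal C++ sketch of such a reordering, assuming data that arrived as features x samples. The dimension-order collection passed to shuffleDimensions() is assumed here to list the new order of the existing dimensions; check this interpretation against the library documentation.

    #include "daal.h"

    using namespace daal::data_management;
    using namespace daal::services;

    int main()
    {
        /* Suppose the data arrived as a 3 (features) x 4 (samples) tensor,
           so the samples are NOT along the first dimension */
        float raw[12];
        for (size_t i = 0; i < 12; i++) raw[i] = (float)i;

        Collection<size_t> dims;
        dims.push_back(3); /* features */
        dims.push_back(4); /* samples  */
        SharedPtr<HomogenTensor<float> > data(new HomogenTensor<float>(dims, raw));

        /* Change the logical indexing so that the former dimension 1 (samples)
           becomes dimension 0, as the neural network layers assume */
        Collection<size_t> dimsOrder;
        dimsOrder.push_back(1);
        dimsOrder.push_back(0);
        data->shuffleDimensions(dimsOrder);

        /* The tensor is now logically 4 (samples) x 3 (features), and the
           access methods return the data in this layout */
        return 0;
    }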
Several neural network layers listed below support in-place computation, meaning that the result overwrites the input memory, under the following conditions:
- Both the input and the result are represented by a homogeneous tensor of a floating-point type identical to the algorithmFPType type used by the layer for intermediate computations.
- The input of the layer is unique, that is, not shared between multiple layers in the neural network topology (for example, the inputs of layers that have a split layer as their preceding layer are shared).
- Required only for the layers marked with the (*) sign: the layer is used at the prediction stage, not in the forward computation step of the training stage.
The following layers support in-place computation (see the sketch after this list):
- Absolute Value (Abs) Forward Layer (*)
- Absolute Value (Abs) Backward Layer
- Logistic Forward Layer (*)
- Logistic Backward Layer
- Rectifier Linear Unit (ReLU) Forward Layer (*)
- Rectifier Linear Unit (ReLU) Backward Layer
- Hyperbolic Tangent Forward Layer (*)
- Hyperbolic Tangent Backward Layer
- Split Forward Layer
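For illustration, a minimal C++ sketch of a ReLU forward layer set up so that the conditions above can hold. The tensor contents are assumptions, the predictionStage flag is assumed from the layer parameter documentation, and whether the result actually reuses the input memory is decided by the library.

    #include "daal.h"

    using namespace daal;
    using namespace daal::algorithms::neural_networks;
    using namespace daal::data_management;
    using namespace daal::services;

    int main()
    {
        /* Homogeneous float tensor, matching the layer's algorithmFPType
           (float here): the first condition above */
        float raw[12];
        for (size_t i = 0; i < 12; i++) raw[i] = (i % 2 == 0) ? 1.0f : -1.0f;

        Collection<size_t> dims;
        dims.push_back(4);
        dims.push_back(3);
        SharedPtr<Tensor> inputTensor(new HomogenTensor<float>(dims, raw));

        /* ReLU forward layer; the input tensor is not shared with any other
           layer: the second condition above */
        layers::relu::forward::Batch<float> reluLayer;

        /* Mark the layer as used at the prediction stage (assumed flag):
           the third condition, required for layers marked with (*) */
        reluLayer.parameter.predictionStage = true;

        reluLayer.input.set(layers::forward::data, inputTensor);
        reluLayer.compute();

        /* When all conditions hold, the library may write the value over
           the input memory instead of allocating a separate result tensor */
        SharedPtr<Tensor> value = reluLayer.getResult()->get(layers::forward::value);
        return 0;
    }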