Batch Processing

Layer Input

The backward batch normalization layer accepts the input described below. Pass the Input ID as a parameter to the methods that provide input for your algorithm. For more details, see Algorithms.

inputGradient

Tensor of size n1 x n2 x ... x np that stores the input gradient computed on the preceding layer. This input can be an object of any class derived from Tensor.

inputFromForward

Collection of data needed for the backward batch normalization layer. The collection contains the following elements:

auxData

Tensor of size n1 x n2 x ... x np that stores the input data for the forward batch normalization layer. This input can be an object of any class derived from Tensor.

auxWeights

One-dimensional tensor of size nk that stores the weights for scaling ω^(k) from the forward batch normalization layer. This input can be an object of any class derived from Tensor.

auxMean

One-dimensional tensor of size nk that stores the mini-batch mean computed in the forward step. This input can be an object of any class derived from Tensor.

auxStandardDeviation

One-dimensional tensor of size nk that stores the mini-batch standard deviation computed in the forward step. This input can be an object of any class derived from Tensor.

auxPopulationMean

One-dimensional tensor of size nk that stores the population mean computed in the forward step. This input can be an object of any class derived from Tensor.

auxPopulationVariance

One-dimensional tensor of size nk that stores the population variance computed in the forward step. This input can be an object of any class derived from Tensor.
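
The following C++ sketch shows one way to pass both inputs to the backward layer, taking the inputFromForward collection from the result of the forward batch normalization layer. The class, namespace, and identifier names (Batch, layers::backward::inputGradient, layers::backward::inputFromForward, layers::forward::resultForBackward) are assumed from the DAAL neural networks C++ API and should be checked against the headers of your DAAL version; treat this as an illustrative sketch rather than a verified example.

```cpp
#include <daal.h>

using namespace daal;
using namespace daal::algorithms::neural_networks;

typedef services::SharedPtr<data_management::Tensor> TensorPtr;

/* Sketch: sets the two inputs of the backward batch normalization layer.
   Parameter settings and result retrieval are sketched in the snippets
   under "Layer Parameters" and "Layer Output" below. */
void setBackwardBatchNormalizationInputs(const TensorPtr &data,
                                         const TensorPtr &inputGradientTensor)
{
    /* Forward layer: its result carries the collection of data
       that the backward layer needs (inputFromForward) */
    layers::batch_normalization::forward::Batch<> forwardLayer;
    forwardLayer.input.set(layers::forward::data, data);
    forwardLayer.compute();
    const auto forwardResult = forwardLayer.getResult();

    /* Backward layer: inputGradient is the gradient computed on the layer
       that follows this one; inputFromForward is taken from the forward result */
    layers::batch_normalization::backward::Batch<> backwardLayer;
    backwardLayer.input.set(layers::backward::inputGradient, inputGradientTensor);
    backwardLayer.input.set(layers::backward::inputFromForward,
                            forwardResult->get(layers::forward::resultForBackward));
}
```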

Layer Parameters

For common parameters of neural network layers, see Common Parameters.

In addition to the common parameters, the backward batch normalization layer has the following parameters:

algorithmFPType (default value: float)

The floating-point type that the algorithm uses for intermediate computations. Can be float or double.

method (default value: defaultDense)

Performance-oriented computation method, the only method supported by the layer.

epsilon (default value: 0.00001)

Constant added to the mini-batch variance for numerical stability.

dimension (default value: 1)

Index of the dimension k for which normalization is performed.

propagateGradient (default value: false)

Flag that specifies whether the backward layer propagates the gradient.
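
Continuing the sketch from the Layer Input subsection, the fragment below shows how these parameters might be set before calling compute(). The parameter member names mirror the names listed above and are assumptions about the DAAL C++ API (epsilon and dimension belonging to the batch normalization parameter structure, propagateGradient to the common layer parameters); verify them against your installed headers.

```cpp
/* Continuation of the Layer Input sketch: to be placed after the backward
   layer is constructed and before compute() is called. Member names mirror
   the parameter list above and are assumed, not verified. */
backwardLayer.parameter.epsilon           = 1.0e-5; /* constant added to the mini-batch variance */
backwardLayer.parameter.dimension         = 1;      /* index k of the normalization dimension */
backwardLayer.parameter.propagateGradient = true;   /* request computation of the gradient result */

/* algorithmFPType and method are template arguments of the Batch class, for example
   layers::batch_normalization::backward::Batch<double> selects double precision. */
```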

Layer Output

The backward batch normalization layer calculates the result described below. Pass the Result ID as a parameter to the methods that access the results of your algorithm. For more details, see Algorithms.

gradient

Tensor of size n1 x n2 x ... x np that stores the result z of the backward batch normalization layer. This result can be an object of any class derived from Tensor.

weightsDerivatives

One-dimensional tensor of size nk that stores the result ∂E/∂ω^(k) of the backward batch normalization layer. This result can be an object of any class derived from Tensor.

biasesDerivatives

One-dimensional tensor of size nk that stores the result ∂E/∂β^(k) of the backward batch normalization layer. This result can be an object of any class derived from Tensor.
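
To complete the sketch, the fragment below runs the backward layer and reads the three results. The result identifiers are copied from the list above; depending on the DAAL version the derivative identifiers may be spelled differently in the headers (for example weightDerivatives and biasDerivatives), so treat the exact enum names as assumptions to verify.

```cpp
/* Continuation of the sketch above: run the backward layer and read its results.
   Identifier spellings follow this section; check them against the
   layers::backward result identifiers in your DAAL headers. */
backwardLayer.compute();
const auto backwardResult = backwardLayer.getResult();

const auto gradientTensor = backwardResult->get(layers::backward::gradient);           /* z */
const auto weightsDeriv   = backwardResult->get(layers::backward::weightsDerivatives); /* dE/dw(k) */
const auto biasesDeriv    = backwardResult->get(layers::backward::biasesDerivatives);  /* dE/db(k) */
```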
