Getting Started Guide

Batch Processing

Layer Input

The backward fully-connected layer accepts the input described below. Pass the Input ID as a parameter to the methods that provide input for your algorithm. For more details, see Algorithms.
inputGradient
    Pointer to the tensor of size nk x m that stores the input gradient computed on the preceding layer. This input can be an object of any class derived from Tensor.

inputFromForward
    Collection of data needed for the backward fully-connected layer. This collection can contain objects of any class derived from Tensor. The collection contains the following elements:

    auxData
        Pointer to the tensor of size n1 x ... x nk x ... x np that stores the input data from the forward fully-connected layer. This input can be an object of any class derived from Tensor.

    auxWeights
        Pointer to the tensor of size n1 x ... x n(k-1) x m x n(k+1) x ... x np that stores a set of weights. This input can be an object of any class derived from Tensor.

    auxBiases
        Pointer to the tensor of size m that stores a set of biases. This input can be an object of any class derived from Tensor.

    auxMask
        Pointer to the tensor of size n1 x ... x n(k-1) x m x n(k+1) x ... x np that holds 1 for the weights used in computations and 0 for the weights not used. If no mask is provided, the library uses all the weights.
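To make the tensor sizes above concrete, the following NumPy sketch runs a 2D (one observation, two features) forward pass with a weight mask applied. It illustrates the math only, not the library API; all variable names are illustrative.

```python
import numpy as np

# Illustrative 2D case: one observation with 2 features, m = 1 output.
# Names are hypothetical, not the library API.
x = np.array([[1.0, 2.0]])     # auxData: input saved by the forward layer
w = np.array([[3.0, 4.0]])     # auxWeights: one weight per (output, feature) pair
b = np.array([0.5])            # auxBiases: one bias per output, size m
mask = np.array([[1.0, 0.0]])  # auxMask: excludes the second weight

# The mask zeroes out the weights not used in computations.
y = x @ (w * mask).T + b       # forward result: 1*3 + 2*0 + 0.5 = 3.5
```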

Layer Parameters

For common parameters of neural network layers, see Common Parameters.
In addition to the common parameters, the backward fully-connected layer has the following parameters:
algorithmFPType (default: float)
    The floating-point type that the algorithm uses for intermediate computations. Can be float or double.

method (default: defaultDense)
    Performance-oriented computation method, the only method supported by the layer.

nOutputs (default: not applicable)
    Number of layer outputs m. Required to initialize the algorithm.

propagateGradient (default: false)
    Flag that specifies whether the backward layer propagates the gradient.
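The effect of propagateGradient can be sketched as follows: when the flag is false, the backward step computes only the weight and bias derivatives and skips the (potentially large) gradient with respect to the data. This is a hypothetical NumPy sketch, not the library API.

```python
import numpy as np

def fc_backward(g, x, w, propagate_gradient=False):
    """Hypothetical sketch of a backward fully-connected step.

    g: input gradient from the next layer, shape (n, m)
    x: input data saved by the forward layer, shape (n, d)
    w: weights, shape (m, d)
    """
    result = {
        "weightDerivatives": g.T @ x,      # shape (m, d)
        "biasDerivatives": g.sum(axis=0),  # shape (m,)
    }
    if propagate_gradient:
        # Computed only when the layer is asked to propagate the gradient.
        result["gradient"] = g @ w         # shape (n, d)
    return result

g = np.ones((4, 3))
x = np.ones((4, 2))
w = np.ones((3, 2))
res = fc_backward(g, x, w, propagate_gradient=False)
# res contains no "gradient" entry unless propagate_gradient=True
```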

Layer Output

The backward fully-connected layer calculates the result described below. Pass the Result ID as a parameter to the methods that access the results of your algorithm. For more details, see Algorithms.
gradient
    Pointer to the tensor of size n1 x ... x nk x ... x np that stores the result of the backward fully-connected layer. This result can be an object of any class derived from Tensor.

weightDerivatives
    Pointer to the tensor of size n1 x ... x n(k-1) x m x n(k+1) x ... x np that stores the result ∂E/∂w(i1 ... j ... ip) of the backward fully-connected layer, where j = {1, ..., m}. This result can be an object of any class derived from Tensor.

biasDerivatives
    Pointer to the tensor of size m that stores the result ∂E/∂b(j) of the backward fully-connected layer, where j = {1, ..., m}. This result can be an object of any class derived from Tensor.
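For a concrete 2D case, the three results reduce to familiar matrix expressions. The sketch below checks them on small hand-computed values; it is a NumPy illustration of the math only, not the library API.

```python
import numpy as np

# 2D case: n = 2 observations, d = 2 features, m = 3 outputs.
x = np.array([[1.0, 2.0],
              [3.0, 4.0]])       # data saved by the forward layer, (n, d)
w = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])       # weights, (m, d)
g = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])  # inputGradient from the next layer, (n, m)

gradient = g @ w                 # dE/dx -> [[2, 1], [1, 2]]
weight_derivatives = g.T @ x     # dE/dw -> [[1, 2], [3, 4], [4, 6]]
bias_derivatives = g.sum(axis=0) # dE/db -> [1, 1, 2]
```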

Product and Performance Information

1. Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804