Developer Guide

Training

Algorithm Input

Neural network training in the batch processing mode accepts the input described below. Pass the Input ID as a parameter to the methods that provide input for your algorithm. For more details, see Algorithms.
data
    Pointer to the tensor of size n1 x n2 x ... x np that stores the neural network input data. This input can be an object of any class derived from Tensor.
groundTruth
    Pointer to the tensor of size n1 that stores the expected results (ground truth) associated with the input data. This input can be an object of any class derived from Tensor.

Algorithm Parameters

Neural network training in the batch processing mode has the following parameters:
algorithmFPType
    Default value: float
    The floating-point type that the algorithm uses for intermediate computations. Can be float or double.
method
    Default value: defaultDense
    Performance-oriented computation method.
batchSize
    Default value: 1
    The number of samples simultaneously used for training. Because the first dimension of the input data tensor represents the data samples, the library computes the number of batches by dividing n1 by the value of batchSize. After processing each batch, the library updates the parameters of the model. If n1 is not a multiple of batchSize, the algorithm ignores the data samples at the end of the data tensor.
optimizationSolver
    Default value: SharedPtr<optimization_solver::sgd::Batch<algorithmFPType, defaultDense> >
    The optimization procedure used at the training stage.
engine
    Default value: SharedPtr<engines::mt19937::Batch>()
    Pointer to the engine used by the neural network in computations. During model initialization, the neural network sets this engine on each layer of the topology whose engine is not yet set.

Algorithm Output

Neural network training in the batch processing mode calculates the result described below. Pass the Result ID as a parameter to the methods that access the results of your algorithm. For more details, see Algorithms.
model
    Trained model with the optimal set of weights and biases. The result can only be an object of the Model class.

Product and Performance Information

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804