Stochastic Average Gradient Accelerated Method

The Stochastic Average Gradient Accelerated (SAGA) method [Defazio2014] follows the algorithmic framework of an iterative solver with one exception. The default method (defaultDense) of the SAGA algorithm is a particular case of the iterative solver method with the batch size \(b = 1\).
Algorithm-specific transformation \(T\), the set of intrinsic parameters \(S_t\) defined for the learning rate \(\eta\), and the algorithm-specific vector \(U\) and power \(d\) of the Lebesgue space are defined as follows:

\(S_t = \{G_t\}\) - a matrix of the gradients of the smooth terms at point \(\theta_t\), where

  • \(t\) is defined by the number of iterations the solver runs
  • \(G_t^{i}\) stores a gradient of \(f_i(\theta_t)\)

Transformation \(T(\theta_{t-1}, F'_j(\theta_{t-1}), S_{t-1})\) for a term index \(j\) chosen at random on the current iteration:

\[
W_t = \theta_{t-1} - \eta \left( F'_j(\theta_{t-1}) - G_{t-1}^{j} + \frac{1}{n}\sum_{i=1}^{n} G_{t-1}^{i} \right), \qquad
\theta_t = \operatorname{prox}_{\eta}^{M}(W_t),
\]

where \(\operatorname{prox}_{\eta}^{M}\) is the proximal operator of the non-smooth term \(M\) of the objective function.

Update of the set of intrinsic parameters \(S_t\):

\[
G_t^{j} := F'_j(\theta_{t-1})
\]
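As an illustration of the update above, the following is a minimal standalone C++ sketch of a single SAGA iteration for a purely smooth objective (the proximal step is taken as the identity). All names here (sagaStep, gradTable, gradSum, and so on) are illustrative and are not part of the library API.

    #include <cstddef>
    #include <vector>

    // One SAGA iteration for a smooth objective (proximal operator taken as identity).
    // theta     - current iterate theta_{t-1}, updated in place to theta_t
    // gradTable - stored per-term gradients G^i (n rows, one per smooth term)
    // gradSum   - running sum of the rows of gradTable
    // gradFj    - freshly computed gradient F'_j(theta_{t-1}) of the sampled term j
    // eta       - learning rate
    void sagaStep(std::vector<double> &theta,
                  std::vector<std::vector<double>> &gradTable,
                  std::vector<double> &gradSum,
                  const std::vector<double> &gradFj,
                  std::size_t j, double eta)
    {
        const std::size_t n = gradTable.size();
        const std::size_t p = theta.size();
        for (std::size_t k = 0; k < p; ++k)
        {
            // W_t = theta_{t-1} - eta * (F'_j - G^j_{t-1} + (1/n) * sum_i G^i_{t-1})
            theta[k] -= eta * (gradFj[k] - gradTable[j][k] + gradSum[k] / n);
        }
        // Update of the intrinsic parameters: G^j_t := F'_j(theta_{t-1})
        for (std::size_t k = 0; k < p; ++k)
        {
            gradSum[k] += gradFj[k] - gradTable[j][k];
            gradTable[j][k] = gradFj[k];
        }
    }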
The algorithm enables automatic step-length selection if the learning rate \(\eta\) was not provided by the user. The automatic step length is computed as \(\eta = 1/L\), where \(L\) is the Lipschitz constant returned by the objective function. If the objective function returns nullptr for the numeric table with the lipschitzConstant result ID, the library uses the default step size 0.01.
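A minimal sketch of this step-length selection logic, assuming the automatic step is taken as \(1/L\) as described above; selectStepLength and its pointer argument are illustrative placeholders, not library functions:

    // Illustrative step-length selection: use 1/L when the objective function
    // provides a Lipschitz constant, otherwise fall back to the default 0.01.
    // The pointer argument stands in for the (possibly absent) numeric table
    // stored under the lipschitzConstant result ID.
    double selectStepLength(const double *lipschitzConstant)
    {
        const double defaultStep = 0.01;
        if (lipschitzConstant == nullptr || *lipschitzConstant <= 0.0)
        {
            return defaultStep;            // no Lipschitz constant provided
        }
        return 1.0 / (*lipschitzConstant); // automatic step length eta = 1/L
    }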
Convergence check:

  • \( |U|_d < \varepsilon \), where \( U = \theta_t - \theta_{t-1} \)
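For illustration, a convergence test of this form can be written as below, using the infinity norm as an example choice of the power \(d\); the function name and signature are assumptions, not library API:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Illustrative convergence test: compares the chosen norm of
    // (theta_t - theta_{t-1}) against the accuracy threshold epsilon.
    // The infinity norm is used here purely as an example choice of d.
    bool converged(const std::vector<double> &thetaNew,
                   const std::vector<double> &thetaOld,
                   double epsilon)
    {
        double maxDiff = 0.0;
        for (std::size_t k = 0; k < thetaNew.size(); ++k)
        {
            maxDiff = std::max(maxDiff, std::fabs(thetaNew[k] - thetaOld[k]));
        }
        return maxDiff < epsilon;
    }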
