Iterative Solver
The iterative solver provides an iterative method to minimize an objective function that can be represented as a sum of functions in composite form

\theta^{*} = \arg\min_{\theta \in \mathbb{R}^{p}} K(\theta) = \arg\min_{\theta \in \mathbb{R}^{p}} \bigl( F(\theta) + M(\theta) \bigr)

where:
- F(\theta) = \sum_{i=1}^{n} F_i(\theta), \theta \in \mathbb{R}^{p}, where F_i(\theta) \colon \mathbb{R}^{p} \to \mathbb{R}, i = 1, \ldots, n, is a convex, continuously differentiable (smooth) function
- M(\theta) \colon \mathbb{R}^{p} \to \mathbb{R} is a convex, non-differentiable (non-smooth) function
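As an illustration (this particular objective is chosen here for exposition, not prescribed by the framework), L1-regularized least squares fits this composite form:

F_i(\theta) = \tfrac{1}{2} \bigl( y_i - x_i^{T} \theta \bigr)^{2}, \qquad M(\theta) = \lambda \lVert \theta \rVert_{1}

Each squared-error term F_i is smooth, while the L1 penalty M is convex but not differentiable at zero.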
The Algorithmic Framework of an Iterative Solver
All solvers presented in the library follow a common algorithmic framework.
Let S_t be a set of intrinsic parameters of the iterative solver for updating the argument of the objective function. This set is algorithm-specific and can be empty. The solver determines the choice of S_0.
To do the computations, iterate t from 1 until the maximum number of iterations is reached (a Python sketch of one concrete instance of this loop follows the list):
- Choose a set of indices without replacement I = \{ i_1, \ldots, i_b \}, 1 \le i_j \le n, j = 1, \ldots, b, where b is the batch size.
- Compute the gradient g(\theta_{t-1}) = \nabla F_I(\theta_{t-1}), where F_I(\theta_{t-1}) = \sum_{i \in I} F_i(\theta_{t-1}).
- Convergence check: stop if \lVert U \rVert_{d} < \epsilon, where U is an algorithm-specific vector (argument or gradient) and d is an algorithm-specific power of Lebesgue space.
- Compute \theta_t using the algorithm-specific transformation T that updates the function’s argument: \theta_t = T(\theta_{t-1}, g(\theta_{t-1}), S_{t-1}).
- Update S_t = U(S_{t-1}), where U is an algorithm-specific update of the set of intrinsic parameters.
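The Python sketch below is a minimal illustration of this framework under one concrete set of assumptions: the algorithm-specific transformation T is a constant-step gradient update, the set of intrinsic parameters S is empty, and the convergence check uses the Euclidean norm of the gradient (d = 2). The function and parameter names are illustrative, not the library's API.

import numpy as np

def iterative_solver(grad_batch, theta0, n, n_iterations=100, batch_size=1,
                     learning_rate=0.01, eps=1e-6, seed=None):
    """Sketch of the common framework with a plain gradient-descent step as T.

    grad_batch(theta, indices) must return the gradient of
    sum_{i in indices} F_i(theta).
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for t in range(1, n_iterations + 1):
        # Choose a set of b indices without replacement.
        indices = rng.choice(n, size=batch_size, replace=False)
        # Compute the gradient of the sampled terms.
        g = grad_batch(theta, indices)
        # Convergence check: here U is the gradient and d = 2.
        if np.linalg.norm(g) < eps:
            break
        # Algorithm-specific transformation T: a constant-step gradient update.
        theta = theta - learning_rate * g
        # The intrinsic parameter set S is empty for this choice of T,
        # so there is nothing to update here.
    return theta

Other solvers differ only in their choice of T and of the intrinsic parameters S (for example, momentum terms or per-coordinate step sizes), which is exactly what the common framework abstracts away.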
The result of the solver is the argument \theta_t and a set of parameters S_t after the exit from the loop.
You can resume the computations to get a more precise estimate of the objective function minimum.
To do this, pass to the algorithm the results \theta_t and S_t of the previous run of the optimization solver.
By default, the solver does not return the set of intrinsic parameters.
If you need it, set the optionalResultRequired flag for the algorithm.
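Continuing the sketch above (again an illustration, not the library's API; in the library itself, resuming is driven by passing the previous results back to the algorithm), warm-starting amounts to feeding the argument produced by one run in as the starting point of the next:

import numpy as np

# Illustrative data: least squares terms F_i(theta) = 0.5 * (y_i - x_i^T theta)^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.01 * rng.normal(size=1000)

def grad_batch(theta, indices):
    # Gradient of the sum of the sampled squared-error terms.
    return X[indices].T @ (X[indices] @ theta - y[indices])

# First run: a coarse estimate of the minimum.
theta_first = iterative_solver(grad_batch, np.zeros(5), n=1000,
                               n_iterations=50, batch_size=32,
                               learning_rate=0.01, seed=1)

# Resumed run: pass the previous result as the new starting argument
# to obtain a more precise estimate of the objective function minimum.
theta_resumed = iterative_solver(grad_batch, theta_first, n=1000,
                                 n_iterations=500, batch_size=32,
                                 learning_rate=0.01, seed=2)

Because the sketch's intrinsic parameter set is empty, only the argument is carried over; a solver with a non-empty S_t would pass that state along as well.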