The iterative solver provides an iterative method to minimize an objective function that can be represented as a sum of functions in composite form:

\theta^{*} = \mathrm{argmin}_{\theta \in R^p} K(\theta) = \mathrm{argmin}_{\theta \in R^p} \left( F(\theta) + M(\theta) \right)

where F(\theta) = \sum_{i=1}^{n} F_i(\theta), \theta \in R^p, and each F_i(\theta): R^p \to R is a convex, continuously differentiable (smooth) function, i = 1, \ldots, n,
and M(\theta): R^p \to R is a convex, non-differentiable (non-smooth) function.
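As a concrete illustration (not part of the library itself), the lasso objective is one instance of this composite form: the smooth part F(\theta) is a sum of per-sample squared-error terms, and the non-smooth part M(\theta) is an L1 penalty. A minimal sketch, with all names hypothetical:

```python
import numpy as np

def smooth_part(theta, X, y):
    """F(theta) = sum_i 0.5 * (x_i @ theta - y_i)**2: convex and differentiable."""
    residuals = X @ theta - y
    return 0.5 * np.sum(residuals ** 2)

def nonsmooth_part(theta, lam):
    """M(theta) = lam * ||theta||_1: convex but non-differentiable at zero."""
    return lam * np.sum(np.abs(theta))

def objective(theta, X, y, lam):
    """Composite objective K(theta) = F(theta) + M(theta)."""
    return smooth_part(theta, X, y) + nonsmooth_part(theta, lam)
```

Each F_i here is the squared error on one sample, so F decomposes into a sum of n smooth terms, which is exactly what lets the solver work on batches of indices.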
The Algorithmic Framework of an Iterative Solver
All solvers presented in the library follow a common algorithmic framework. Let S_t be the set of intrinsic parameters of the iterative solver used for updating the argument of the objective function. This set is algorithm-specific and can be empty. The solver determines the choice of S_0.
To do the computations, iterate t from 1 until nIterations:
1. Choose a set of indices without replacement: I = \{i_1, \ldots, i_b\}, 1 \le i_j \le n, where b is the batch size.
2. Compute the gradient g(\theta_{t-1}) = \nabla F_I(\theta_{t-1}), where F_I(\theta_{t-1}) = \sum_{i \in I} F_i(\theta_{t-1}).
3. Stop if \|U_t\|_d < \epsilon, where U_t is an algorithm-specific vector (argument or gradient) and d is an algorithm-specific power of Lebesgue space.
4. Compute \theta_t = T(\theta_{t-1}, g(\theta_{t-1}), S_{t-1}), where T is an algorithm-specific transformation that updates the function argument.
5. Update S_t: S_t = U(S_{t-1}), where U is an algorithm-specific update of the set of intrinsic parameters.
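The framework above can be sketched in a few lines of Python. This is a hypothetical illustration, not the library's implementation: it instantiates the algorithm-specific transformation T as a proximal stochastic gradient step with a fixed learning rate, uses the Euclidean norm of the gradient for the stopping check (d = 2), and takes the intrinsic-parameter set S to be empty. The names `grad_i` and `prox` are assumptions introduced for the sketch:

```python
import numpy as np

def solve(grad_i, prox, theta0, n, b, n_iterations, eps, lr=0.01, rng=None):
    """Generic iterative-solver loop sketch (stochastic proximal gradient).

    grad_i(theta, I) -- gradient of F_I = sum_{i in I} F_i at theta
    prox(theta, lr)  -- proximal operator handling the non-smooth term M
    Here T(theta, g, S) = prox(theta - lr * g, lr) and S stays empty.
    """
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    for t in range(1, n_iterations + 1):
        # 1. Choose a batch of b indices without replacement.
        I = rng.choice(n, size=b, replace=False)
        # 2. Compute the gradient of F_I at the current argument.
        g = grad_i(theta, I)
        # 3. Convergence check: stop if ||g||_2 < eps.
        if np.linalg.norm(g) < eps:
            break
        # 4. Apply the transformation T to update the argument.
        theta = prox(theta - lr * g, lr)
        # 5. Update intrinsic parameters S (empty for this sketch).
    return theta
```

With `prox` set to the identity (M = 0) and a full batch (b = n), this reduces to plain gradient descent; other solvers in the framework differ only in their choice of T, U, and S.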
The result of the solver is the argument \theta_t and the set of intrinsic parameters S_t at the exit from the loop.
You can resume the computations to get a more precise estimate of the objective function minimum. To do this, pass to the algorithm the argument \theta_t and the set S_t from the previous run of the optimization solver. By default, the solver does not return the set of intrinsic parameters. If you need it, set the optionalResultRequired flag for the algorithm.
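The resume behavior can be demonstrated with a toy stand-in solver (a hypothetical sketch, not the library's API): because each iteration depends only on the current argument (and intrinsic parameters), stopping after k iterations and restarting from the returned state reproduces the trajectory of one uninterrupted run.

```python
# Stand-in solver: plain gradient descent on f(theta) = 0.5 * theta**2,
# whose gradient is theta. The intrinsic-parameter set is empty here,
# so the argument alone captures the full solver state.
def run(theta, n_iterations, lr=0.1):
    for _ in range(n_iterations):
        theta = theta - lr * theta  # gradient step
    return theta

full = run(10.0, 20)             # one uninterrupted 20-iteration run
resumed = run(run(10.0, 12), 8)  # stop after 12 iterations, resume for 8 more
```

For solvers whose S_t is non-empty, the saved intrinsic parameters must be passed back in as well, which is why the library can be asked to return them.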