# Iterative Solver

The iterative solver provides an iterative method to minimize an objective function that can be represented as a sum of functions in composite form:

$$\theta_* = \mathop{\mathrm{argmin}}_{\theta \in \mathbb{R}^p} K(\theta) = \mathop{\mathrm{argmin}}_{\theta \in \mathbb{R}^p} \left( F(\theta) + M(\theta) \right)$$

where:

- $F(\theta) = \sum_{i=1}^{n} F_i(\theta)$, $\theta \in \mathbb{R}^p$, where each $F_i(\theta)\colon \mathbb{R}^p \to \mathbb{R}$ is a convex, continuously differentiable (smooth) function, $i = 1, \ldots, n$

- $M(\theta)\colon \mathbb{R}^p \to \mathbb{R}$ is a convex, non-differentiable (non-smooth) function
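As a concrete illustration of this composite form (not part of the library), the lasso objective splits naturally into a smooth least-squares term $F$ and a non-smooth $L_1$ penalty $M$. All function names below are hypothetical:

```python
import numpy as np

def smooth_part(theta, X, y):
    # F(theta) = sum_i F_i(theta), where F_i is the squared residual of sample i
    return 0.5 * np.sum((X @ theta - y) ** 2)

def nonsmooth_part(theta, lam):
    # M(theta) = lam * ||theta||_1: convex but not differentiable at zero
    return lam * np.sum(np.abs(theta))

def objective(theta, X, y, lam):
    # K(theta) = F(theta) + M(theta)
    return smooth_part(theta, X, y) + nonsmooth_part(theta, lam)

X = np.array([[1.0, 0.0], [0.0, 2.0]])
y = np.array([1.0, 2.0])
theta = np.array([1.0, 1.0])
print(objective(theta, X, y, lam=0.1))  # residuals are zero here, so K = 0.1 * ||theta||_1 = 0.2
```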

## The Algorithmic Framework of an Iterative Solver

All solvers presented in the library follow a common algorithmic framework. Let $S_t$ be a set of intrinsic parameters of the iterative solver for updating the argument of the objective function. This set is algorithm-specific and can be empty. The solver determines the choice of $S_0$.

To do the computations, iterate $t$ from 1 until `nIterations`:

1. Choose a set of indices without replacement $I = \{i_1, \ldots, i_b\}$, $1 \le i_j \le n$, where $b$ is the batch size.

2. Compute the gradient $g(\theta_{t-1}) = \nabla F_I(\theta_{t-1})$, where $F_I(\theta_{t-1}) = \sum_{i \in I} F_i(\theta_{t-1})$.

3. Convergence check: stop if $\|U\|_d < \epsilon$, where $U$ is an algorithm-specific vector (argument or gradient) and $d$ is an algorithm-specific power of Lebesgue space.

4. Compute $\theta_t = T(\theta_{t-1}, g(\theta_{t-1}), S_{t-1})$, where $T$ is an algorithm-specific transformation that updates the function argument.

5. Update $S_t$: $S_t = U(S_{t-1})$, where $U$ is an algorithm-specific update of the set of intrinsic parameters.

The result of the solver is the argument $\theta_t$ and the set of intrinsic parameters $S_t$ after the exit from the loop.
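The five steps above can be sketched with mini-batch proximal gradient descent as one algorithm-specific instance: here $T$ is a gradient step followed by soft-thresholding (the proximal operator of the $L_1$ term), $S_t$ holds only a constant step size, and the convergence check uses the $L_2$ norm of the gradient. This is an illustrative sketch, not the library's API; all names are hypothetical:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def solve(X, y, lam, theta0, n_iterations=100, batch=2, eps=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    S = {"step": 0.1}                       # S_0: intrinsic parameters (step size)
    for t in range(1, n_iterations + 1):
        # 1. Choose a set of b indices without replacement
        I = rng.choice(len(y), size=batch, replace=False)
        # 2. Compute the gradient of F_I at theta_{t-1}
        g = X[I].T @ (X[I] @ theta - y[I])
        # 3. Convergence check on an algorithm-specific vector (here the gradient)
        if np.linalg.norm(g) < eps:
            break
        # 4. theta_t = T(theta_{t-1}, g, S_{t-1}): gradient step, then prox of M
        theta = soft_threshold(theta - S["step"] * g, S["step"] * lam)
        # 5. S_t = U(S_{t-1}): the intrinsic parameters are unchanged here
    return theta, S

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = X @ np.array([2.0, -1.0])
theta, S = solve(X, y, lam=0.0, theta0=np.zeros(2), n_iterations=500, batch=3)
print(np.round(theta, 3))
```

With `batch` equal to the number of samples, each iteration uses the full gradient, so the sketch reduces to deterministic proximal gradient descent and recovers the generating coefficients $(2, -1)$.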

### Note

You can resume the computations to get a more precise estimate of the objective function minimum. To do this, pass to the algorithm the results $\theta_t$ and $S_t$ of the previous run of the optimization solver. By default, the solver does not return the set of intrinsic parameters. If you need it, set the `optionalResultRequired` flag for the algorithm.
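The resume behavior can be sketched generically: a solver that accepts the previous run's argument and intrinsic parameters as its new starting point continues exactly where it left off. The sketch below uses plain gradient descent on a one-dimensional quadratic; the names are illustrative, not the library's API:

```python
import numpy as np

def gradient_descent(grad, theta0, S0, n_iterations):
    # S holds the intrinsic parameters (here just the step size);
    # returning it alongside theta lets a later run resume from this state.
    theta, S = theta0.copy(), dict(S0)
    for _ in range(n_iterations):
        theta = theta - S["step"] * grad(theta)
    return theta, S

grad = lambda th: 2.0 * (th - 3.0)          # gradient of (theta - 3)^2

# One run of 100 iterations...
full, S_full = gradient_descent(grad, np.array([0.0]), {"step": 0.1}, 100)

# ...matches two resumed runs of 50: pass theta and S from the first run.
half, S_half = gradient_descent(grad, np.array([0.0]), {"step": 0.1}, 50)
resumed, _ = gradient_descent(grad, half, S_half, 50)

print(np.allclose(full, resumed))  # prints True
```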
