optimisation routines

mido h.

I commonly work with problems where the maximum of a non-linear, user-defined function F(x) must be found, where x denotes either a scalar or a vector of inputs and no analytical description of the gradient exists. I usually write my own routines for this purpose, but I was wondering whether a better option is available through MKL. I have not been able to find anything beyond the "non-linear least squares" procedure, which requires the problem to be twice differentiable. Any advice would be greatly appreciated.
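For context, here is a minimal sketch (in C) of the kind of hand-rolled, derivative-free routine described above, using a simple compass/pattern search. The objective, starting point, and tolerances are all placeholders chosen for illustration; this is not MKL code.

```c
/*
 * Compass-search maximiser: probe +/- step along each coordinate,
 * move to any improvement, halve the step when no probe improves.
 * No gradient information is required.
 */
#include <stdio.h>
#include <string.h>

#define N 2               /* dimension of x */

/* Placeholder objective with a maximum at (1, 2). */
static double f(const double *x)
{
    double dx = x[0] - 1.0, dy = x[1] - 2.0;
    return -(dx * dx + dy * dy);
}

static void compass_maximise(double *x, double step, double tol, int max_iter)
{
    double fx = f(x);
    for (int it = 0; it < max_iter && step > tol; ++it) {
        int improved = 0;
        for (int i = 0; i < N; ++i) {
            for (int s = -1; s <= 1; s += 2) {
                double trial[N];
                memcpy(trial, x, sizeof trial);
                trial[i] += s * step;
                double ft = f(trial);
                if (ft > fx) {          /* accept any improving probe */
                    memcpy(x, trial, sizeof trial);
                    fx = ft;
                    improved = 1;
                }
            }
        }
        if (!improved)
            step *= 0.5;                /* refine the mesh */
    }
}

int main(void)
{
    double x[N] = {0.0, 0.0};           /* starting guess */
    compass_maximise(x, 1.0, 1e-8, 10000);
    printf("x* ~ (%g, %g), f(x*) = %g\n", x[0], x[1], f(x));
    return 0;
}
```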

mecej4

A common class of nonlinear least squares problems has the property that the final value of the sum of squares of the residual functions is "small" in some relevant sense; that is, the data being fitted are correctly modelled by the set of functions being used.
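In symbols (notation assumed here for concreteness): the solver minimises r(x) = f_1(x)^2 + ... + f_m(x)^2, and a "small" final r(x*) means each residual f_i(x*) is close to zero, i.e. the fitted model reproduces the data.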

For nonlinear optimization where the final function norm is not "small", the requirement of second-order differentiability is not absolute, because the algorithms gradually build up finite-difference or quasi-Newton approximations to the derivatives as they iterate.
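To illustrate that mechanism, here is a hedged C sketch of the forward-difference Jacobian approximation such solvers construct internally in place of analytic derivatives (MKL ships a helper routine, djacobi, for this purpose; the sketch below is generic, not MKL's API). The residual function and step size are placeholders.

```c
/*
 * Forward-difference Jacobian: J[i][j] ~ (f_i(x + h e_j) - f_i(x)) / h.
 * m residuals, n variables; no analytic derivatives needed.
 */
#include <stdio.h>

#define M 3   /* number of residuals */
#define NV 2  /* number of variables */

/* Placeholder residuals f_i(x); any user model slots in here. */
static void fvec(const double *x, double *fx)
{
    fx[0] = x[0] + 2.0 * x[1] - 7.0;
    fx[1] = 2.0 * x[0] + x[1] - 5.0;
    fx[2] = x[0] * x[1] - 6.0;
}

static void fd_jacobian(double *x, double jac[M][NV], double h)
{
    double f0[M], f1[M];
    fvec(x, f0);
    for (int j = 0; j < NV; ++j) {
        double saved = x[j];
        x[j] = saved + h;               /* perturb one coordinate */
        fvec(x, f1);
        x[j] = saved;                   /* restore it */
        for (int i = 0; i < M; ++i)
            jac[i][j] = (f1[i] - f0[i]) / h;
    }
}

int main(void)
{
    double x[NV] = {1.0, 2.0};
    double jac[M][NV];
    fd_jacobian(x, jac, 1e-7);
    for (int i = 0; i < M; ++i)
        printf("row %d: %10.6f %10.6f\n", i, jac[i][0], jac[i][1]);
    return 0;
}
```

A trust-region or Gauss-Newton iteration then uses this approximate Jacobian in place of exact derivatives, which is why twice-differentiability is a theoretical convenience rather than a hard prerequisite in practice.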

See Prof. Mittelmann's Web page at http://plato.asu.edu/sub/nlounres.html for useful links.
