optimisation routines

mido h.

I commonly work with problems in which a maximum must be found for a non-linear user-defined function F(x), where x denotes either a scalar or a vector of function inputs and no analytical description of the gradient exists. I usually write my own routines for this purpose, but was wondering whether a better option is available through the MKL. I have not been able to find any reference beyond the "non-linear least squares" procedure, which requires the problem to be twice differentiable. Any advice would be greatly appreciated.

mecej4

A common class of nonlinear least-squares problems has the property that the final value of the sum of squares of the functions is "small" in some relevant sense; that is, the data being "fitted" are correctly modelled by the chosen set of functions.
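
The reason "small" matters can be seen from the standard splitting of the Hessian of a sum of squares, sketched below in LaTeX notation, with J denoting the Jacobian of the residuals f_i:

    \Phi(x) = \tfrac{1}{2}\sum_i f_i(x)^2, \qquad
    \nabla\Phi(x) = J(x)^{\mathsf T} f(x), \qquad
    \nabla^2\Phi(x) = J(x)^{\mathsf T} J(x) + \sum_i f_i(x)\,\nabla^2 f_i(x),
    \qquad J_{ij} = \partial f_i / \partial x_j .

When the f_i are small near the solution, the second term of the Hessian is negligible, and J^T J, which needs first derivatives only, is an adequate Hessian model. This is why Gauss-Newton-type least-squares codes do not ask the user for second derivatives.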

For nonlinear optimization where the final function norm is not "small", the requirement of second-order differentiability is not absolute, because the computational algorithms gradually build up approximations to the derivatives (by finite differences or quasi-Newton updates, for example).
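
To illustrate, the sketch below shows what a forward-difference gradient approximation looks like in C. It is illustrative only: the negated Rosenbrock objective, the relative step size, and the 16-variable buffer are my own assumptions, not MKL code. MKL does ship a central-difference Jacobian routine, djacobi, which can be paired with its trust-region solvers (dtrnlsp_init / dtrnlsp_solve) in the same spirit.

    #include <math.h>
    #include <stdio.h>

    /* Forward-difference approximation of grad f at x.  The relative
       step h ~ 1e-7 * (|x_i| + 1) is a conventional choice near
       sqrt(machine epsilon); solvers that lack an analytical gradient
       build up derivative information in essentially this way. */
    static void fd_gradient(double (*f)(const double *, int),
                            const double *x, int n, double *grad)
    {
        double xp[16];                      /* sketch assumes n <= 16 */
        for (int i = 0; i < n; ++i)
            xp[i] = x[i];
        double f0 = f(x, n);
        for (int i = 0; i < n; ++i) {
            double h = 1e-7 * (fabs(x[i]) + 1.0);
            xp[i] = x[i] + h;
            grad[i] = (f(xp, n) - f0) / h;  /* forward difference */
            xp[i] = x[i];                   /* restore coordinate i */
        }
    }

    /* Example objective: negated Rosenbrock, so that *maximising* F,
       as in the original question, is minimising -F. */
    static double F(const double *x, int n)
    {
        (void)n;
        double a = 1.0 - x[0];
        double b = x[1] - x[0] * x[0];
        return -(a * a + 100.0 * b * b);
    }

    int main(void)
    {
        double x[2] = { -1.2, 1.0 }, g[2];
        fd_gradient(F, x, 2, g);
        printf("grad F at (%g, %g) = (%g, %g)\n", x[0], x[1], g[0], g[1]);
        return 0;
    }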

See Prof. Mittelmann's Web page at http://plato.asu.edu/sub/nlounres.html for useful links.
