optimisation routines

I commonly work with problems in which a maximum of a non-linear, user-defined function F(x) must be found, where x denotes either a scalar or a vector of inputs and no analytical expression for the gradient exists.  I usually write my own routines for this purpose, but I was wondering whether a better option is available through the MKL.  I have not been able to find any reference beyond the "non-linear least squares" procedure, which requires the problem to be twice differentiable.  Any advice would be greatly appreciated,
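
(For concreteness: since most library routines minimize rather than maximize, I pose this as the equivalent problem min_x [-F(x)], which has the same set of solutions.)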

Justin.

Many optimization algorithms state, as part of their description, a requirement that the function be twice differentiable, but that does not imply that you have to provide code to actually compute those derivatives. Often, the Hessian is built up iteratively and updated through some approximation scheme, the success of which relies on the existence of the derivatives.
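
For example, BFGS-type quasi-Newton methods maintain an approximation B_k to the Hessian and update it using only gradient differences between successive iterates; this is the standard BFGS update, in which second derivatives never appear explicitly even though the convergence theory assumes they exist:

$$B_{k+1} = B_k + \frac{y_k y_k^T}{y_k^T s_k} - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k}, \qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k).$$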

See Prof. Mittelmann's http://plato.asu.edu/sub/nlounres.html for links to and information on software of the type you asked about.

In a post dated 03/02/2010, ArturGuzik noted:

In general, all optimization algorithms provided within MKL are gradient methods, which means that you need to calculate the derivative (gradient) in order to obtain the next step and, finally, the solution.

If you mean that you want MKL to minimize a function using only its values (without needing to know the function gradient, which I guess is what you had in mind), then the answer is no.
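
(As I understand it, "calculating the derivative" there could in principle be done numerically, e.g. a forward-difference approximation dF/dx_i ≈ [F(x + h·e_i) − F(x)]/h for a small step h, at the cost of one extra function evaluation per component of x, but that is something the caller would have to code rather than something the MKL routines do for a general objective.)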

Do you know whether this is still the case?

There are no nonlinear optimization algorithms in MKL other than those in the NLS (nonlinear least squares) solvers. For the latter, there is a provision for computing the Jacobian using finite-difference approximations, so the user need only provide code to evaluate the vector function whose norm is to be minimized. However, if analytical derivative information is available and the derivatives can be evaluated conveniently, that works better. See the example ex_nlsqp_f.f.
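
If it helps, the RCI loop in that example boils down to the following condensed sketch. I am assuming the dtrnlsp_* and djacobi interfaces declared in mkl_rci.fi; the program name, the residual subroutine MY_RESIDUALS, and all dimensions, tolerances, and step sizes below are illustrative placeholders rather than the actual ex_nlsqp_f.f problem:

!     A condensed sketch in the spirit of ex_nlsqp_f.f.  The names
!     NLS_FD_SKETCH and MY_RESIDUALS, the dimensions, and all
!     tolerance values are illustrative placeholders.
      PROGRAM NLS_FD_SKETCH
      IMPLICIT NONE
      INCLUDE 'mkl_rci.fi'
      INTEGER N, M
      PARAMETER (N = 2, M = 3)
      INTEGER*8 HANDLE
      INTEGER RCI_REQUEST, ITER1, ITER2
      DOUBLE PRECISION X(N), FVEC(M), FJAC(M,N)
      DOUBLE PRECISION EPS(6), RS, JAC_EPS
      LOGICAL DONE
      EXTERNAL MY_RESIDUALS
!     Stopping tolerances, iteration limits, initial step bound and
!     finite-difference step: illustrative values only.
      EPS = 1.0D-8
      ITER1 = 1000
      ITER2 = 100
      RS = 100.0D0
      JAC_EPS = 1.0D-8
      X = 1.0D0
      FVEC = 0.0D0
      FJAC = 0.0D0
      IF (DTRNLSP_INIT (HANDLE, N, M, X, EPS, ITER1, ITER2, RS)
     &    .NE. TR_SUCCESS) STOP 'dtrnlsp_init failed'
      RCI_REQUEST = 0
      DONE = .FALSE.
      DO WHILE (.NOT. DONE)
          IF (DTRNLSP_SOLVE (HANDLE, FVEC, FJAC, RCI_REQUEST)
     &        .NE. TR_SUCCESS) STOP 'dtrnlsp_solve failed'
          IF (RCI_REQUEST .EQ. 1) THEN
!             Solver requests the residual vector at the current X.
              CALL MY_RESIDUALS (M, N, X, FVEC)
          ELSE IF (RCI_REQUEST .EQ. 2) THEN
!             Solver requests the Jacobian; DJACOBI approximates it
!             by central differences, so no derivative code is needed.
              IF (DJACOBI (MY_RESIDUALS, N, M, FJAC, X, JAC_EPS)
     &            .NE. TR_SUCCESS) STOP 'djacobi failed'
          ELSE IF (RCI_REQUEST .LT. 0) THEN
!             Negative request codes signal a stopping criterion.
              DONE = .TRUE.
          END IF
      END DO
      IF (DTRNLSP_DELETE (HANDLE) .NE. TR_SUCCESS)
     &    STOP 'dtrnlsp_delete failed'
      CALL MKL_FREE_BUFFERS
      PRINT *, 'solution: ', X
      END

!     Placeholder residuals: the M components whose 2-norm the solver
!     minimizes.  Replace with the actual vector function.
      SUBROUTINE MY_RESIDUALS (M, N, X, F)
      IMPLICIT NONE
      INTEGER M, N
      DOUBLE PRECISION X(N), F(M)
      F(1) = X(1) - 1.0D0
      F(2) = X(2) - 2.0D0
      F(3) = X(1)*X(2) - 2.0D0
      END

The branch for RCI_REQUEST = 2 is the relevant one here: the Jacobian is delegated to DJACOBI, which builds it by central differences from calls to the residual routine, so the user writes no derivative code at all.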

The comments by Guzik pertain to the NLS solvers, not to the case where the objective function is scalar.
