Constrained optimization with DAAL

Hello all,

Can someone help me answer my question: is DAAL suitable for convex constrained optimization?

As stated in the article (https://software.intel.com/en-us/daal-programming-guide-objective-function), the proximal operator can be used for the non-smooth part of the objective function, and the example (https://software.intel.com/en-us/daal-programming-guide-logistic-loss) shows this for L1 regularization. On the other hand, if the non-smooth part M(theta) is just an indicator (characteristic) function of some convex set (the constraints), the proximal operator is exactly the projection operator onto that set.
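To make the parallel concrete, here is a small sketch (not DAAL code, just plain NumPy) of the two proximal operators mentioned above: the prox of the L1 term is elementwise soft-thresholding, while the prox of an indicator function is a projection. The nonnegative orthant is used as an illustrative choice of convex set C:

```python
import numpy as np

def prox_l1(x, step):
    """Proximal operator of step * ||x||_1: elementwise soft-thresholding.

    prox(x) = sign(x) * max(|x| - step, 0)
    """
    return np.sign(x) * np.maximum(np.abs(x) - step, 0.0)

def prox_indicator_nonneg(x):
    """Proximal operator of the indicator function of C = {x : x >= 0}:
    the Euclidean projection onto C (independent of the step size)."""
    return np.maximum(x, 0.0)

x = np.array([-2.0, 0.5, 3.0])
print(prox_l1(x, 1.0))            # soft-thresholding: [-1.  0.  2.]
print(prox_indicator_nonneg(x))   # projection:        [0.  0.5 3. ]
```

The point is that both operators have exactly the same interface (a point in, a point out), which is why plugging a projection into the non-smooth slot looks natural.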

Is it possible to pass this projection operator to the objective function object to handle convex constraints that way?

Thanks! Your help is much appreciated,

Dmitry.

While DAAL provides a set of objective functions out of the box, it also supports user-provided objective functions. So, if you need to use an optimization solver with a custom objective function, you should implement that function yourself and pass it into the appropriate optimization solver.

If your objective function is similar to one that DAAL provides, you can use inheritance to simplify the implementation.

Please also note that not every optimization solver supports objective functions with a non-smooth part.

Could you please provide some details about your specific projection operator, including the kind of constraints you use?

Hi Mikhail,

Thanks for the reply. 

My optimization problem is as follows:

f(x) -> min 

with convex constraints C = { x | A*x >= 0 }, where A is a matrix and x ∈ R^n.

The target function f is one of DAAL's out-of-the-box functions (logistic regression).

The tricky part is the constraints. My idea was to introduce the non-smooth part as an indicator function of the set C, specifically

I_C(x) = { 0 if x ∈ C, +infinity otherwise },

so that the original problem reduces to the unconstrained optimization problem

f(x) +  IC(x) -> min 

The proximal operator prox_η (https://software.intel.com/en-us/daal-programming-guide-objective-function)

of I_C(x) is just the Euclidean projection onto the convex set C. I can implement that projection as a prox : R^n -> R^n function.
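For a single linear constraint, this projection even has a closed form. A minimal sketch (plain NumPy, not DAAL): the Euclidean projection onto one half-space {x : a·x >= 0} leaves feasible points unchanged and moves infeasible points perpendicularly onto the boundary. For the full polyhedron {x : A*x >= 0} with several rows, the projection generally requires solving a small QP or using a cyclic scheme such as Dykstra's algorithm, so the function below covers only the one-row case:

```python
import numpy as np

def project_halfspace(x, a):
    """Euclidean projection onto the half-space {x : a.dot(x) >= 0}.

    If a.dot(x) >= 0, x is already feasible and is returned unchanged;
    otherwise x is moved along -a just far enough to reach the
    boundary a.dot(x) = 0.
    """
    v = a @ x
    if v >= 0:
        return x
    return x - (v / (a @ a)) * a

a = np.array([1.0, 1.0])
x = np.array([-2.0, 0.0])      # a.dot(x) = -2 < 0, infeasible
p = project_halfspace(x, a)
print(p)                        # [-1.  1.], which satisfies a.dot(p) = 0
```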

The question is: is it possible to pass or override the proximal operator (the projection, in my case) within an optimization solver?

If this approach does not work, what would be another way to solve the constrained problem f(x) -> min, x ∈ C, with DAAL?

Thanks and regards,

Dmitry

Hi, Dmitry.

With the current DAAL API it's impossible to pass or override the proximal projection within a DAAL solver (or within any DAAL-implemented objective function).

As Mikhail pointed out above, you should create your own custom objective function with the proximal projection you need (an example of creating a custom objective function can be found in src/examples/cpp/source/optimization_solvers/custom_obj_func.h).

You can also simplify this task by inheriting from the daal::algorithms::optimization_solver::logistic_loss::BatchContainer and daal::algorithms::optimization_solver::logistic_loss::Batch classes. You need to override the Batch::initialize() method to create your inherited BatchContainer, and implement BatchContainer::compute similarly to the LogLossKernel<algorithmFPType, method, cpu>::doCompute method (see src/algorithms/kernel/objective_function/logistic_loss/logistic_loss_dense_default_batch_impl.i).

Your own implementation of the compute method can stay the same as DAAL's for the value/hessian/gradient/lipschitzConstant results, while the proximal projection computation can be implemented exactly as you need (the DAAL proximal projection can be found at src/algorithms/kernel/objective_function/logistic_loss/logistic_loss_dense_default_batch_impl.i:145).
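For intuition about what the solver does with such a custom prox, here is a generic sketch of the proximal-gradient iteration (plain NumPy, not the DAAL API): the solver alternates a gradient step on the smooth part f with an application of the prox of the non-smooth part, so a custom projection prox turns the iteration into projected gradient descent. The toy f below is a hypothetical stand-in for the logistic loss:

```python
import numpy as np

def proximal_gradient(grad_f, prox, x0, step=0.1, n_iter=200):
    """Generic proximal-gradient iteration:
        x_{k+1} = prox( x_k - step * grad_f(x_k), step )
    With prox = projection onto C, this is projected gradient descent.
    """
    x = x0.copy()
    for _ in range(n_iter):
        x = prox(x - step * grad_f(x), step)
    return x

# Toy smooth part: f(x) = 0.5 * ||x - c||^2, with gradient x - c,
# minimized subject to x >= 0 via the projection prox.
c = np.array([-1.0, 2.0])
grad_f = lambda x: x - c
prox = lambda x, step: np.maximum(x, 0.0)   # projection onto {x : x >= 0}

x_star = proximal_gradient(grad_f, prox, np.zeros(2))
print(x_star)   # approximately [0. 2.]: the projection of c onto the orthant
```

In the DAAL setting, the gradient step would come from the inherited logistic-loss compute, and only the prox line would be replaced by your projection.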

Hi Kirill,

Thanks for the information.

Could you please clarify how to change the implementation of the compute() function in custom_obj_func.h to add the projection? I tried to do it exactly as for the gradient computation, but it did not work.

Thanks, I really appreciate your help.

Dmitry.
