MKL Data fitting, log-linear interpolation


A log-linear interpolation can be calculated with MKL Data Fitting if one applies the toolbox to the log-scaled values and then applies the exponential function to the result of dfdInterpolate1D.
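To make the idea concrete, here is a minimal pure-Python sketch of that scheme, assuming a linear spline on log-scaled values. The function name `loglinear_interp` and the manual cell search are illustrative only; in the real setup the linear interpolation step would be done by dfdInterpolate1D on log(y).

```python
import math
from bisect import bisect_right

def loglinear_interp(x, y, t):
    """Log-linear interpolation: interpolate linearly in log(y) over x,
    then apply exp -- mirroring a spline built on log-scaled values with
    the exponential applied to the dfdInterpolate1D result."""
    # Locate the cell [x[j], x[j+1]) containing t (clamped to valid cells).
    j = min(max(bisect_right(x, t) - 1, 0), len(x) - 2)
    w = (t - x[j]) / (x[j + 1] - x[j])  # relative position inside the cell
    log_val = (1.0 - w) * math.log(y[j]) + w * math.log(y[j + 1])
    return math.exp(log_val)
```

For data that is exactly exponential between breakpoints, e.g. x = [0, 1, 2] with y = [1, e, e^2], this reproduces exp(t) at any t in between.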

What about the integration? One has to apply the exponential function to each integration segment. My first idea was to use the callback mechanism of dfdIntegrateEx1D.

If I understand correctly, I have to implement the integration myself. That is not a big deal, but any performance improvements for the integration part then seem limited to managed-code performance (I use a .NET wrapper for the MKL functionality, and the callback method is managed code as well).

Kind regards

Markus Wendt


I can see your point: a non-linear interpolation applies a function to the input, performs the interpolation, and applies the inverse of the initial function to the final result. This is a lucky case, since the input data can be transformed efficiently, i.e., in a vectorized manner. An integration, however, cannot simply transform the input and back-transform the result; instead, a callback function is invoked for each interval of the integration. This is inherently an elemental (per-interval) process, as opposed to the vectorizable process described above. In addition, the callback function is also managed code in your case. As a first step, would it be possible for you to provide this callback as a native function? As an alternative, you could also wrap your non-linear integration scheme entirely in a native function.

Hi Markus,

Use of the callback mechanism with IntegrateEx1D in the scenario you described is one possible option. Assuming that you need to apply the exponential function to each integration segment, I wonder if the standard Integrate1D() routine would work for you as described below.

The API of the df[d|s]Integrate1D function allows you to compute nlim integrals over intervals whose boundaries are provided in the arrays llim and rlim. If you specify the boundaries of each segment, you get an array of nlim "elementary" integrals. You can then apply the transformation to this array using the Intel(R) MKL v[d|s]Exp() function or the compiler's exp() function. Finally, you accumulate the transformed integrals using a straightforward loop or the sample-sums mechanism available in Intel MKL 11.1 Summary Statistics.
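The suggested workflow (elementary integrals, then vectorized exp, then accumulation) can be sketched in pure Python. This is only an illustration of the arithmetic, not the MKL API: for a linear spline, the integral over a full cell [x[i], x[i+1]) is a trapezoid, which stands in for what df[d|s]Integrate1D would return with llim = x[:-1] and rlim = x[1:]; the exp-and-sum step stands in for v[d|s]Exp plus a summation.

```python
import math

def elementary_integrals(x, y):
    """Integrals of a linear spline over each cell [x[i], x[i+1]).
    For a linear spline each cell integral is a trapezoid:
    (y[i] + y[i+1]) / 2 * h."""
    return [(y[i] + y[i + 1]) / 2.0 * (x[i + 1] - x[i])
            for i in range(len(x) - 1)]

def transformed_sum(integrals):
    """Apply exp to each elementary integral (the v[d|s]Exp step)
    and accumulate them (the loop / sample-sums step)."""
    return sum(math.exp(v) for v in integrals)
```

For example, with x = [0, 1, 2] and y = [0, 2, 4], the elementary integrals are [1, 3] and the transformed sum is e + e^3.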

To speed up computation of the elementary integrals, you can also specify hints (parameters llimhint and rlimhint) that describe the structure of the integration limits. In this specific usage model the limits are, at least, expected to be sorted.

Please let us know if this helps, or if your usage model assumes a different order of computations.



Hi Markus,

Can you please help me better understand why you can't use the standard Integrate1D() routine without the callback mechanism in your specific spline usage scenario?

Let x(i), i=1,...,n be the (log-scaled) breakpoints, and y(i) the function values. You compute the integral over the interval [t1, t2), where point t1 belongs to the cell [x(j), x(j+1)) and t2 to the cell [x(l), x(l+1)). The integral is decomposed into a sum of elementary integrals over the segments [t1, x(j+1)), [x(j+1), x(j+2)), ..., [x(l), t2). You need to apply exponentiation to each such elementary integral; summation of the transformed elementary integrals gives the result you need. Does this correctly describe your computations?

If this is correct, the standard integration routine Integrate1D (which does not require callbacks) may be used to compute the array of elementary integrals over the segments [x(i), x(i+1)), which should then be transformed and combined into the final result as described above. Before the integration you also need to compute the indices of the cells that contain the points t1 and t2, using the SearchCells1D() API. No callback mechanism is necessary in this case. Exponentiation and summation of the elementary integrals are done on the side of your application, using, for example, another Intel(R) MKL API or compiler functions.
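The decomposition step described above can be sketched as follows, assuming t1 and t2 lie strictly inside the partition and on distinct breakpoints. Here `bisect_right` plays the role of the SearchCells1D() cell lookup; the returned boundary list yields llim = boundaries[:-1] and rlim = boundaries[1:] for the subsequent Integrate1D call.

```python
from bisect import bisect_right

def segment_boundaries(x, t1, t2):
    """Boundaries of the elementary segments
    [t1, x[j+1]), [x[j+1], x[j+2]), ..., [x[l], t2),
    where j and l are the cells containing t1 and t2.
    bisect_right stands in for the SearchCells1D() lookup."""
    j = bisect_right(x, t1) - 1  # cell index of t1
    l = bisect_right(x, t2) - 1  # cell index of t2
    return [t1] + list(x[j + 1 : l + 1]) + [t2]
```

For example, with breakpoints x = [0, 1, 2, 3], the interval [0.5, 2.5) splits into the boundaries [0.5, 1, 2, 2.5], i.e. three elementary segments.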


Hi Markus,

Thank you very much for the clarification. Please keep in mind the following considerations:

 - Performance aspects of either approach are determined, in particular, by the problem size. If the size of the partition {x} is big enough, your application might benefit from the internal parallelization of the Integrate1D function.

 - Hints about the structure of the integration limits provided to the routine (for example, that the limits are sorted, which should be the case in this specific scenario) help the library arrange the computation of the integrals in a more effective way.

 - The log/exp transformation can be done in a vectorized way.

Please let us know if you run into any questions on the use of the Data Fitting functions. Feel free to share your performance experience with either approach, if possible.


