I've noticed that if I call IPP functions (e.g. ippsMulC_64f_I) with either Ipp64f or double, the result seems identical: no warnings, same performance.
What's the benefit of using the IPP's native Data Types?
I hate every library defining its own types instead of using the standard uint32_t, etc.
The benefits are cross-platform support and unification. Different operating systems and compilers can have different type definitions. To avoid this potential problem, many libraries and SDKs introduce their own type definitions.
For example, Ipp64s can be defined as "__int64" for Microsoft compilers or "long long" for GCC.
Pavel, what's the downside of IPP adopting the standard int64_t?
I understand your point, and in today's reality it seems logical to use the native data types, but it took many years to reach the current situation where native and standard data types are aligned on all OSes. int64_t was introduced with the C99 standard in 2000, while the IPP library's starting point is earlier, and the IPP library supports both the ANSI C and C99 standards.
So in general it is a historical architecture that we follow.
But if I use double instead of Ipp64f, are there any performance differences?
Or is it only a problem of definitions across different OSes?
If you use double instead of Ipp64f, there should be no performance difference at all. The IPP data types are introduced for alignment across different OSes. If you look in ippbase.h, you will see:
typedef double Ipp64f;
I think Pavel provided clear answers. Do you have any other questions, or do you need more support regarding this matter?
Yes :) In that case, since I only use float or double, it seems I can simply use float and double directly ;) They are defined identically on different machines/systems... right?