Unable to fix a bug, '32-bit int' problem?


Hi,

I'm working on the CLPP OpenCL library and can't find a bug, so I'm requesting some engineering support from the Intel OpenCL team... if possible, of course.

The goal is to develop a radix-sort algorithm. The problem is that I'm unable to sort on 16 bits and 32 bits (all other widths work fine). So, my questions are:

1) Is the OpenCL 'int' representation different from the Win 7 64-bit 'int'? (I think not!)
2) Do I correctly generate random 'int' values for OpenCL?
3) Can you help me fix it? I've been trying to fix this for a long time!

BTW: the code is here: http://code.google.com/p/clpp/
You can change the "bit" sorting parameter in 'benchmark.cpp'.

Krys


We continue to provide support through this forum. Every report is committed, screened, and prioritized. We will fix bugs related to our implementation, but for that you must provide us with specific code that reproduces the bug and a description of the problem. We will not debug a general OpenCL application, as we receive many of those.

Regards,

Hello Polar01,

Answering your numbered questions:

1) Note that according to the OpenCL standard, 'int' is always "a signed two's complement 32-bit integer" on every platform that runs OpenCL. The matching type for your host application is "cl_int".
2) The same way as any random-number-generation technique implemented in C; you can find any number of suitable algorithms in books and online. RNGs usually depend on some OS-specific seed (time, PIDs, user actions), which can be passed from the host to the kernels. Alternatively, a whole buffer of pre-generated random numbers can be provided from the host to the kernels. It all depends on the specific needs of your application.
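One host-side pitfall worth ruling out: on MSVC, RAND_MAX is only 32767, so a single rand() call produces just 15 bits and never sets the upper bits of a 32-bit key, meaning test data built that way never exercises a 16- or 32-bit sort's high digits. A sketch of generating a full-range host buffer with &lt;random&gt; (the function name is illustrative, not from the CLPP code):

```cpp
#include <cstdint>
#include <limits>
#include <random>
#include <vector>

// Hypothetical helper: fills a host buffer with uniformly distributed
// 32-bit signed integers; upload it afterwards with clEnqueueWriteBuffer.
// Unlike a single rand() call (15 bits on MSVC), mt19937 covers the
// full 32-bit range, including negative values.
std::vector<std::int32_t> makeRandomInts(std::size_t n, std::uint32_t seed) {
    std::mt19937 rng(seed);  // fixed seed -> reproducible test data
    std::uniform_int_distribution<std::int32_t> dist(
        std::numeric_limits<std::int32_t>::min(),
        std::numeric_limits<std::int32_t>::max());
    std::vector<std::int32_t> out(n);
    for (auto& v : out) v = dist(rng);
    return out;
}
```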
3) It is not clear what it is that you want us to help you fix. Can you be more specific?

In general, wherever the behavior of the Intel OpenCL SDK doesn't match your expectations, please provide a minimal code sample and describe the behavior you expect vs. the one you're seeing.

Good luck

Thanks a lot for your replies,
I completely agree with you, but to be honest I have found no way to debug this problem. I generate my 'int' values the way I generate normal C++ ints, so it should be the same, I think! (I'm on Win7 64-bit.)

I have tried several methods to generate my random numbers, but even when clamping my values (something like X=max(min(0, X), MAXVALUE)) I still get wrong results! (I have even used the way NVIDIA generates numbers for its samples.) So maybe there is a problem in the way I generate my numbers... but I still think that OpenCL ints are like C++ ints! So I don't see the difference!

What is strange is that I have two sort algorithms, one for CPU and one for GPU, and I have the same problem with both! They don't share much code... mainly the way I generate the numbers.

I don't know where the problem is coming from, and because there is no OpenCL debugger for the CPU I don't know how to proceed (except printf!). I don't know "what" can be "fixed" because I have no idea what the problem is. Maybe it is a bug "I" have made and I just need some advice :-P

BTW: to debug, you don't need to test the whole program, just the CPU radix sort in "benchmark.cpp" (this class is a unit-testing class, so you can enable/disable each algorithm).

Thanks
Krys
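Without a CPU-side debugger, one practical approach is to validate the kernel output against std::sort on the host and report where the first disagreement occurs. A sketch, assuming the sorted keys are read back into a std::vector (the function name is hypothetical). Also worth checking, since the failure appears only at full key widths: a radix sort over raw bit patterns orders negative values after positive ones, so implementations typically XOR each signed key with 0x80000000 before sorting and undo it afterwards.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical checker: sorts a copy of the original input with std::sort
// and returns the index of the first element where the device result
// disagrees, or -1 if the result is fully correct.
long firstMismatch(const std::vector<std::int32_t>& input,
                   const std::vector<std::int32_t>& deviceResult) {
    std::vector<std::int32_t> reference(input);
    std::sort(reference.begin(), reference.end());
    for (std::size_t i = 0; i < reference.size(); ++i)
        if (reference[i] != deviceResult[i]) return static_cast<long>(i);
    return -1;  // every element matches the std::sort reference
}
```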
