Syncing on blocking buffer reads and on clFinish() gives drastically different CPU load

I am developing an app that requires data to be transferred back to the host after almost every kernel call (a flag is returned).

Usually I do the processing this way:

clEnqueueNDRangeKernel(cq, ...);
clEnqueueReadBuffer(cq, ..., CL_TRUE, ...);  /* blocking read */

So the queue is synced on the blocking read. This works fine on AMD GPUs/APUs with only a few percent CPU load, but on an Intel GPU it leads to constant 100% CPU usage (the app fully occupies one CPU core the whole time).

When I tried this sequence instead:

clEnqueueNDRangeKernel(cq, ...);
clFinish(cq);
clEnqueueReadBuffer(cq, ..., CL_TRUE, ...);

the CPU load dropped considerably. So it looks like syncing on clFinish() and syncing on a blocking buffer read work quite differently in the Intel OpenCL runtime. Why is that? Is this behavior in agreement with the OpenCL standard?
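
For reference, here is a minimal sketch of the two patterns in plain C (the kernel, buffer, and sizes are placeholders, not the real ones from my app):

#include <CL/cl.h>

/* Pattern A: sync on the blocking read. */
void sync_on_read(cl_command_queue cq, cl_kernel kernel, cl_mem buf,
                  size_t gws, cl_int *flag)
{
    clEnqueueNDRangeKernel(cq, kernel, 1, NULL, &gws, NULL, 0, NULL, NULL);
    /* CL_TRUE = blocking: returns only after the kernel and copy complete. */
    clEnqueueReadBuffer(cq, buf, CL_TRUE, 0, sizeof(cl_int), flag,
                        0, NULL, NULL);
}

/* Pattern B: sync on clFinish(), then do the (already finished) read. */
void sync_on_finish(cl_command_queue cq, cl_kernel kernel, cl_mem buf,
                    size_t gws, cl_int *flag)
{
    clEnqueueNDRangeKernel(cq, kernel, 1, NULL, &gws, NULL, 0, NULL, NULL);
    clFinish(cq);
    clEnqueueReadBuffer(cq, buf, CL_TRUE, 0, sizeof(cl_int), flag,
                        0, NULL, NULL);
}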

Well, things actually look even stranger.

CPU usage decreases when I put additional sync points (i.e., clFinish(cq) calls) even between kernel enqueues.

So, 

clEnqueueNDRangeKernel(cq, kernel1, ...);
clEnqueueNDRangeKernel(cq, kernel2, ...);

will consume more CPU (though with a shorter overall execution time; the kernels execute on the GPU, of course) than

clEnqueueNDRangeKernel(cq, kernel1, ...);
clFinish(cq);
clEnqueueNDRangeKernel(cq, kernel2, ...);
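
To quantify it, I wrap both variants in a loop and compare process CPU time against wall-clock time, roughly like this (a sketch assuming POSIX clocks; the queue and kernels are set up elsewhere, error checking omitted):

#include <CL/cl.h>
#include <stdio.h>
#include <time.h>

static double now(clockid_t id)
{
    struct timespec ts;
    clock_gettime(id, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Enqueues `iters` pairs of kernels; if sync_between is nonzero,
   clFinish() is called between the two enqueues of each pair. */
void measure(cl_command_queue cq, cl_kernel kernel1, cl_kernel kernel2,
             size_t gws, int iters, int sync_between)
{
    double wall0 = now(CLOCK_MONOTONIC);
    double cpu0  = now(CLOCK_PROCESS_CPUTIME_ID);
    for (int i = 0; i < iters; ++i) {
        clEnqueueNDRangeKernel(cq, kernel1, 1, NULL, &gws, NULL, 0, NULL, NULL);
        if (sync_between)
            clFinish(cq);
        clEnqueueNDRangeKernel(cq, kernel2, 1, NULL, &gws, NULL, 0, NULL, NULL);
    }
    clFinish(cq);  /* drain the queue before reading the clocks */
    printf("wall: %.3f s, CPU: %.3f s\n",
           now(CLOCK_MONOTONIC) - wall0,
           now(CLOCK_PROCESS_CPUTIME_ID) - cpu0);
}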

Any comments from the OpenCL runtime development team?

Hi,

Both the blocking read and clFinish() have similar performance. The behavior is not identical, but you shouldn't see much of a performance difference. Is it possible to provide a repro?

Thanks,
Raghu
