The following code snippet measures kernel execution time using OpenCL™ profiling events (error handling is omitted):
g_cmd_queue = clCreateCommandQueue(…CL_QUEUE_PROFILING_ENABLE, NULL);
clEnqueueNDRangeKernel(g_cmd_queue, …, &perf_event);
clWaitForEvents(1, &perf_event);

cl_ulong start = 0, end = 0;
clGetEventProfilingInfo(perf_event, CL_PROFILING_COMMAND_START, sizeof(cl_ulong), &start, NULL);
clGetEventProfilingInfo(perf_event, CL_PROFILING_COMMAND_END, sizeof(cl_ulong), &end, NULL);
// END - START approximates the "pure" hardware execution time of the command.
// The event timestamps have a resolution of 1e-09 seconds (nanoseconds),
// so multiplying the difference by 1e-06 converts it to milliseconds.
g_NDRangePureExecTimeMs = (cl_double)(end - start) * (cl_double)(1e-06);
- The command queue must be enabled for profiling (the CL_QUEUE_PROFILING_ENABLE property) at the time of creation.
- You need to synchronize explicitly using clFinish() or clWaitForEvents(), because the device time counters for the profiled command are associated with the specified event and are available only after the command has completed (see the sketch after this list for the clFinish() variant).
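As an alternative to waiting on the event itself, a minimal sketch that synchronizes with clFinish() before reading the profiling counters might look like the following (g_cmd_queue, kernel, global_size, and local_size are assumed to exist already; error handling is again omitted):

cl_event perf_event = NULL;
clEnqueueNDRangeKernel(g_cmd_queue, kernel, 1, NULL, &global_size, &local_size,
                       0, NULL, &perf_event);

// Block until every command in the queue has finished; after this point the
// profiling counters attached to perf_event are guaranteed to be valid.
clFinish(g_cmd_queue);

cl_ulong start = 0, end = 0;
clGetEventProfilingInfo(perf_event, CL_PROFILING_COMMAND_START, sizeof(cl_ulong), &start, NULL);
clGetEventProfilingInfo(perf_event, CL_PROFILING_COMMAND_END, sizeof(cl_ulong), &end, NULL);
cl_double exec_time_ms = (cl_double)(end - start) * 1e-06;  // nanoseconds -> milliseconds
clReleaseEvent(perf_event);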
In this way you can profile operations on both memory objects and kernels. Refer to the OpenCL™ 1.2 Specification for a detailed description of profiling events.
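For example, the same mechanism can time a host-to-device transfer. The buffer, host pointer, and size below are hypothetical names used only for illustration, and error handling is omitted:

cl_event copy_event = NULL;
// Non-blocking write; completion is observed through the event.
clEnqueueWriteBuffer(g_cmd_queue, device_buffer, CL_FALSE, 0, buffer_size_bytes,
                     host_ptr, 0, NULL, &copy_event);
clWaitForEvents(1, &copy_event);

cl_ulong start = 0, end = 0;
clGetEventProfilingInfo(copy_event, CL_PROFILING_COMMAND_START, sizeof(cl_ulong), &start, NULL);
clGetEventProfilingInfo(copy_event, CL_PROFILING_COMMAND_END, sizeof(cl_ulong), &end, NULL);
cl_double transfer_time_ms = (cl_double)(end - start) * 1e-06;  // nanoseconds -> milliseconds
clReleaseEvent(copy_event);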
Measuring the same operation with a host-side wall-clock timer might return different results, since it also includes host-side overhead such as command submission and synchronization. For the CPU device the difference is typically negligible.
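A minimal sketch of the corresponding host-side measurement, shown here with the POSIX clock_gettime() monotonic clock as one possible timer (any monotonic host timer works; kernel, global_size, and local_size are assumed to exist already):

#include <time.h>

struct timespec t0, t1;
clock_gettime(CLOCK_MONOTONIC, &t0);

clEnqueueNDRangeKernel(g_cmd_queue, kernel, 1, NULL, &global_size, &local_size,
                       0, NULL, NULL);
clFinish(g_cmd_queue);  // ensure the command has actually finished before stopping the timer

clock_gettime(CLOCK_MONOTONIC, &t1);
// Wall-clock time includes submission and synchronization overhead, so it is
// usually somewhat larger than the event-based "pure" execution time.
double wall_time_ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) * 1e-6;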
The OpenCL™ 1.2 Specification is available at http://www.khronos.org/registry/cl/specs/opencl-1.2.pdf