Threading: Achieving Work-Group Level Parallelism
Since work-groups are independent, they can execute concurrently on
different hardware threads. Therefore, the number of work-groups should be
at least the number of logical cores. A larger number of work-groups gives
the scheduler more flexibility, at the cost of task-switching overhead.
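As a starting point, the work-group count can be derived from the device's
compute-unit count. The following sketch (OpenCL host API in C) queries
CL_DEVICE_MAX_COMPUTE_UNITS and sizes the NDRange so that the number of
work-groups is a multiple of that value; the work-group size of 64 and the
factor of 4 are illustrative assumptions, not recommendations from this guide.

/* Sketch: size the NDRange from the compute-unit count (assumptions:
 * work-group size 64, 4 work-groups per compute unit). */
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_uint num_cu = 0;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);

    /* On a CPU device, each logical core is a compute unit. */
    clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
                    sizeof(num_cu), &num_cu, NULL);

    /* Choose the global size so that the work-group count is a multiple
     * of the compute-unit count, avoiding the "one straggler" imbalance
     * discussed below. */
    size_t local_size  = 64;                  /* hypothetical WG size    */
    size_t work_groups = (size_t)num_cu * 4;  /* >= number of cores      */
    size_t global_size = work_groups * local_size;

    printf("compute units: %u, work-groups: %zu, global size: %zu\n",
           num_cu, work_groups, global_size);
    return 0;
}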
Also notice that in the opposite case, when the number of work-groups is
small relative to, for example, the value of CL_DEVICE_MAX_COMPUTE_UNITS,
even a small change in the number of work-groups can result in a significant
performance change. For example, if you run a number of work-groups equal
to CL_DEVICE_MAX_COMPUTE_UNITS, each compute unit processes exactly one
work-group, so under ideal conditions all threads finish at the same time.
Now consider the case when the work-group size is changed so that
CL_DEVICE_MAX_COMPUTE_UNITS+1 work-groups are executed instead. In that
case one compute unit does twice as much work as the others, which might
double the overall execution time. Some inherent thread divergence might
hide the effect. The negative effect of such “outstanding” work-groups
becomes less and less pronounced as the number of work-groups grows, since
the imbalance decreases at the same pace.
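To make the shrinking imbalance concrete, it can be modeled roughly as the
ratio between the load of the busiest compute unit and the average load,
assuming equal-cost work-groups and no other sources of divergence; the
compute-unit count of 8 below is a hypothetical example.

/* Rough model: with W equal-cost work-groups on C compute units, the
 * busiest unit processes ceil(W / C) groups while the average is W / C. */
#include <stdio.h>

static double imbalance(unsigned work_groups, unsigned compute_units)
{
    unsigned busiest = (work_groups + compute_units - 1) / compute_units;
    double   average = (double)work_groups / compute_units;
    return busiest / average;   /* estimated slowdown vs. perfect balance */
}

int main(void)
{
    unsigned cu = 8;            /* hypothetical compute-unit count */
    printf("%u work-groups: %.2fx\n", cu + 1,      imbalance(cu + 1, cu));       /* ~1.78x */
    printf("%u work-groups: %.2fx\n", 4 * cu + 1,  imbalance(4 * cu + 1, cu));   /* ~1.21x */
    printf("%u work-groups: %.2fx\n", 16 * cu + 1, imbalance(16 * cu + 1, cu));  /* ~1.05x */
    return 0;
}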
Notice that multiple cores of a CPU, as well as multiple CPUs in a
multi-socket machine, constitute a single OpenCL™ device; separate cores
are its compute units. The device fission extension enables you to control
compute unit use within a compute device. For more information on device
fission, refer to
OpenCL™ Device Fission for CPU Performance.
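As a rough illustration, the following sketch partitions a CPU device into
sub-devices using clCreateSubDevices from the OpenCL 1.2 core API (on
earlier runtimes the corresponding cl_ext_device_fission extension entry
points would be used instead); the partition size of 4 compute units per
sub-device is an arbitrary example.

/* Sketch: split a CPU device into sub-devices of 4 compute units each. */
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id   cpu;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &cpu, NULL);

    cl_device_partition_property props[] = {
        CL_DEVICE_PARTITION_EQUALLY, 4, 0
    };

    cl_uint num_sub = 0;
    clCreateSubDevices(cpu, props, 0, NULL, &num_sub);   /* query count */

    cl_device_id sub_devices[16];                        /* assume <= 16 */
    if (num_sub > 16)
        num_sub = 16;
    clCreateSubDevices(cpu, props, num_sub, sub_devices, NULL);

    printf("created %u sub-devices of 4 compute units each\n", num_sub);
    /* Each sub-device can then receive its own context and queue to
     * confine a workload to a subset of cores. */
    return 0;
}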
For the best performance and parallelism between work-groups, ensure
that the execution of a work-group takes at least 100,000 clock cycles.
With shorter work-groups, the task-switching overhead becomes a larger
proportion of the actual work.
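As a back-of-the-envelope check of this guideline, the measured kernel
wall time (for example, obtained from CL_PROFILING_COMMAND_START/END event
profiling) can be converted into an estimate of clocks per work-group; the
clock frequency, kernel time, and counts below are hypothetical.

/* Sketch: estimate clocks per work-group from a measured kernel time,
 * assuming full occupancy and equal-cost work-groups. */
#include <stdio.h>

int main(void)
{
    double   clock_ghz      = 3.0;    /* hypothetical core clock            */
    double   kernel_time_ns = 0.4e6;  /* hypothetical measured time: 0.4 ms */
    unsigned compute_units  = 8;      /* hypothetical CL_DEVICE_MAX_COMPUTE_UNITS */
    unsigned work_groups    = 256;    /* hypothetical work-group count      */

    /* With full occupancy each unit runs about work_groups / compute_units
     * groups back to back, so per-group clocks ~= wall clocks * units / groups. */
    double wall_clocks   = kernel_time_ns * clock_ghz;
    double clocks_per_wg = wall_clocks * compute_units / work_groups;

    printf("~%.0f clocks per work-group (guideline: >= 100,000)\n",
           clocks_per_wg);
    /* 0.4e6 ns * 3.0 * 8 / 256 ~= 37,500 clocks: below the guideline, so
     * each work-group should be given more work. */
    return 0;
}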