• 10/30/2018

Work-Group Size Considerations

The recommended work-group size for kernels is a multiple of 4, 8, or 16, depending on the Single Instruction Multiple Data (SIMD) width for the float and int data types supported by the CPU. The automatic vectorization module packs work-items into SIMD packets of 4/8/16 items (for double as well) and processes the rest (the "tail") of the work-group in a scalar way. In other words, a work-group of size 2*SIMD_WIDTH executes faster than a work-group of size 2*SIMD_WIDTH-1.
For example, a work-group of 64 elements is assigned to one hardware thread. The thread iterates over the work-items in a loop of four iterations, using 16-wide vector instructions in each iteration. In some cases, the compiler may decide to unroll by 32 elements instead to expose more instruction-level parallelism.
It is recommended to let the OpenCL™ implementation automatically determine the optimal work-group size for a kernel: pass NULL for the work-group size pointer when calling clEnqueueNDRangeKernel.
If you want to experiment with work-group size, you need to consider the following:
  • To get the best performance from the vectorization module (see the Benefitting from Implicit Vectorization section), the work-group size must be at least, and a multiple of, 4, 8, or 16, depending on the SIMD width supported by the CPU. Otherwise, the runtime may wrongly guess a work-group size of one, which results in running the scalar code for the kernel.
  • To accommodate multiple architectures, query the device for the CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE parameter by calling clGetKernelWorkGroupInfo and set the work-group size accordingly.
  • To reduce the overhead of maintaining a work-group, create work-groups that are as large as possible, which means 64 or more work-items. One upper bound is the size of the accessed data set: it is better for a single work-group not to exceed the size of the L1 cache. There should also be a sufficient number of work-groups; see the Work-Group Level Parallelism section for more information.
  • If your kernel code contains a barrier instruction, the work-group size becomes a tradeoff. The more local and private memory each work-item in the work-group requires, the smaller the optimal work-group size is. The reason is that a barrier also issues copy instructions for the total amount of private and local memory used by all work-items in the work-group, since the state of each work-item that arrives at the barrier is saved before proceeding with another work-item. Make the work-group size a multiple of 4, 8, or 16; otherwise, the scalar version of the resulting code might execute.

Product and Performance Information

1. Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804