A partitioner specifies how a loop template should partition its work among threads.


The default behavior of the loop templates parallel_for, parallel_reduce, and parallel_scan tries to recursively split a range into enough parts to keep processors busy, not necessarily splitting as finely as possible. An optional partitioner parameter enables other behaviors to be specified, as shown in the table below. The first column of the table shows how the formal parameter is declared in the loop templates. An affinity_partitioner is passed by non-const reference because it is updated to remember where loop iterations run.
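For illustration, a partitioner object is passed as an optional trailing argument to the loop template. The following is a minimal sketch, not taken from the source: the function name, the array and its size, and the scaling operation are placeholders, and it assumes a lambda-capable compiler with the classic tbb/ headers.

    #include <cstddef>
    #include "tbb/parallel_for.h"
    #include "tbb/blocked_range.h"
    #include "tbb/partitioner.h"

    void ScaleArray(float* a, std::size_t n) {
        // No partitioner argument: behaves as if auto_partitioner() were passed.
        tbb::parallel_for(tbb::blocked_range<std::size_t>(0, n),
            [=](const tbb::blocked_range<std::size_t>& r) {
                for (std::size_t i = r.begin(); i != r.end(); ++i)
                    a[i] *= 2.0f;
            });

        // Explicit partitioner supplied as the optional trailing argument.
        tbb::parallel_for(tbb::blocked_range<std::size_t>(0, n),
            [=](const tbb::blocked_range<std::size_t>& r) {
                for (std::size_t i = r.begin(); i != r.end(); ++i)
                    a[i] *= 2.0f;
            },
            tbb::simple_partitioner());
    }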



Partitioner: const auto_partitioner& (default)

Loop Behavior: Performs sufficient splitting to balance load, not necessarily splitting as finely as Range::is_divisible permits. When used with classes such as blocked_range, the selection of an appropriate grainsize is less important, and often acceptable performance can be achieved with the default grain size of 1.

Note: In Intel® Threading Building Blocks (Intel® TBB) 2.1, simple_partitioner was the default. Intel® TBB 2.2 changed the default to auto_partitioner to simplify common usage of the loop templates. To get the old default, compile with the preprocessor symbol TBB_DEPRECATED=1.

Partitioner: affinity_partitioner&

Loop Behavior: Similar to auto_partitioner, but improves cache affinity by its choice of mapping subranges to worker threads. It can improve performance significantly when a loop is re-executed over the same data set, and the data set fits in the cache. affinity_partitioner uses proportional splitting when it is enabled for a Range. (A usage sketch follows this table.)

Partitioner: const simple_partitioner&

Loop Behavior: Recursively splits a range until it is no longer divisible. The Range::is_divisible function is wholly responsible for deciding when recursive splitting halts. When used with classes such as blocked_range, the selection of an appropriate grainsize is critical to enabling concurrency while limiting overheads (see the discussion in the blocked_range Template Class section).
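The sketch below shows why an affinity_partitioner is declared outside the loop that re-executes over the same data (so it can remember where iterations ran between executions) and how simple_partitioner is typically paired with an explicit grainsize. The function names, the number of sweeps, and the grainsize of 10000 are illustrative assumptions, not values from the source.

    #include <cstddef>
    #include "tbb/parallel_for.h"
    #include "tbb/blocked_range.h"
    #include "tbb/partitioner.h"

    // Repeatedly updates the same array in place.
    void RelaxRepeatedly(float* a, std::size_t n, int sweeps) {
        // The affinity_partitioner outlives the repeated loop executions:
        // it is passed by non-const reference and updated each sweep with
        // where iterations ran, so later sweeps can be scheduled onto the
        // same threads to reuse cached data.
        tbb::affinity_partitioner ap;
        for (int s = 0; s < sweeps; ++s) {
            tbb::parallel_for(tbb::blocked_range<std::size_t>(0, n),
                [=](const tbb::blocked_range<std::size_t>& r) {
                    for (std::size_t i = r.begin(); i != r.end(); ++i)
                        a[i] = 0.5f * a[i];
                },
                ap);
        }
    }

    // With simple_partitioner, splitting continues until is_divisible()
    // returns false, so the explicit grainsize is what limits overhead.
    void ScaleWithExplicitGrainsize(float* a, std::size_t n) {
        tbb::parallel_for(tbb::blocked_range<std::size_t>(0, n, 10000),
            [=](const tbb::blocked_range<std::size_t>& r) {
                for (std::size_t i = r.begin(); i != r.end(); ++i)
                    a[i] *= 2.0f;
            },
            tbb::simple_partitioner());
    }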
