Non-Preemptive Priorities


Problem

Choose the next work item to do, based on priorities.


Context

The scheduler in Intel® Threading Building Blocks (Intel® TBB) chooses tasks using rules based on scalability concerns. The rules are based on the order in which tasks were spawned or enqueued, and are oblivious to the contents of tasks. However, sometimes it is best to choose work based on some kind of priority relationship.


Forces

  • Given multiple work items, there is a rule for which item should be done next that is not the default Intel® TBB rule.

  • Preemptive priorities are not necessary. If a higher priority item appears, it is not necessary to immediately stop lower priority items in flight. If preemptive priorities are necessary, then non-preemptive tasking is inappropriate. Use threads instead.


Solution

Put the work in a shared work pile. Decouple tasks from specific work, so that at execution time a task chooses the actual piece of work to be done from the pile.


Example

The following example implements three priority levels. The user interface for it and top-level implementation follow:

enum Priority {
   P_High, P_Medium, P_Low
};

template<typename Func>
void EnqueueWork( Priority p, Func f ) {
   WorkItem* item = new ConcreteWorkItem<Func>( p, f );
   ReadyPile.add(item);
}

The caller provides a priority p and a functor f to routine EnqueueWork. The functor may be the result of a lambda expression. EnqueueWork packages f as a WorkItem and adds it to global object ReadyPile.

Class WorkItem provides a uniform interface for running functors of unknown type:

// Abstract base class for a prioritized piece of work.
class WorkItem {
public:
   WorkItem( Priority p ) : priority(p) {}
   // Derived class defines the actual work.
   virtual void run() = 0;
   const Priority priority;
};

template<typename Func>
class ConcreteWorkItem: public WorkItem {
   Func f;
   /*override*/ void run() {
       f();
       delete this;
   }
public:
   ConcreteWorkItem( Priority p, const Func& f_ ) :
       WorkItem(p), f(f_)
   {}
};

Class ReadyPile contains the core pattern. It maintains a collection of work and fires off tasks that choose work from the collection:

class ReadyPileType {
   // One queue for each priority level
   tbb::concurrent_queue<WorkItem*> level[P_Low+1];
public:
   void add( WorkItem* item ) {
       level[item->priority].push(item);
       tbb::task::enqueue(*new(tbb::task::allocate_root()) RunWorkItem);
   }
   void runNextWorkItem() {
       // Scan queues in priority order for an item.
       WorkItem* item=NULL;
       for( int i=P_High; i<=P_Low; ++i )
           if( level[i].try_pop(item) )
               break;
       assert(item);
       item->run();
   }
};

ReadyPileType ReadyPile;

The task enqueued by add(item) does not necessarily execute that item. The task executes runNextWorkItem(), which may find a higher priority item. There is one task for each item, but the mapping resolves when the task actually executes, not when it is created.

Here are the details of class RunWorkItem:

class RunWorkItem: public tbb::task {
   /*override*/ tbb::task* execute(); // Private override of virtual method
};
...
tbb::task* RunWorkItem::execute() {
   ReadyPile.runNextWorkItem();
   return NULL;
}

RunWorkItem objects are fungible. They enable the Intel® TBB scheduler to choose when to do a work item, not which work item to do. The override of virtual method task::execute is private because all calls to it are dispatched via base class task.

Other priority schemes can be implemented by changing the internals of ReadyPileType. A priority queue could be used to implement fine-grained priorities.

The scalability of the pattern is limited by the scalability of ReadyPileType. Ideally, it should be implemented with scalable concurrent containers.