The join node in the Intel® Threading Building Blocks Graph Community Preview Feature

Intel recently introduced the new Graph Community Preview Feature as part of TBB 3.0 Update 5. The graph class and its associated nodes express a program as a set of computational and communication nodes that pass messages along connecting edges.

Several nodes manage the passing of messages between computational nodes. These nodes do not alter the messages, but may store, duplicate or aggregate them. Some examples are buffer, queue, broadcast and join nodes.

A join node accepts two or more inputs (up to ten in the current implementation), creates a std::tuple from the items it receives, and attempts to broadcast that std::tuple to its successors. If at least one successor accepts the std::tuple, the join node consumes its inputs; otherwise it waits for a successor to request a std::tuple.

This is a broad description of the action a join node takes, and there are a few ways this action can be implemented. In Intel® Threading Building Blocks 3.0 Update 5, the policy the join node follows in handling its inputs is reserving:

    • When a predecessor of the join node has an item to pass, the join node attempts to reserve that predecessor; while the reservation is held, no other node may consume the item.

    • If the join node cannot reserve an item at each of its predecessors, it “releases” any reservations it holds. This makes the released items available again to the other successors of those predecessors.

    • If the join can reserve an item at each of its predecessors, then it creates a std::tuple of those items and attempts to pass the std::tuple to its successors.

    • If a successor accepts the std::tuple, the join node “consumes” the items from its predecessors; otherwise it “releases” all items, making them available to other nodes that may accept them.
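The reserve-all-or-release protocol above can be sketched in plain C++. This is a toy model rather than the TBB implementation; the `Reservable` type, its member names, and the `succ_accepts` flag are all invented for illustration:

```cpp
#include <cassert>
#include <optional>
#include <tuple>

// Toy model of a reservable predecessor: it can hand out a reservation on
// its buffered item, then either consume (remove) or release the item.
struct Reservable {
    std::optional<int> item;   // the buffered item, if any
    bool reserved = false;

    bool try_reserve(int &out) {
        if (reserved || !item) return false;  // empty or already reserved
        reserved = true;
        out = *item;
        return true;
    }
    void consume() { item.reset(); reserved = false; }  // success: item removed
    void release() { reserved = false; }                // failure: item stays
};

// One "fire" attempt of a two-input reserving join.  succ_accepts models
// whether some successor accepts the tuple.
bool try_fire(Reservable &a, Reservable &b, bool succ_accepts,
              std::tuple<int, int> &out) {
    int va = 0, vb = 0;
    if (!a.try_reserve(va)) return false;
    if (!b.try_reserve(vb)) {      // cannot reserve every input:
        a.release();               // release the partial reservation
        return false;
    }
    out = {va, vb};
    if (succ_accepts) {            // tuple accepted: consume all inputs
        a.consume();
        b.consume();
        return true;
    }
    a.release();                   // tuple rejected: all items stay
    b.release();
    return false;
}
```

Note that a failed attempt leaves every predecessor's item in place, which is exactly the property the dining philosophers example below relies on.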

To be used as a predecessor to an input of a reserving join node, a node must be reservable. A reservable node buffers an item that no successor accepts, and if a successor join node reserves an item, the node rejects all other requests for items until the reserving join either consumes the item or releases it. In the current release, buffer, queue, priority queue, sequencer and source nodes are reservable. Other graph node types pass an item to each successor that accepts it, but if no successor immediately accepts the item, it is lost.
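The reservable behavior described above amounts to a small state machine: an unreserved item can be taken by anyone, but a reserved item blocks all other requests until the reservation ends. A minimal sketch in standard C++ (the class and method names here are invented, not the TBB interface):

```cpp
#include <optional>

// Sketch of a reservable buffer.  While an item is reserved by a join
// node, every other request for an item is rejected until the join
// either consumes the item or releases the reservation.
class ReservableBuffer {
    std::optional<int> item_;
    bool reserved_ = false;
public:
    void put(int v) { item_ = v; }        // buffer an item no successor took

    bool try_get(int &v) {                // ordinary successor pull
        if (reserved_ || !item_) return false;  // rejected while reserved
        v = *item_;
        item_.reset();
        return true;
    }
    bool try_reserve(int &v) {            // a join node reserves the item
        if (reserved_ || !item_) return false;
        reserved_ = true;
        v = *item_;
        return true;
    }
    void consume() { item_.reset(); reserved_ = false; }  // join took the item
    void release() { reserved_ = false; }                 // item available again
};
```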

Example of reserving join nodes with queues

In the example above, join node 1 has an item in each of its input queues, so it can reserve both and build an output std::tuple to pass to its successor. Join node 2 has an item at only one of its inputs, so it cannot build and pass a std::tuple to its successor.

In the dining philosophers problem, Mike used this aspect of reserving join nodes to solve the potential deadlock problem. Each queue holds at most one chopstick, and the join node will not take a chopstick from a queue until it can reserve both chopsticks.

In Intel® Threading Building Blocks 3.0 Update 6, the graph preview also has a join node with a queueing policy. Queueing join nodes have a first-in, first-out queue at each input, and so do not require reservable predecessors. If a queue node is succeeded by a queueing join node, the queue becomes a “pass-through” and does not hold any items. The policy of a queueing join is:

    • When a predecessor offers an item, the join node accepts it, placing it in a first-in, first-out queue for that input.

    • When there is at least one item in each of the join node’s input queues, it creates a std::tuple from the items at the front of the queues and attempts to pass it to its successors.

    • If a successor accepts the std::tuple, the join node removes the items it used from each of its input queues.

    • If no successor accepts the std::tuple, the items remain at the front of the join node’s input queues.
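The queueing policy can be sketched the same way as the reserving one. Again a toy model with invented names (`QueueingJoin`, `succ_accepts`), not the TBB implementation:

```cpp
#include <queue>
#include <tuple>

// Toy model of a two-input queueing join: items are accepted eagerly into
// per-input FIFO queues, so predecessors need not be reservable.  A tuple
// is built only when every input queue is non-empty.
struct QueueingJoin {
    std::queue<int> in0, in1;

    void put0(int v) { in0.push(v); }   // inputs are always accepted
    void put1(int v) { in1.push(v); }

    // One "fire" attempt.  succ_accepts models whether some successor
    // accepts the tuple.
    bool try_fire(bool succ_accepts, std::tuple<int, int> &out) {
        if (in0.empty() || in1.empty()) return false;  // need one item per input
        out = {in0.front(), in1.front()};
        if (!succ_accepts) return false;  // rejected: items stay at the front
        in0.pop();                        // accepted: remove the items used
        in1.pop();
        return true;
    }
};
```

The key contrast with the reserving policy is that inputs are consumed on arrival: an item offered to a queueing join is committed to that join, even if its partner items never show up.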

A question presents itself: what would happen in the dining philosophers problem if the policy of the joins were changed from reserving to queueing? This question will be addressed in my next post.
