YetiSim's Application of Threading Building Blocks

In earlier posts I reviewed three of the four winning submissions to the "Coding with TBB" contest, which was launched at OSCON and concluded on August 31. In this post I look at the fourth winning entry, AJ Guillon's YetiSim.

YetiSim is:

a discrete event simulation library for C++ built with Intel's Threading Building Blocks. YetiSim is an open source project, available under GPLv3. The objective of YetiSim is to provide a scalable and efficient simulation engine that is easy to use and understand.

One of the ways in which YetiSim stands out is documentation. The source code is well documented, and the YetiSim site includes class documentation generated by Doxygen. The source download also includes a Doxygen configuration file (Doxyfile), which lets you run Doxygen yourself to generate documentation that accurately reflects the YetiSim code you've downloaded. [Note that a Subversion repository will replace the current tar.gz file for downloads in the future.]

TBB component use in YetiSim

The YetiSim code I downloaded some weeks ago consists of about 3000 lines of code (*.h and *.cpp files). A grep of the *.cpp files for "tbb" reveals a different set of Threading Building Blocks components from what we saw in the other contest-winning applications. YetiSim applies the TBB parallel_reduce(), concurrent_vector, and blocked_range templates (while also calling task_scheduler_init, of course).

Among the *.cpp files, all of the TBB-related code resides in MasterScheduler.cpp. If you'd like to see a well-documented example of how to apply Threading Building Blocks within an application that was designed from scratch for implementation using TBB, YetiSim's MasterScheduler.cpp is a good place to start. The code has plenty of comments, making it straightforward to follow the coding logic as you browse the MasterScheduler class.

tbb::parallel_reduce() in YetiSim

MasterScheduler::run() invokes the TBB parallel_reduce() function in a statement of the following form:

tbb::parallel_reduce(
    tbb::blocked_range<...>(threadsToProcess.begin(), threadsToProcess.end()),
    executionObject, tbb::auto_partitioner());

In this call:


    • tbb::blocked_range identifies a set of threads that will be divided by TBB into blocks for processing;

    • executionObject defines the body;

    • tbb::auto_partitioner() designates that the TBB auto_partitioner is to be applied to select the grainsize [note: the auto_partitioner is described as a "preview feature" in the TBB 2.0 Reference Guide].

YetiSim is intended as a generic platform that provides finite state machine processing for applications where a state machine is a suitable model. Hence, the executionObject can be applied to any number of problems.

tbb::concurrent_vector() in YetiSim

YetiSim applies the tbb::concurrent_vector container in MasterScheduler.cpp, within MasterScheduler::performSerialOperations():

void MasterScheduler::performSerialOperations()
{
    // Iterate over elements to remove, and remove them
    i = _executionsToRemove.begin();
    // ...

    // Iterate over elements to add, and add them
    i = _executionsToAdd.begin();
    // ...

    // Clear the concurrent containers
    // ...
}

The application of the concurrent_vector (instead of a Standard Template Library vector) ensures thread safety for these operations.


The YetiSim project was designed from scratch with application of Threading Building Blocks in mind. In my conversations with YetiSim developer AJ Guillon on the #tbb IRC channel, I learned that his study of TBB directly shaped YetiSim's design. Thinking about it, that makes a lot of sense: TBB's structure can serve as a useful guideline, or template, for designing new multithreaded applications.

AJ Guillon's YetiSim, possibly the first open source platform designed through careful study of the structural organization of the Threading Building Blocks library, is a project that's worth looking into today and following in the future.

Kevin Farnham

O'Reilly Media

TBB Open Source Community

For more complete information about compiler optimizations, see our Optimization Notice.