CLRS III: Extension of the Threads

I've got a great wife. For my birthday she got me a copy of the newly updated Introduction to Algorithms, 3rd ed. by Cormen, Leiserson, Rivest, and Stein. (Amazon.com says this book dropped last week, but, when I checked as I wrote this, the MIT Press site says I shouldn't even be able to have a copy in the USA or Canada.)

Flipping through the pages, I noticed Chapter 27, Multithreaded Algorithms. That's new.

Reading more, I discovered that the chapter uses a dynamic multithreading model. This model supports two features: nested parallelism and parallel loops. The model needs only three keywords: parallel (to execute for-loop iterations concurrently), spawn (to create a new task/thread), and sync (to wait for all spawned children to finish). The set of keywords reminds me of Cilk. (The influence of Cilk and Cilk++ is acknowledged at the end of the chapter.)
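To give a feel for the model, here is a minimal sketch of the chapter's style of recursive Fibonacci (P-FIB) translated into plain Python. This is my own illustration, not the book's pseudocode: "spawn" becomes starting a thread, and "sync" becomes a join.

```python
import threading

def p_fib(n, result, i):
    # Recursive Fibonacci in the spirit of CLRS's P-FIB.
    # The child computes fib(n-1) while the parent computes fib(n-2).
    if n <= 1:
        result[i] = n
        return
    sub = [0, 0]
    child = threading.Thread(target=p_fib, args=(n - 1, sub, 0))
    child.start()            # "spawn": the child runs in parallel
    p_fib(n - 2, sub, 1)     # the parent keeps working meanwhile
    child.join()             # "sync": wait for the spawned child
    result[i] = sub[0] + sub[1]

out = [0]
p_fib(10, out, 0)
print(out[0])  # fib(10) = 55
```

Of course, spawning a raw OS thread per call is exactly the overhead the dynamic multithreading model hides from you; a real runtime like Cilk's schedules these logical tasks onto a fixed pool of workers.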

The authors state that the model allows programmers to "specify parallelism in applications without worrying about communication protocols, load balancing, and other vagaries of static-thread programming." This is a more practical model for parallel programming than something like the PRAM, and it is very similar to how parallel programming is done in OpenMP, Intel Threading Building Blocks, Microsoft's Task Parallel Library, and Cilk++. After covering some basics about the model, the chapter develops a matrix-matrix multiplication algorithm and a merge sort algorithm using dynamic multithreading.
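The parallel-loop idea is easy to sketch, too. The chapter's matrix examples use a parallel for over independent rows; here is a hedged Python approximation using a thread pool in place of the parallel keyword (matrix-vector rather than matrix-matrix, to keep it short):

```python
from concurrent.futures import ThreadPoolExecutor

def mat_vec(A, x):
    # "parallel for" over rows: each row's dot product is an
    # independent task, so the runtime is free to run them concurrently.
    def row_dot(row):
        return sum(a * xi for a, xi in zip(row, x))
    with ThreadPoolExecutor() as pool:
        return list(pool.map(row_dot, A))

A = [[1, 2], [3, 4]]
x = [5, 6]
print(mat_vec(A, x))  # [17, 39]
```

The point of the model is that the programmer only declares that the iterations are independent; how the work is divided among threads is the scheduler's problem, not yours.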

I look forward to the day when Introduction to Algorithms is 1200 pages of only parallel algorithms. (My meager attempt, The Art of Concurrency, pales in comparison to such a possible magnum opus.) And how sweet of an anniversary gift would that be from my wife?
