I've been curious about Threading Building Blocks atomic operations ever since I learned that they exist. In my 14 years of developing multithreaded applications on Unix and Windows systems, I never came across that kind of construct. Does something similar exist in traditional threading libraries, but I just didn't know about it because I was working at a much higher level as I threaded previously unthreaded applications?
James Reinders begins his discussion of atomic operations in his book "Intel Threading Building Blocks" as follows:
Atomic operations are a fast and relatively easy alternative to mutexes. They do not suffer from the deadlock and convoying problems [that are possible with mutexes]
How could such a thing be possible? That's what I wondered, having years of experience working with mutexes.
TBB atomic operation benefits and limitations
Well, the answer is a bit complicated. Atomic operations do appear to represent a "freebie" in terms of multithreaded software development, in that they are a simple means to guarantee thread safety for code that would normally not be thread-safe. But, there is a caveat: atomic operations are severely limited in terms of the situations where they can be applied:
The main limitation of atomic operations is that they are limited in current computer systems to fairly small data sizes: the largest is usually the size of the largest scalar, often a double-precision floating-point number.
In the applications I've worked on, an operation that fine-grained was rarely where the performance battle was won or lost. I was typically able to achieve sufficient multithreading by finding an appropriate loop that could be parallelized to fully utilize the available processors.
My interest in Threading Building Blocks atomic operations was increased when I saw that the TBB-threaded version of Intel's Destroy the Castle uses TBB atomic operations in many places. Naturally, I wondered why.
The Threading Building Blocks Tutorial (available on the TBB Documentation page) tells us:
When a thread performs an atomic operation, the other threads see it as happening instantaneously. The advantage of atomic operations is that they are relatively quick compared to locks, and do not suffer from deadlock and convoying. ... you should not pass up an opportunity to use an atomic operation in place of mutual exclusion. ... A classic use of atomic operations is for thread-safe reference counting.
Fundamental operations on atomic<T> variables
I'll conclude this post with a list of the fundamental operations that can be applied to a variable x of type atomic&lt;T&gt; (from the TBB Tutorial):

| = x | read the value of x |
| x = | write the value of x |
Because these operations happen atomically, they can be used safely without mutual exclusion.
At first glance, this set of operations can seem limiting. But looking back on 14 years of parallelizing code that was written for a single processor, I can easily think of multiple situations where an atomic operation would have been beneficial, saving significant programming and debugging grief compared with traditional threading.
Naturally, I intend to investigate TBB's atomic operations further!