Question about spin_rw_mutex v3 implementation

I'm probably missing something.

I don't understand why internal_try_acquire_writer() and internal_acquire_writer() check against WRITER | READERS, but not against WRITER_PENDING.

Won't this cause the following CAS operation to replace the value with WRITER, potentially erasing the WRITER_PENDING bit?

Assuming the WRITER lock is released before any new WRITER_PENDING gets set, a reader could then get in before the pending writer (whose bit was erased).

Is something else in the code preventing this scenario from happening, or is it expected/acceptable behavior?

Thanks in advance,

Maxime H.



If you look again, it's worse than that: the WRITER_PENDING flag is always cleared when a write lock is released, i.e., even if a new WRITER_PENDING was set.

But at least some writer got in there to do its thing, it's not as if readers would starve writers.

(Added) If the flag were not cleared as part of acquiring the lock (unless the contender itself had previously set it), or as part of releasing it, the odds of a write contender prevailing over a read contender would increase to some degree, but a later read contender could still get in there before an earlier write contender, because it's just a flag, not a count. Such a change is very easy to make, though, which raises the question how much the odds would be increased, and how beneficial or deleterious that would be. We wouldn't want writers to starve readers, either...

(Added) Perhaps it would work if the flag were only set if an internal_acquire_writer() attempt failed because of current reader ownership?

(Added) Clarification about "unless the contender itself had previously set it": just or'ing would be enough, it wouldn't have to be the first to actually change the bit from 0 to 1.

Less talk, more code (not clearing in release is simple enough):

    //! Acquire write lock on the given mutex.
    bool spin_rw_mutex_v3::internal_acquire_writer()
    {
        ITT_NOTIFY(sync_prepare, this);
        state_t mask = WRITER_PENDING; // optionally preserve, but only if it certainly did not originate here
        for( internal::atomic_backoff backoff;;backoff.pause() ) {
            const state_t s = tbb::internal::as_atomic(state).load<tbb::relaxed>();
            if( !(s & BUSY) ) { // no readers, no writers
                const state_t intended = (s & mask) | WRITER;
                if( tbb::internal::as_atomic(state).compare_and_swap<tbb::acquire>(intended, s)==s )
                    break; // successfully stored writer flag
                backoff.reset(); // we could be very close to complete op.
            } else if( !(s & (WRITER_PENDING | WRITER)) ) {
                // BUSY without WRITER means that at least one read lock is being held
                // no WRITER_PENDING means that __TBB_AtomicOR() will not be wasted
                __TBB_AtomicOR(&state, WRITER_PENDING); // TODO: relaxed
                mask = 0; // WRITER_PENDING now (probably) originated here
            }
        }
        ITT_NOTIFY(sync_acquired, this);
        return false;
    }

    //! Try to acquire write lock on the given mutex
    bool spin_rw_mutex_v3::internal_try_acquire_writer()
    {
        // for a writer: only possible to acquire if no active readers or writers
        const state_t s = tbb::internal::as_atomic(state).load<tbb::relaxed>();
        if( !(s & BUSY) ) { // no readers, no writers
            if( tbb::internal::as_atomic(state).compare_and_swap<tbb::acquire>(s | WRITER, s)==s ) {
                ITT_NOTIFY(sync_acquired, this);
                return true; // successfully stored writer flag
            }
        }
        return false;
    }


Going back to the original question, a write contender must repeatedly set the WRITER_PENDING flag, because it is just a flag and not a count. It is therefore "acceptable" for any writer to clear the flag at any time during ownership, even if it had never even tried to set it.

Whether it is also "expected" is a different matter. In the current situation all writers seem to clear it at both acquisition and release time (not downgrade), but this is still enough to avoid starvation of writers by readers. With my change, several writers may get in before the one that set the flag (or thinks it did) clears it, and the odds even out again.

Which behaviour is better maybe depends on the situation, and may require testing rather than arguing that it is sad if a long-suffering write contender is eclipsed by a newer contender that swoops in first. Life is unfair in TBB land, and if fairness is required you have to ask for it and pay the price (potentially less throughput performance), e.g., by using a queuing_rw_mutex instead.

But it's always interesting to think about what's going on here, and perhaps others can offer further insight.


Think about this for a moment before you object.

In the same cache line as the state flag that is CAS'ed, add a volatile int counter, initialized to 0. On a failed writer acquire, non-atomically increment the value; the result is the approximate contender number. Note that it is not necessary to have this atomically incremented, as the consequence of a race condition is relatively benign (two threads receiving the same contender number, which is no worse than it is now). The contender number can be used to bias the pecking order (you devise how). When the winner releases the lock it can observe the pecking order count, and when it equals its own count, simply zero it out. This too has a race condition, which is benign as well. The scheme should provide an approximate priority without the expense of CAS (or atomic ++/--).

Jim Dempsey

Much depends on the mechanics of prioritisation, I suppose, but increasing fairness with low overhead always seems attractive, at first sight anyway.

Still, after ample consideration ;-) , I have the following objection: TBB spin locks' backoff strategy causes LIFO tendencies, and this could easily translate into a severe reduction in performance if combined blindly with this proposal.

(Added) Note that the time of this posting shows when I initially reacted with "I object! (now thinking...)", and I don't remember after exactly how much consideration I replaced that with the more serious reply above. Also note that there is a cost to incrementing a ticket number, even if it is not as high as an atomic update (especially with implicit sequential consistency), and there may still be other unknown costs related to implicit fairness. I mainly intuit that it is beneficial to aggregate updates before further read-only accesses, but without necessarily aiming to preserve relative order among those updates. I'm less fond of actually measuring things myself, though... My rationale for that is that there would be too much confirmation bias. :-)

I am intuiting too...

Presumably most of the circumstances under which a lock is requested are from a peer thread (equal task priority). An example of this might be an update of, addition to, or removal of an object. As long as some progress is made, fairness is not an issue.

In other circumstances, you may have a recursive cascade type of thing where fairness (FIFO) is the last thing you need (e.g. producer overload), and in which LIFO order may be more beneficial. FIFO/LIFO with respect to thread/task access as opposed to item order.

In constructing, for example, an MPMC queue object, it would be advantageous to have two mutexes, so that a producer and a consumer can have full concurrency without interference. In this manner the queue could not get stuck in fill mode nor in empty mode. Each of these could then have a fairness bias (either FIFO or LIFO as desired).

BTW a few years ago Dmitry Vyukov had a good post here on one of the forums and/or blog. With a little bit of searching you or Maxime might be able to locate it.

Jim Dempsey

Thanks for all the insight.

My comment was more a theoretical one, as the use cases in our code base don't involve multiple writers, but I'll keep the suggestions in mind in case we have a situation that requires prioritizing multiple writers a little more over readers.

I read a few articles by Dmitry Vyukov some time ago about MPMC bounded queue implementation possibilities on his website, but I'll need to go over them again some time.

Thanks again.

Maxime H.

Since the cost of waiting for the flag setter (ignoring the occasional delusional thread that only thinks it set the flag) in my change above is unknown, I wouldn't recommend it at this time. We already have a mutex that can exhibit convoying while nobody is holding the lock (blocking of readers by a mere write contender), and not letting the first writer to gain access subsequently free up the mutex for readers would exacerbate that situation without an obvious upside of at least equal magnitude.

That said, I would recommend that the flag not be set if a write contender is blocked by another writer. Even though this doesn't let any class be starved by another (the flag is cleared when the current writer gives up the lock), it is an unnecessary read-modify-write in a situation where no starvation is being averted.

(2014-06-22 Added) Well, in the current implementation the "mere write contender" will either become a writer or give up its turn to another of its class, so an episode of convoying never goes to waste in that sense. But the concern is not just rhetorical: a blocked write contender might well be blocked long enough to decide to yield (considering that, typically, there are many read accesses compared to write accesses), and it might turn out that some further tuning to delay that moment would help there, I don't know...
