Queues are widely used in parallel programs to buffer consumers from producers. Before using an explicit queue, however, consider using parallel_do or pipeline instead. These options are often more efficient than queues for the following reasons:
A queue is inherently a bottleneck because it must maintain first-in, first-out order.
A thread that is popping a value may have to wait idly until another thread pushes that value.
A queue is a passive data structure. If a thread pushes a value, some time may pass before another thread pops it, and in the meantime the value (and whatever it references) goes "cold" in cache. Worse yet, if a different thread pops the value, the value (and whatever it references) must be moved to that thread's processor.
In contrast, parallel_do and pipeline avoid these bottlenecks. Because their threading is implicit, they can keep worker threads busy with other work until an item shows up. They also try to keep items hot in cache. For example, when a new work item is added to a parallel_do, it stays local to the thread that added it unless an idle thread steals it before the "hot" thread can process it. This way, items are more often processed by the thread whose cache already holds them.