Suppose I have a sequence of data chunks and several processors that have to operate on each chunk sequentially. The order in which the processors operate on a chunk is dynamic and may depend on the chunk itself, but I can always tell which processor should be the first to process any given chunk. The chunks can be processed in parallel, but each chunk must only be accessed from a single thread at a time (it doesn't support thread synchronization). I want to parallelize this so that the data chunks are processed concurrently.
At first glance, this fits the parallel_pipeline class quite nicely, but here's the problem. Some of the processors must suspend processing of a chunk and wait for an event before continuing (for instance, blocking IO in another thread or some other external event). While the chunk is suspended, the processor can operate on other chunks, so the worker thread doesn't have to block. When the event occurs, the processor must resume processing of the suspended chunk from where it stopped. So the questions are:
1. Is it possible to suspend a task or a pipeline filter without blocking the worker thread?
2. If yes, how do I resume the processing?
3. Will performance suffer greatly at these task switches? I presume that such suspend/resume operations will be quite frequent.