affinity_partitioner shared between for loops?

Is the following a valid use, if the first parallel_for works on data that the second parallel_for will use? I haven't seen any example of an affinity_partitioner shared between loops, which is why I was a bit unsure.

tbb::affinity_partitioner ap;

std::vector<int> input;
std::generate_n(std::back_inserter(input), 4096, []{ return rand(); });

std::vector<int> result(input.size());
std::vector<int> result2(input.size());

// Do some calculations
tbb::parallel_for(tbb::blocked_range<size_t>(0, input.size()),
    [&](const tbb::blocked_range<size_t>& r)
{
    for (size_t n = r.begin(); n != r.end(); ++n)
        result[n] = do_some_calc(input[n]); // temporal store
}
, ap);

// Do some more calculations
tbb::parallel_for(tbb::blocked_range<size_t>(0, input.size()),
    [&](const tbb::blocked_range<size_t>& r)
{
    for (size_t n = r.begin(); n != r.end(); ++n)
        result2[n] = do_some_calc(result[n]);
}
, ap);

Basically, what I want is for the second parallel_for to map its ranges the same way as the first parallel_for, in order to fully utilize the local caches.

www.casparcg.com

Two parallel_for loops over identical ranges using a common affinity_partitioner instance will behave as expected even if the loop kernels differ. Any failure to improve performance would therefore be attributable to other causes that would also occur with identical loop kernels, such as the data not fitting in cache.

So I don't see a problem here, except that you might want to hoist r.end() out of the loop to allow possibly dramatic optimisation:

for(size_t n = r.begin(), n_end = r.end(); n != n_end; ++n)
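
Applied to the first loop in the original post, that would look roughly like this (do_some_calc is the poster's hypothetical kernel):

tbb::parallel_for(tbb::blocked_range<size_t>(0, input.size()),
    [&](const tbb::blocked_range<size_t>& r)
{
    // hoist r.end() once so the compiler need not reload it on every iteration
    for (size_t n = r.begin(), n_end = r.end(); n != n_end; ++n)
        result[n] = do_some_calc(input[n]);
}
, ap);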

An example of using the same affinity_partitioner object in consecutive loops exists in our Seismic sample. See ParallelUpdateUniverse in universe.cpp.

>>Basicly what I want is that second parallel_for will map its ranges in the same way as the first parallel_for

Why not consider one parallel_for with two enclosed for loops? That will assure the same core completes each loop.
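
A minimal sketch of that fused-loop idea, assuming each result2[n] depends only on result[n] (do_some_calc is the hypothetical kernel from the original post):

tbb::affinity_partitioner ap;
tbb::parallel_for(tbb::blocked_range<size_t>(0, input.size()),
    [&](const tbb::blocked_range<size_t>& r)
{
    // first pass over this subrange
    for (size_t n = r.begin(), n_end = r.end(); n != n_end; ++n)
        result[n] = do_some_calc(input[n]);
    // second pass reuses result[] while it is still hot in this core's cache
    for (size_t n = r.begin(), n_end = r.end(); n != n_end; ++n)
        result2[n] = do_some_calc(result[n]);
}
, ap);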

Jim Dempsey

www.quickthreadprogramming.com

"Why not consider one parallel_for with two enclosed for loops? That will assure the same core completes each loop."
That would combine the parallel_for and the serial for into one. But can the loop kernels really be taken at face value here, or are they merely standing in for something that does involve a barrier?

Yes, it is a simplified sample. I have some stuff between the loops in my real code.

www.casparcg.com

That was my assumption...

Some feedback is always nice: how did the for loop header rewrite work out for you?

Hi Raf,

n_end = r.end() didn't make any difference; I think my compiler optimizes it to the same code. At least it seems that way from the disassembly.

www.casparcg.com

That's nice, of course, but it doesn't always work out like that. Please have a real problem next time. :-)
