parallel_for updating progress bar hangs

Hi,

I have a few files that can be read independently, so I decided to use parallel_for. During the file read I update a progress bar (a single progress bar shared by all the files), and the program hangs even after protecting the progress bar with a mutex.

Following is my code:

tbb::task_scheduler_init tbbInit;
parallel_for(tbb::blocked_range<size_t>(0, Files.size()), ReadFunctor(mReadData));

where mReadData is a vector containing the file names.

 

void ReadFunctor::operator()(const tbb::blocked_range<size_t>& range) const
{
  double progPerMeshRegion = 100.0 / range.size();
  for(size_t r = range.begin(); r != range.end(); ++r)
  {
    ReadData(mData[r], progPerMeshRegion);
  }
}

 

void ReadData(Database& meshData, double progress)
{
  // I do file reading and process the data here
  // :
  // :
  // :
  ParallelProgressUpdate::HandleProgress(progress, ACHAR("Done reading data.."));
}

 

bool ParallelProgressUpdate::HandleProgress(const double currProgress,
                                            const AString& msg)
{
    tbb::mutex::scoped_lock lock(progMonMutex);
    progBar->SetMessage(msg);
    progBar->SetProgress((int)currProgress);
    return true;
}

progBar is a progress bar that posts a message from within its SetMessage call.

 

So, in a nutshell, I use parallel_for to read multiple files. ReadData is called for each file, and at the end of ReadData I call HandleProgress to update the progress value and set the message in the progress bar.

 

I had 3 files. For the first one, HandleProgress completed successfully. For the second one, HandleProgress was called but SetMessage hung, leaving the mutex locked. I could not figure out what is going on inside SetMessage; the call simply never returns.

 

For the third one, HandleProgress was called but could not proceed because the mutex was still locked.

 

 

 

 

 
