gdb backtrace - concurrent_vector issue or my code?

AJ:

Hey all,

I'm getting a really strange problem with my code. I have an std::map that maps doubles to concurrent_vector pointers.

I ran into this issue last night and would appreciate it if you have time to check what this backtrace means. It looks like the problem is in the internal TBB code, but it could also be an issue in mine.

(gdb) backtrace
#0 0x0804ce2f in __TBB_machine_load_store::load_with_acquire (location=@0xc) at /usr/include/tbb/machine/linux_ia32.h:128
#1 0x0804ce4b in __TBB_machine_load_with_acquire (location=@0xc) at /usr/include/tbb/machine/linux_ia32.h:155
#2 0x0804ce5f in tbb::internal::atomic_impl::operator unsigned int (this=0xc) at /usr/include/tbb/atomic.h:226
#3 0x0804ce76 in tbb::concurrent_vector<...>::size (this=0x0) at /usr/include/tbb/concurrent_vector.h:597
#4 0x0804ce90 in tbb::concurrent_vector<...>::end (this=0x0) at /usr/include/tbb/concurrent_vector.h:626
#5 0x0805b381 in MasterScheduler::run (this=0x8062ee0) at master_scheduler.h:272
#6 0x0804b0c6 in main (argc=2, argv=0xbf9d2a04) at driver.cpp:138

I'm getting segfaults btw.

AJ

Anton Malakhov (Intel):

Look, the call to the tbb::concurrent_vector<...>::end() method has this=0x0. You should check the reference or pointer that is used to make that call.

AJ:

Ah... I didn't know how to use gdb well enough to spot the this=0x0. I'll have to learn gdb in more detail; this is my first real adventure with it.

I still don't understand why this is happening. Here is another backtrace and the code around it. I'm using tbb::concurrent_vector<...>::range.

#0 0x0804ce95 in __TBB_machine_load_store::load_with_acquire (location=@0xc) at /usr/include/tbb/machine/linux_ia32.h:128
#1 0x0804ceb1 in __TBB_machine_load_with_acquire (location=@0xc) at /usr/include/tbb/machine/linux_ia32.h:155
#2 0x0804cec5 in tbb::internal::atomic_impl::operator unsigned int (this=0xc) at /usr/include/tbb/atomic.h:226
#3 0x0804cedc in tbb::concurrent_vector<...>::size (this=0x0) at /usr/include/tbb/concurrent_vector.h:597
#4 0x0804cef6 in tbb::concurrent_vector<...>::end (this=0x0) at /usr/include/tbb/concurrent_vector.h:626
#5 0x0804cf33 in tbb::concurrent_vector<...>::range (this=0x0, grainsize=1)
at /usr/include/tbb/concurrent_vector.h:586
#6 0x0805b46b in MasterScheduler::run (this=0x8062f80) at master_scheduler.h:270
#7 0x0804b0c6 in main (argc=2, argv=0xbf9751a4) at driver.cpp:138

This is master_scheduler.h:270:
tbb::parallel_for(timeEventsVector->range(), runner, affinity_partitioner);

And here is the definition of the runner object (the element type of the concurrent_vector was lost in the forum formatting; ExecutionObject* is assumed below from the class name and the execute() call):

class ParallelExecutionObjectRunner
{
public:
    ParallelExecutionObjectRunner() { }

    // parallel_for passes the range by const reference, and the call
    // operator must itself be const.
    void operator()(const tbb::concurrent_vector<ExecutionObject*>::range_type& range) const
    {
        for (tbb::concurrent_vector<ExecutionObject*>::range_type::iterator i = range.begin();
             i != range.end(); ++i)
        {
            (*i)->execute();
        }
    }
};

robert-reed (Intel):

Here's an exercise for your effort to learn gdb. Can you set a breakpoint on the parallel_for call at master_scheduler.h:270 and look at timeEventsVector? Stack frame 5 in the listing above suggests that the call to concurrent_vector::range is getting a null "this", which looks like it should be the timeEventsVector object. That seems to be the first visible point of failure in this backtrace; from there, the null pointer propagates through the stack, affecting the size() and end() calls.

AJ:

Got it... it took a long time to find out what was wrong, but basically I called map<...>::upper_bound(0), which doesn't return the element *at* key 0 (it returns the first element with a key strictly greater than 0), so the lookup missed. Changing it to map<...>::upper_bound(-1) fixed it. It took a while with gdb because I couldn't see the problem, but I've got it now, and I learned gdb.

Thanks for the help... I kept looking at the code and thinking it was correct.
