Intel® Threading Building Blocks

Join the Intel® Parallel Studio XE 2018 Beta program

We would like to invite you to participate in the Intel® Parallel Studio XE 2018 Beta program. In this beta test, you will gain early access to new features and analysis techniques. Try them out and tell us what you love and what could be improved, so we can make our products better for you.

Registration is easy. Complete the pre-beta survey, register, and download the beta software:

Intel® Parallel Studio XE 2018 Pre-Beta survey

Quick Poll/Survey on Intel® TBB

Dear Intel® TBB user,

I would love to hear your candid thoughts about the following question on TBB.

On a scale of 1-5, with 5 being most likely, how likely is it that you would recommend Intel® TBB to a peer or colleague?

Please leave your responses below, and don’t forget to provide a reason for your score, be it high or low.

Thanks

Sharmila

Product Manager


Cancelling/Stopping/Killing a Running Task

Hi,

I'm a newbie when it comes to using Intel TBB.

I am working on a Windows MFC application that displays interactive dashboards, so user responsiveness is critical. The data is complex, not so much in terms of calculation (although some is involved) but in terms of the amount of data that needs to be processed. Each part of the dashboard can take several minutes to fully complete, so information is sent to the GUI level via boost::signals2 as it becomes available, so that the screen can refresh and appear fast.
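
For context, one common cooperative-cancellation pattern in TBB is to run the background work through a tbb::task_group and cancel it when the user navigates away. The sketch below assumes a hypothetical load_dashboard_part() function; it only illustrates the API, it is not the poster's code.

// Sketch only: cooperative cancellation with tbb::task_group.
// load_dashboard_part() is a hypothetical stand-in for the poster's work.
#include <atomic>
#include "tbb/task_group.h"

tbb::task_group dashboard_work;
std::atomic<bool> stop_requested(false);

void load_dashboard_part(int part)
{
    for (int chunk = 0; chunk < 1000 && !stop_requested; ++chunk) {
        // ... process one chunk for this part and emit a boost::signals2 update ...
    }
}

void start_dashboard()
{
    stop_requested = false;
    for (int part = 0; part < 4; ++part)
        dashboard_work.run([part] { load_dashboard_part(part); });
}

void stop_dashboard()             // e.g. called when the user closes the view
{
    stop_requested = true;        // lets tasks that are already running exit early
    dashboard_work.cancel();      // tasks that have not started yet are skipped
    dashboard_work.wait();        // always wait before reusing or destroying the group
}

Note that cancel() does not interrupt a task that is already executing, which is why the flag is polled inside the work loop.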

Should the key be preserved after concurrent_hash_map insert()?

A quick question:

When using tbb::concurrent_hash_map, does the memory used by the key during insert() need to be preserved? I assume it does, because TBB may need it for equal(), but I would like a definitive answer. For example:

void add(const char *string, uint32_t type, void *data) {
    MyKey key;
    key.string = string;
    key.type = type;

    // concurrent_hash_map::insert takes an accessor plus the key;
    // the mapped value is then set through the accessor.
    MapType::accessor acc;
    map->insert(acc, key);
    acc->second = data;
}
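
For context, here is a minimal sketch of the surrounding types the snippet implies (MyKey's layout and the hash-compare shown here are assumptions, since the thread does not include them). It illustrates the point behind the question: concurrent_hash_map copies the MyKey object into the table on insert(), but that copy only duplicates the char pointer, so the characters it points to must remain valid while the entry exists, because hash() and equal() may dereference them on later lookups (unless the key owns its storage, e.g. a std::string).

// Sketch only: MyKey, MyHashCompare, and MapType are assumed names.
#include <cstdint>
#include <cstring>
#include "tbb/concurrent_hash_map.h"

struct MyKey {
    const char *string;   // not owned: only the pointer is stored in the map
    uint32_t    type;
};

struct MyHashCompare {
    static size_t hash(const MyKey &k) {
        size_t h = k.type;
        for (const char *p = k.string; *p; ++p)
            h = h * 31 + static_cast<unsigned char>(*p);
        return h;
    }
    static bool equal(const MyKey &a, const MyKey &b) {
        // dereferences the stored pointer, so the characters must still be valid
        return a.type == b.type && std::strcmp(a.string, b.string) == 0;
    }
};

typedef tbb::concurrent_hash_map<MyKey, void *, MyHashCompare> MapType;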

Concurrent LRU cache

Once an item is accessed in the LRU cache, is there a way to make it unavailable to subsequent requests? The use case is as follows:

1. I want to access an element from the LRU cache.

2. Use the accessed element; while it is in use, it should not be accessible to other threads.

3. Return the used item to the LRU cache, which makes it available again for future requests.

Currently I am not sure how to do step 2. It is not necessary that I use an LRU cache data structure; maybe there is some other data structure that can help me do this. Any help is appreciated.
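
For context, one possible way to get the exclusivity described in step 2 is to treat the container as a check-out/check-in pool rather than a true LRU cache: popping an item from a tbb::concurrent_bounded_queue hides it from other threads, and pushing it back makes it available again. A rough sketch, with Item and init_pool() as placeholder names:

// Sketch of a check-out/check-in pool (not an LRU cache): a popped item is
// invisible to other threads until it is pushed back.
#include <cstdio>
#include "tbb/concurrent_queue.h"

struct Item {
    int id;
    // ... payload ...
};

tbb::concurrent_bounded_queue<Item *> pool;

void init_pool(int n) {
    for (int i = 0; i < n; ++i)
        pool.push(new Item{i});          // populate the pool up front
}

void worker() {
    Item *item = 0;
    pool.pop(item);                      // 1. check out: blocks until an item is free;
                                         //    no other thread can obtain this item now
    std::printf("using item %d\n", item->id);   // 2. exclusive use
    pool.push(item);                     // 3. check in: visible to other threads again
}

The trade-off is that this keeps no access-order information, so eviction would have to be handled separately if LRU behaviour is still needed.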

parallel_for and thread_monitor::launch: _beginthreadex failed

Hi, I've recently been having trouble running parallel_for loops. I'm using TBB as part of Parallel Studio XE 2017 Cluster Edition. With no changes to my code, I have been receiving the error message "thread_monitor::launch: _beginthreadex failed" whenever my program reaches my parallel_for loop, and it crashes the program. This has recently been happening in my other projects too. My project involves capturing images with multiple cameras, with each camera running in parallel with the others.
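
For context, the message indicates that the Windows call TBB uses to create worker threads (_beginthreadex) failed, which usually means the OS refused to create another thread, e.g. because the process has run low on threads, handles, or address space. One common first experiment, assuming thread-count exhaustion, is to cap TBB's worker pool explicitly; the thread count of 4 below is an arbitrary example value:

// Sketch: capping the TBB worker pool to rule out thread-count exhaustion.
#include "tbb/task_scheduler_init.h"
#include "tbb/parallel_for.h"

int main()
{
    tbb::task_scheduler_init init(4);   // at most 4 TBB worker threads for this thread

    tbb::parallel_for(0, 1000, [](int i) {
        // ... per-index work, e.g. one image-processing step ...
        (void)i;
    });
    return 0;
}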

What is: Unexpected exception or cancellation data in the masters dummy task

When I get this message as part of an assert / abort ...

"Unexpected exception or cancellation data in the masters dummy task"

What is a likely cause? What tricks might I use to try to diagnose this further and maybe find the root cause?

It sounds like a master task has been unreasonable about allowing a task to finish gracefully, but that's just a guess on my part.

Any hints or elaborations or guesses would be appreciated.
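
For context, the "dummy task" appears to be TBB's implicit root task for a master (application) thread, so the assertion suggests that leftover exception or cancellation state was found attached to it. One general diagnostic step is to make sure exceptions thrown inside a parallel region are caught on the calling thread rather than allowed to escape; a minimal sketch, with do_work() as a placeholder:

// Sketch: surfacing exceptions from a parallel region on the calling (master)
// thread instead of letting them escape.  do_work() is a placeholder.
#include <cstdio>
#include <stdexcept>
#include "tbb/parallel_for.h"

void do_work(int i)
{
    if (i == 42)                                   // placeholder failure
        throw std::runtime_error("bad input at 42");
}

int main()
{
    try {
        tbb::parallel_for(0, 1000, [](int i) { do_work(i); });
    } catch (const std::exception &e) {
        // TBB cancels the remaining iterations and rethrows on the master
        // thread; catching here keeps the failure visible and easy to log.
        std::printf("parallel_for failed: %s\n", e.what());
    }
    return 0;
}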
