Intel® Concurrent Collections for C++ 0.9 for Windows* and Linux*



Parallelism Without the Pain

Intel® Concurrent Collections for C++ simplifies parallelism while letting you exploit the full parallel potential of your application.

Easy parallelism

  • There is no need to think about lower-level parallelization techniques such as threading primitives or message passing; no need to understand pthreads, MPI, Windows threads, TBB,...
  • There is no need to think about different types of parallelism such as task, pipeline, fork-join, or data parallelism.
  • Intel® Concurrent Collections for C++ provides a separation of concerns between what the application means and how to tune it for a specific platform. The application code can be paired with isolated tuning code. This allows programmers to focus on each separately.


CnC yields quasi-linear scaling in these example applications


CnC yields quasi-linear scaling on thousands of cores in RTM-3dfd


CnC makes tuning a separate ingredient




  • The same source runs on Windows and Linux.
  • The same binary runs on shared memory multi-core systems and clusters of workstations. In fact, Intel® Concurrent Collections for C++ is a unified model for shared and distributed memory systems (as opposed to the MPI / OpenMP combination, for example).


  • Because Intel® Concurrent Collections for C++ provides a way to express an algorithm with minimal scheduling constraints, it is very efficient.
  • In addition, Intel® Concurrent Collections for C++ supports two types of tuning:
    • Runtime tuning makes the runtime more efficient for a specific application.
    • Application tuning makes the application itself more efficient with user-specified distribution of the work.
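The split between application logic and tuning hints can be sketched with a step-tuner. The fragment below is illustrative only: the step, context, and item-collection names (my_step, my_context, m_items) are hypothetical, and it assumes the step_tuner hooks compute_on and depends described in the product's tuning documentation:

```cpp
#include <cnc/cnc.h>

// Hypothetical application pieces, for illustration only.
struct my_context;
struct my_step { int execute( const int & tag, my_context & c ) const; };

// A tuner attaches scheduling hints to steps without touching the
// serial application logic in my_step::execute.
struct my_tuner : public CnC::step_tuner<>
{
    // Application tuning: map each step instance to a process
    // on distributed memory (simple round-robin by tag here).
    template< class Ctxt >
    int compute_on( const int & tag, Ctxt & ) const
    {
        return tag % numProcs();
    }

    // Runtime tuning: declare data dependencies up front so the
    // scheduler launches a step only once its inputs are available.
    template< class Ctxt, class DepConsumer >
    void depends( const int & tag, Ctxt & c, DepConsumer & dC ) const
    {
        if( tag > 0 ) dC.depends( c.m_items, tag - 1 );
    }
};
```

The tuner is attached by declaring the step collection as CnC::step_collection< my_step, my_tuner >; removing it changes performance, not program meaning.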


  • Intel® Concurrent Collections for C++ achieves scalable performance on a wide range of configurations from small multicore systems to large clusters.
  • No need to rewrite or recompile the application in order to target a new configuration.

The following downloads are available under the What If Pre-Release License Agreement.

Including required TBB bits (choose one of these if in doubt):
  • Linux* 64bit
  • Windows* 64bit
  • Windows* 32bit

Without TBB bits (requires an existing TBB >= 4.1 Update 1):
  • Windows* 64bit
  • Windows* 32bit

The Idea

Traditional approaches to parallelism ask the programmer to express the parallelism explicitly. This makes achieving parallelism unnecessarily hard and often ineffective. With Intel® Concurrent Collections for C++ the programmer does not decide what should run in parallel; instead, the programmer specifies the semantic dependencies of the algorithm and so defines only the ordering constraints: Concurrent Collections (CnC) lets the programmer define what cannot go in parallel.

The model allows the programmer to specify high-level computational steps, including their inputs and outputs, but not when or where those steps execute. The when and where are handled by the runtime and/or an optional tuning layer. Code within the computational steps is written using standard serial constructs of the C++ language. Data is either local to a computational step or explicitly produced and consumed by steps. An application in this programming model supports multiple styles of parallelism (e.g., data, task, and pipeline parallelism).

While the interface between the computational steps and the runtime system remains unchanged, a wide range of runtime systems may target different architectures (e.g., shared or distributed memory) or support different scheduling methodologies (e.g., static or dynamic). Intel® Concurrent Collections for C++ provides a parallel runtime system for both shared and distributed memory systems. Our goal in supporting a strict separation of concerns between the specification of the application and the optimization of its execution on a specific architecture is to help ease the transition to parallel architectures for programmers who are not parallelism experts. For the performance results we were able to achieve with Intel® Concurrent Collections for C++, please read here.

What's new in version 0.9?

  • New step/thread affinity control: use step-tuner to assign affinity of steps to threads
  • Added thread-pinning: pin threads to CPU-cores
  • New cancellation feature for step-tuners: cancel individual steps (in flight or yet to come), all steps, or custom cancellation sets
  • Added tuning capabilities for CnC::parallel_for: switch on/off checking dependencies, priority, affinity, depends and preschedule
  • New support for Intel® Xeon Phi™ (MIC): native and mixed CPU/MIC
  • Improved instrumentation hooks for ITAC: Using collection names as given in collection-constructors
  • Cleaner/simpler hashing and equality definition for custom tags
  • New samples (UTS, NQueens (with cancellation), parsec/dedup, Floyd-Warshall) and improved samples
  • Closed memory leak on distributed memory
  • Other bug fixes etc.
  • Added support for Visual Studio* 2012, dropped support for Visual Studio* 2005 (Windows* only)
  • Require TBB 4.1 (was 4.0)
  • Switched to gcc 4.3 ABI (Linux* only)

See also the Release Notes.

Documentation and Tutorials

Papers, Presentations, Research

Discussions, Report Problems or Leave Feedback

To stay in touch with the Intel® Concurrent Collections team and the community, we provide an email list that you can subscribe to or just watch online:

Alternatively, to report a problem or leave feedback on this product, please visit the "Whatif Alpha Forum" to participate in forum discussions about Intel® Concurrent Collections:

For more information regarding compiler optimization, see our Optimization Notice.