Parallel Programming Implementations
- A high-level parallel framework like Intel® oneAPI Threading Building Blocks (oneTBB) or OpenMP*. Of these parallel frameworks for native code, oneTBB supports C++ programs and OpenMP supports C, C++, or Fortran programs. For managed code on Windows* OS such as C#, use the Microsoft Task Parallel Library* (TPL). C# and .NET support is deprecated starting Intel® Advisor 2021.1.
- A low-level threading API like Windows* threads or POSIX* threads. In this case, you directly create and control threads at a low level. These implementations may not be as portable as high-level frameworks.
- Simplicity: You do not have to code all the detailed operations required by the threading APIs. For example, the OpenMP* #pragma omp parallel for (or Fortran !$OMP PARALLEL DO) and the oneTBB parallel_for() are designed to make it easy to parallelize a loop (see Reinders Ch. 3). With frameworks, you reason about tasks and the work to be done; with threads, you also need to decide how each thread will do its work.
- Scalability: The frameworks select the best number of threads to use for the available cores, and efficiently assign the tasks to the threads. This makes use of all the cores available on the current system.
- Loop Scalability: oneTBB and OpenMP assign contiguous chunks of loop iterations to existing threads, amortizing the threading overhead across multiple iterations (see oneTBB grain size: Reinders Ch. 3).
- Automatic Load Balancing: oneTBB and OpenMP have features for automatically adjusting the grain size to spread work amongst the cores. In addition, when the loop iterations or parallel tasks do uneven amounts of work, the oneTBB scheduler will dynamically reschedule the work to avoid idle cores.
Available High-Level Parallel Frameworks
- Intel® oneAPI Threading Building Blocks (oneTBB)
- OpenMP*
- Microsoft Task Parallel Library* (Windows* OS only)