Crash Course on Multi-Threading with OpenMP*

Overview

Episode 4 of the Hands-On Workshop (HOW) series on parallel programming and optimization with Intel® architectures introduces thread parallelism and the OpenMP* parallel programming framework.

Topics discussed include:

  • Using threads to utilize multiple processor cores
  • Coordinating thread and data parallelism
  • Using OpenMP to create threads and team them up to process loops and trees of tasks (see the sketch below)
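The minimal C sketch below is not taken from the workshop materials; it simply illustrates the two basic constructs named above: a parallel region that spawns a team of threads, and a parallel for-loop whose iterations are divided among that team. Compile with -qopenmp (Intel compilers) or -fopenmp (GCC).

#include <stdio.h>
#include <omp.h>

int main(void) {
    /* Create a team of threads; every thread executes the region. */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }

    /* Team the threads up to share the iterations of a loop. */
    double a[1000];
    #pragma omp parallel for
    for (int i = 0; i < 1000; i++)
        a[i] = 2.0 * (double)i;

    printf("a[999] = %f\n", a[999]);
    return 0;
}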

The OpenMP discussion describes controlling the number of threads, controlling variable sharing with clauses and scoping, loop scheduling modes, using mutexes to protect against race conditions, and a scalable approach to parallel reduction with thread-private variables.
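As a rough illustration of how these constructs typically appear in code (this fragment is an assumption, not part of the workshop materials): omp_set_num_threads() controls the team size, default(none)/shared() control variable sharing, schedule() selects the loop scheduling mode, a critical section acts as a mutex around a shared update, and a reduction clause gives each thread a private partial sum that is combined at the end of the loop.

#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    double sum_critical = 0.0, sum_reduction = 0.0;

    omp_set_num_threads(4);  /* request a team of 4 threads */

    /* Variable sharing is controlled explicitly: default(none) forces every
       variable to be scoped, and the shared update is protected by a
       critical section (a mutex), which is correct but serializes the update. */
    #pragma omp parallel for schedule(static) default(none) shared(sum_critical)
    for (int i = 0; i < N; i++) {
        double x = (double)i;      /* declared inside the loop, so private */
        #pragma omp critical
        sum_critical += x;
    }

    /* Scalable alternative: reduction(+: ...) gives every thread a private
       partial sum and combines the copies when the loop ends. */
    #pragma omp parallel for schedule(guided) reduction(+: sum_reduction)
    for (int i = 0; i < N; i++)
        sum_reduction += (double)i;

    printf("critical: %.0f  reduction: %.0f\n", sum_critical, sum_reduction);
    return 0;
}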

The lecture part concludes with a discussion of the general approach to realizing opportunities for parallelism in computing applications.

The hands-on part demonstrates the use of OpenMP to parallelize a serial computation, illustrating parallel for-loops, variable sharing, mutexes, and parallel reduction in an example application that performs numerical integration.
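The hands-on code itself is not reproduced here, but the pattern it exercises can be sketched as follows, assuming the classic example of integrating 4/(1+x²) over [0, 1] with the midpoint rule to approximate π. The reduction clause is the scalable replacement for a mutex-protected shared accumulator.

#include <stdio.h>

int main(void) {
    const long n = 100000000L;          /* number of integration intervals */
    const double dx = 1.0 / (double)n;
    double sum = 0.0;

    /* Midpoint-rule integration of 4/(1+x*x) on [0, 1]; the exact value is pi.
       reduction(+: sum) gives each thread a private partial sum, so no mutex
       is needed around the accumulation. */
    #pragma omp parallel for reduction(+: sum)
    for (long i = 0; i < n; i++) {
        const double x = ((double)i + 0.5) * dx;
        sum += 4.0 / (1.0 + x * x);
    }

    printf("pi ~= %.15f\n", sum * dx);
    return 0;
}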
