Episode 4 of the hands-on workshop (HOW) series on parallel programming and optimization with Intel® architectures introduces thread parallelism and the parallel framework OpenMP*.
Discussed topics include:
- Using threads to utilize multiple processor cores
- Coordination of thread and data parallelism
- Using OpenMP to create threads and team them up to process loops and trees of tasks
The OpenMP discussion covers controlling the number of threads, controlling variable sharing with clauses and scoping, loop scheduling modes, using mutexes to protect against race conditions, and a scalable approach to parallel reduction with thread-private variables.
The lecture part concludes with a discussion of the general approach to realizing opportunities for parallelism in computing applications.
The hands-on part demonstrates using OpenMP to parallelize a serial computation, illustrating for-loops, variable sharing, mutexes, and parallel reduction in an example application that performs numerical integration.