The lecture given here is the ninth part in the “Introduction to Parallel Programming” video series. This part describes how to design and implement a task decomposition solution, using the 8 Queens problem as an illustrative example. Multiple approaches are presented, along with the pros and cons of each. After an approach is chosen, code modifications using OpenMP are presented. Potential data race errors involving a shared stack data structure that holds board configurations (the tasks to be processed) are identified, and a solution is found and implemented.
The lecture given here is the eighth part in the “Introduction to Parallel Programming” video series. This part describes how the OpenMP task pragma works and how it differs from the previous worksharing pragmas. A small linked list processing code example is used to illustrate how independent operations within a while loop can be parallelized. Since recursive calls that are independent of one another can be executed in parallel, the OpenMP task construct is used to parallelize the computation of a desired member of the Fibonacci sequence.
The lecture given here is the fourth part in the “Introduction to Parallel Programming” video series. This part provides the viewer with a description of the shared-memory model of parallel programming. Implementation strategies for domain decomposition and task decomposition problems using threads within a shared-memory execution environment are illustrated. Simple code examples further support threaded implementations of parallel algorithms, especially with regard to deciding when variables should be shared and when they must be made private to threads for correctness.
The lecture given here is the third part in the “Introduction to Parallel Programming” video series. This part explains how dependence graphs can be used to identify opportunities for parallelism in code segments and applications. Examples of how to decide whether a domain or task decomposition will work best are offered. Dependence graphs can also be used to identify code that cannot be parallelized.
The lecture given here is the second part in the “Introduction to Parallel Programming” video series. This part gives the viewer strategies for identifying opportunities for parallelism in code segments and applications. Three methods for dividing computation into independent work (Domain Decomposition, Task Decomposition, and Pipelining) are illustrated. The first two methods are examined further in later parts of the series and in lab exercises.
Running time: 8:47
This hands-on exercise lab, Quicksort, is a programming lab associated with the video lecture “Implementing a Task Decomposition” (Part 9) from the “Introduction to Parallel Programming” series. This problem seeks to parallelize the recursive implementation of the Quicksort algorithm with a task decomposition solution. The lab contents include source files and written instructions to guide the programmer in converting the serial source code into an equivalent parallel version using OpenMP.
Since the kickoff of the High School Parallelism bootcamp this summer, I've received several requests for a write-up of the five role-playing activities we used.
The activities put students in the place of processor cores and had them perform tasks in parallel. These activities proved popular with many of the students at the camp; however, some of the more advanced students felt the exercises could seem childish.