Introduction to Parallel Programming video lecture series – Part 03 “Finding Parallelism”

This lecture is the third part of the “Introduction to Parallel Programming” video series. It explains how dependence graphs can be used to identify opportunities for parallelism in code segments and applications, and offers examples of how to decide whether a domain decomposition or a task decomposition will work best. Dependence graphs can also reveal code that cannot be parallelized. The lecture finishes with computation examples and generalizations about which kinds of problems are more or less amenable to parallel solution, which viewers can apply to their own situations.
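The lecture walks through its own examples; as a rough, hypothetical illustration of the idea (not taken from the lecture itself), the C/OpenMP sketch below contrasts a loop whose iterations are independent, and can therefore be divided among threads in a domain decomposition, with a loop whose iterations form a dependence chain and must run in order. The array names and sizes are placeholders chosen for the example.

#include <stdio.h>
#include <omp.h>

#define N 1000000

static double a[N], b[N];

int main(void) {
    /* Independent iterations: each a[i] depends only on b[i], so the
       dependence graph has no edges between iterations and the loop
       can be split across threads (a domain decomposition). */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = 2.0 * b[i] + 1.0;
    }

    /* Loop-carried dependence: each a[i] needs a[i-1], so the
       dependence graph forms a chain and the iterations cannot be
       parallelized in this form. */
    a[0] = 1.0;
    for (int i = 1; i < N; i++) {
        a[i] = a[i - 1] + b[i];
    }

    printf("%f\n", a[N - 1]);
    return 0;
}

Compiled with OpenMP support (for example, gcc -fopenmp), the first loop scales with the number of threads, while the second illustrates the kind of code a dependence graph flags as not directly parallelizable.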

Running time: 12:24  

Note: The material presented in this lecture series has been taken from the Intel Software College multi-day seminar, “Introduction to Parallel Programming”, authored by Michael J. Quinn (Seattle University). The content has been reorganized and updated for the lectures in this series.
