Introduction to Parallel Programming for Shared Memory Parallelism (Intel)

This two-day course introduces concepts and approaches common to all implementations of parallel programming for shared-memory systems. Starting with foundation principles, topics include recognizing parallelism opportunities, dealing with sequential constructs, using threads to implement data and functional parallelism, discovering dependencies and ensuring mutual exclusion, analyzing and improving threaded performance, and choosing an appropriate threading model for implementation. The course uses presentations, walk-through labs, and hands-on lab exercises. While the lab exercises are done in C using OpenMP*, the concepts carry over to other threading models.

This course was developed in collaboration with Prof. Michael Quinn of Oregon State University. Prof. Quinn is the author of seven books, including Parallel Programming in C with MPI and OpenMP*, published by McGraw-Hill in June 2003.

Course Objectives

After completing this course, you should be able to:

  • Recognize opportunities for concurrency
  • Use basic implementations for domain and task parallelism
  • Address matters concerning threading correctness and performance

Course Agenda

  • Recognizing parallelism
  • Shared memory and threads
  • Implementing domain decompositions
  • Confronting race conditions
  • Implementing task decompositions
  • Analyzing parallel performance
  • Improving parallel performance
  • Choosing an appropriate thread model

Day 1 Agenda:

  • 0900 - Introductions
  • 0930 - Recognizing Potential Parallelism
  • 1100 - Shared-Memory Model and Threads
  • 1200 - Lunch
  • 1300 - Implementing Domain Decompositions
  • 1500 - Confronting Race Conditions

Day 2 Agenda:

  • 0900 - Implementing Task Decompositions
  • 1100 - Analyzing Parallel Performance
  • 1200 - Lunch
  • 1300 - Improving Parallel Performance
  • 1500 - Choosing the Appropriate Thread Model

Downloads are available under the Creative Commons license.

