Download Link - MP3 Audio File (Small):
First the News:
Threading Challenge Phase 2 Problem 1 began on Monday, August 9, 2010, and is due August 30, 2010 at 12:00 PM (Pacific Daylight Time).
Get ready to attend the Ct Tutorial to be given by Intel's Dr. Michael Klemm at the Nineteenth International Conference on Parallel Architectures and Compilation Techniques (PACT), September 11-15, 2010, in Vienna, Austria. Registration for the conference is well underway, and you can even register during the week of the conference.
Superscalar Programming 101 (Matrix): In this 5-part article, ISN Black Belt Jim Dempsey takes a small, well-known algorithm, shows a common approach to parallelizing it, follows with a better one, and lastly produces a fully cache-sensitized approach. Readers will learn a methodology for interpreting test-run statistics and how to improve their code using those interpretations.
Intel Developer Forum Returns to San Francisco, Sept. 13-15
SC10 - The Super Computing Conference is being held November 13-19, 2010 in New Orleans.
If you have questions you'd like to see us discuss, ideas for show topics, or just want to send fan mail, send email to email@example.com
On Today's Show:
Dr. Alexandra Fedorova, Assistant Professor, School of Computing Science, Simon Fraser University
Dr. Fedorova leads the Systems Research Group at SFU, which is a part of the SyNAr (Systems, Networking and Architecture) lab that she co-founded. Her research focuses on system design for multicore processors and on parallel computing.
On the show, Dr. Fedorova discussed her paper "Managing Contention for Shared Resources on Multicore Processors," which she co-authored with Sergey Blagodurov and Sergey Zhuravlev, also of Simon Fraser University. The paper presents research showing that contention for shared caches, memory controllers, and interconnects can be alleviated by contention-aware scheduling algorithms.
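A contention-aware scheduler of the kind the paper discusses needs some measurable proxy for how "contentious" each thread is, and a policy for spreading the most contentious threads across memory domains. Work in this area commonly uses the last-level-cache miss rate as that proxy. The sketch below is only an illustration of that general idea, not the paper's actual algorithm; the function name and the miss-rate numbers are invented for the example.

```python
def assign_to_domains(miss_rates, n_domains=2):
    """Distribute threads across memory domains so that the most
    cache-intensive threads (highest LLC miss rate) do not share a domain.

    miss_rates: dict mapping thread name -> observed LLC miss rate
                (an illustrative, made-up metric here).
    Returns: a list of n_domains lists of thread names.
    """
    # Rank threads from most to least memory-intensive.
    ranked = sorted(miss_rates, key=miss_rates.get, reverse=True)
    domains = [[] for _ in range(n_domains)]
    # Deal threads out round-robin: the intensive threads land in
    # different domains, and each is later joined by a milder one.
    for i, thread in enumerate(ranked):
        domains[i % n_domains].append(thread)
    return domains

# Hypothetical miss rates for the four SPEC CPU2006 applications
# mentioned on the show (values invented for illustration).
rates = {"soplex": 40.0, "sphinx": 30.0, "gamess": 2.0, "namd": 1.0}
print(assign_to_domains(rates))
# The two memory-intensive applications (soplex, sphinx) end up in
# separate memory domains.
```

On a real system the miss rates would come from hardware performance counters (e.g., via `perf` on Linux) rather than a hard-coded dictionary.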
From the paper, "Modern multicore systems are designed to allow clusters of cores to share various hardware structures, such as LLCs (last-level caches; for example, L2 or L3), memory controllers, and interconnects, as well as prefetching hardware. We refer to these resource-sharing clusters as memory domains, because the shared resources mostly have to do with the memory hierarchy. Figure 1 provides an illustration of a system with two memory domains and two cores per domain.
Threads running on cores in the same memory domain may compete for the shared resources, and this contention can significantly degrade their performance relative to what they could achieve running in a contention-free environment. Consider an example demonstrating how contention for shared resources can affect application performance. In this example, four applications (Soplex, Sphinx, Gamess, and Namd) from the SPEC (Standard Performance Evaluation Corporation) CPU 2006 benchmark suite run simultaneously on an Intel Quad-Core Xeon system similar to the one depicted in figure 1.
As a test, we ran this group of applications several times under three different schedules, each time with a different pairing of applications sharing a memory domain. The three pairing permutations afforded each application an opportunity to run in the same memory domain with each of the other three applications:
1. Soplex and Sphinx ran in a memory domain, while Gamess and Namd shared another memory domain.
2. Sphinx was paired with Gamess, while Soplex shared a domain with Namd.
3. Sphinx was paired with Namd, while Soplex ran in the same domain with Gamess.
Figure 2 contrasts the best performance of each application with its worst performance. The performance levels are indicated in terms of the percentage of degradation from solo execution time (when the application ran alone on the system), meaning that the lower the numbers, the better the performance."
Dr. Fedorova was awarded a Google research award to address the issues of power consumption and efficiency in data centers. Data centers are becoming an increasingly important part of the worldwide computing infrastructure. They offer a promise of scalability, reliability, and manageability for ever-more prevalent online services. The problem is that data centers consume inordinate amounts of power: more than $7.2 billion is spent every year on energy consumption in US data centers alone.
Dr. Fedorova and her students have been designing innovative solutions to improve the energy efficiency of virtual machine technology (a key technology for data centers) and to enable software to use modern multicore hardware more effectively. The research award from Google recognizes the importance of this work.
Coming up next on Parallel Programming Talk...
Introducing Kathy Farrel, the new Parallel Programming Community Manager
Date/Time: 8/24/2010 at 8:00 AM Pacific - Watch Live on Intel Software Network TV
Set your alarm to watch Parallel Programming Talk every other Tuesday at 8:00AM PT.