DAC, part 1: the NVidia tutorial, an ecumenical approach

This week's Design Automation Conference, in Anaheim, included a full-day tutorial called "Programming Massively Parallel Processors: The NVIDIA Experience". Since my own talk was not until the next day, I went ahead and enrolled.

As expected, this covered the CUDA programming model in detail -- handling host-device communication ("device" being the NVidia processor, the G80 in this case), C syntax extensions, the memory model, performance considerations, and so on. Though the format was lecture-only (i.e., no labs), the day was worthwhile and informative. Two lecturers tag-teamed most of the day, and a third concluded with a series of case studies.
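For readers who haven't seen CUDA, here is a minimal sketch of the host-device pattern the morning covered: allocate memory on the device, copy data over, launch a kernel with the <<<...>>> syntax, and copy results back. The "scale" kernel and the sizes here are my own illustrative choices, not the tutorial's code:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // __global__ marks a kernel: code that runs on the device.
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n)
            data[i] *= factor;
    }

    int main()
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        float *h_data = (float *)malloc(bytes);         // host-side buffer
        for (int i = 0; i < n; ++i) h_data[i] = 1.0f;

        float *d_data;
        cudaMalloc(&d_data, bytes);                                // allocate on device
        cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice); // host -> device

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        scale<<<blocks, threads>>>(d_data, 2.0f, n);    // CUDA's kernel-launch extension to C

        cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost); // device -> host
        printf("h_data[0] = %f\n", h_data[0]);          // expect 2.0

        cudaFree(d_data);
        free(h_data);
        return 0;
    }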

The instructor choices were somewhat surprising... The NVidia instructor was the company's chief scientist, David Kirk. I'd seen his name frequently in the context of CUDA university workshops, so I was not surprised to see him at DAC. But the chief scientist himself? Wouldn't this be like Intel sending Justin Rattner to deliver, besides the keynote, a workshop on, say, Nehalem performance tuning? So, OK, this is at least a mild surprise: NVidia is taking this evangelism and training effort *very* seriously.

Another lecturer was Wen-mei Hwu, a professor at the University of Illinois. Hwu is a well-regarded figure in parallel computing, so it was fun to finally meet him and hear him speak. Hwu is also, come to think of it, Co-Director of Intel's Universal Parallel Computing Research Center (UPCRC) at Illinois. Hmm, an Intel-sponsored director instructing at an NVidia event? Given the recent and very public barbs tossed each way between the two companies, this was at least a bit surprising.

The third speaker was Damir Jamsek, from IBM, with case studies on power-grid analysis and small-circuit simulation (this was a design automation conference, after all). There was some detailed performance analysis, including the impact of loop unrolling and memory management, as well as a peek at some of the issues in porting sparse linear solvers to this kind of platform; a sketch of the unrolling idea follows below. IBM is also, come to think of it, the purveyor of a competing platform called Cell, so here was another surprise.
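To make the unrolling point concrete, here is a hypothetical dot-product reduction of the sort such a case study might profile. The kernel name, the block size of 256, and the host driver are my own assumptions, not IBM's code:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void dot_partial(const float *a, const float *b, float *partial, int n)
    {
        __shared__ float cache[256];    // fast on-chip shared memory; assumes blockDim.x == 256
        int tid = threadIdx.x;

        // Grid-stride loop: each thread accumulates a strided slice of the arrays.
        float sum = 0.0f;
        for (int i = blockIdx.x * blockDim.x + tid; i < n; i += gridDim.x * blockDim.x)
            sum += a[i] * b[i];
        cache[tid] = sum;
        __syncthreads();

        // Tree reduction in shared memory. The trip count is known at compile
        // time, so #pragma unroll lets the compiler flatten the loop, trading
        // code size for fewer branches -- the kind of knob the talk measured.
        #pragma unroll
        for (int s = 128; s > 0; s >>= 1) {
            if (tid < s)
                cache[tid] += cache[tid + s];
            __syncthreads();
        }
        if (tid == 0)
            partial[blockIdx.x] = cache[0];   // one partial sum per block
    }

    int main()
    {
        const int n = 1 << 20, blocks = 64, threads = 256;
        size_t bytes = n * sizeof(float);

        float *h = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) h[i] = 1.0f;

        float *d_a, *d_b, *d_p, h_p[blocks];
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_p, blocks * sizeof(float));
        cudaMemcpy(d_a, h, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h, bytes, cudaMemcpyHostToDevice);

        dot_partial<<<blocks, threads>>>(d_a, d_b, d_p, n);
        cudaMemcpy(h_p, d_p, blocks * sizeof(float), cudaMemcpyDeviceToHost);

        float dot = 0.0f;                   // finish the reduction on the host
        for (int i = 0; i < blocks; ++i) dot += h_p[i];
        printf("dot = %f (expect %d)\n", dot, n);   // 1.0 * 1.0 summed n times

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_p);
        free(h);
        return 0;
    }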

Ahead of this tutorial, I'd wondered how the NVidia staff might react to an Intel student in their class, so at the morning break I introduced myself, pointing to the "Intel" on my conference badge. Kirk was calm and cordial; no problem at all. Given that two of the three instructors held affiliations with competitors, I suppose there would be no difficulty over the provenance of the students!

Though initially surprising, this approach -- the sharing of parallel-computing insight among companies -- is profoundly right. All of us will survive to the extent that software developers adapt to the new concurrency universe, and it is in our collective interest to bring about that adaptation as quickly as possible.

Comments

aaron-tersteeg (Intel)

WOW! Thank you for sharing your insight into how the field of parallel-programming hardware vendors is shaping up. It is refreshing to see everyone so eager to move the market forward. I hope that your talk went well.