Parallel Languages Workshop at UIUC - 11 FEB 2009

On 11 FEB 2009, the Universal Parallel Computing Research Center (UPCRC) at the University of Illinois held a Languages Workshop. The goal of this day-long conference was to present and discuss "the most important issues today in parallel programming language design."  The format was a moderated panel discussion for each of four topics. Panel members gave some opening thoughts on the questions the moderator had given them, and the rest of the time was taken up by attendees' questions. I attended the conference, and I've put together some of the highlights and, IMO, the most interesting questions that came out of the different panels.

Determinism or Non-Determinism?  The first question posed (and one that ran through the rest of the day) was "What does it mean to be deterministic?" The simplest definition that almost everyone could accept seemed to be that a program is deterministic if the same inputs yield the same observable results. Non-determinism has its place (reactive codes, transaction processing), but panel members were divided on how to handle it in parallel programming. Arguments ranged from enforcing deterministic coding with programmers explicitly requesting non-determinism, to better isolation of non-deterministic execution, to requiring the language/compiler/runtime to handle non-determinism.
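To make that definition concrete, here is a minimal Haskell sketch (my own illustration, not anything presented at the workshop) of the kind of non-determinism the panel was debating: two threads race on a shared counter, so the same input can print different results from run to run.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (replicateM_)
import Data.IORef

-- Two threads each increment a shared counter 100,000 times.
-- The read-then-write pair is not atomic, so increments can be
-- lost; the observable result depends on thread scheduling.
main :: IO ()
main = do
  ref <- newIORef (0 :: Int)
  let bump = replicateM_ 100000 $ do
        n <- readIORef ref
        writeIORef ref (n + 1)
  _ <- forkIO bump
  _ <- forkIO bump
  threadDelay 1000000           -- crude wait; good enough for a demo
  readIORef ref >>= print       -- often less than 200000
```

Built with GHC's -threaded runtime, the printed total varies between runs. A deterministic-by-default language would reject this program outright, or require the programmer to request the race explicitly.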

The Role of the Compiler in Language Design. The biggest question the panel addressed was how much the compiler should be considered when designing a new (parallel) programming language. One panelist divided programmers into two levels: 1) Efficiency Programmers (10% of all programmers), who are parallel programming experts that do low-level coding and implement libraries; and 2) Productivity Programmers (the other 90%), who are domain experts and use programming languages to get their work done. The latter group needs sophisticated compiler support. Another panelist argued that programming language design should not give anything up to the compiler, and that current parallel programming methods have failed because of the restrictions these methods place on programmability.

Functional vs. Imperative Languages. Looking back over my notes, I see that three of the four panelists had a list of "benefits" of functional languages for parallel programming. Among the benefits pointed out were locality, determinism, and a trivial memory model. With everyone in apparent agreement that functional languages should be the de facto parallel programming method, the one question that didn't get full agreement was whether the parallelism should be implicit or explicit.  From my perspective, the question seemed to come down to how much you are willing to tarnish the purity of the functional language. There were cases for both sides, but plenty of parallelism appeared to need programmer intervention, if only to control things like efficient data reuse and SIMD computations.
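For a sense of what the explicit-but-still-deterministic camp had in mind, here is a small sketch using GHC's Control.Parallel.Strategies from the parallel package (my example, not one shown at the workshop): the programmer marks where parallelism is worthwhile, yet the result is identical to the sequential version.

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Explicit parallelism that stays deterministic: parMap sparks the
-- squaring of each element in parallel, but only evaluation order
-- changes; the value computed is the same as a sequential map.
sumSquares :: [Int] -> Int
sumSquares xs = sum (parMap rdeepseq (\x -> x * x) xs)

main :: IO ()
main = print (sumSquares [1 .. 100000])
```

Whether the compiler should insert those annotations on its own, rather than the programmer, is exactly the implicit-vs-explicit question the panel couldn't settle.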

Hardware Support. The last panel of the day looked into the question of memory management in hardware vs. software. Several audience members argued that the memory model for concurrent execution is not well defined, especially in the face of data races. Transactional Memory (TM) was one idea touched upon by three of the panelists. For synchronization, one panelist thought the current atomics were sufficient and that Software TM might be useful (but supported the wrong model). The possibility of heterogeneous cores was also brought up; to be successful, these will require a uniform memory model and programming language support to make the heterogeneity of the cores transparent.
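As a point of reference for the TM discussion, here is a minimal sketch using Haskell's software transactional memory (the stm package; my illustration, not one given by the panel): the transfer either commits as a whole or retries, so no thread can ever observe the money missing from both accounts.

```haskell
import Control.Concurrent.STM

-- Move money between two accounts inside one transaction.
-- The STM runtime commits the whole block atomically or
-- retries it, so intermediate states are never visible.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to amount = atomically $ do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  transfer a b 30
  balances <- (,) <$> readTVarIO a <*> readTVarIO b
  print balances   -- (70,30)
```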

Each of the panels generated some spirited debate.  The one thing I think all participants might agree upon is that there needs to be a lot more study, more specification, and more standardization before many of these questions can be settled once and for all.

UPDATE (25 FEB 09): Some slide presentations and video of the panels are available at http://www.upcrc.illinois.edu/workshops/summit_feb2009/language.html.