Got Hardware? Exploring barriers and breakers to teaching parallelism

SIGCSE 2011 seemed like a good place to challenge some of the assumptions that we make all year round — assumptions, and dare I say, excuses, for why the integration of parallelism into undergraduate classrooms seems to be progressing so slowly.  When we develop programs and decide which ones to fund, Intel wants to know what sort of impact we're making on the ecosystem, and we naturally look for the pain points.

The Educational Exchange was introduced to provide ready-to-use materials that shorten professors' course development cycles; Classroom Tools grants provide the software products needed to introduce students to these difficult concepts; and most recently, the Intel® Manycore Testing Lab provides hands-on access to a next-generation development environment.  All this is great, and the profs that use it love it.  But why aren't all of our members using these free resources?  Isn't access to hardware, courseware, and tools the core issue we're dealing with when it comes to changing the way we educate the next generation of technologists?  If it's not access, are there other key points we should be addressing in the Intel Academic Community?

I was fortunate enough to have three of our Intel Academic Black Belts at SIGCSE to make up a panel for a Birds-of-a-Feather session on access to hardware. Together they thrilled the group in the room with a stunning array of resources available for teaching parallelism, and I came to the conclusion that while resources are indeed finite, awareness of where to find them might be one of the issues holding professors back.

Professor Tom Murphy talked about Little Fe, describing the low-cost portable cluster as a useful option for a small college without access to a cluster — and as a way to get students "hooked" on clusters, a gateway to larger ones. "It's one way to have a Linux distribution without a sys-admin.  You can do REAL science with it."

Dr. Dick Brown gave an overview of his experience using the Intel® Manycore Testing Lab to give his students hands-on experience with parallel computing concepts. "They can experience data race conditions themselves, even at the lowest levels.  Students who haven't ever done C++ before."  He discussed how remote access to this 32-core, 64-thread development environment provides experience with things that the campus can't really afford.

Dr. Matt Wolf went over the advantages of using the cloud. He suggested "virtualizing" things that Amazon Web Services won't let you do.  "Virtualization makes sense in crash-and-burn classes."  Deploying a private educational cloud with VMs allows students to reinitialize and start over.

Finally, Aaron Weeden, a recent graduate of Earlham College and contributor to the Little Fe project, spoke from a student's perspective. He talked about how important it is to show students how they can get involved right away. "Give them an easy way to see parallelism in action and understand why it's such an important concept. Everywhere I go there are students interested to see an actual cluster working."

Other options that came up in the conversation included supercomputing centers across the US. For example, many people don't know that Blue Waters makes 1% of its cycles available to education. TeraGrid gives out a lot of hours, and Penguin Computing, IBM, and other HPC-on-demand services are additional places one can look for cluster instances.

The conclusion?  Yes, there is a problem with getting resources into the typical CS undergrad classroom — but MAYBE greater awareness of the resources already available could make a significant difference in adoption. So, I guess my only excuse now is coming up with creative ways to get the word out.

Thoughts on this subject? Or suggestions for other places where professors can find resources?  Let us know!