I'm excited by our announcement today of Intel® Parallel Computing Centers. The first five centers will be located at CINECA, Purdue University, Texas Advanced Computing Center at the University of Texas (TACC), The University of Tennessee, and Zuse Institut Berlin (ZIB). There are still opportunities to propose becoming a center at software.intel.com/en-us/articles/intel-parallel-computing-centers. The centers represent investments that I think of as "digging into code" to help make real applications better prepared to exploit parallel computing.
Parallel computing challenges are about enabling the future of computing, not just tuning for one hardware direction or another. That's the challenge these centers are taking on.
I frequently hear concerns from other programmers that start with words like these:
- "My algorithm cannot be done in parallel."
- "My program cannot scale beyond 20 cores."
- "I can't make my code vectorize."
The punch line usually is: Can you tell me why?
Every time it happens, I want to dig in myself to work on the program... so much to do, and so little time!
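The vectorization concern, in particular, often comes down to something concrete in the code itself. As a hypothetical illustration (not from any of the centers' codes): a loop whose iterations depend on each other cannot be vectorized as written, while an independent loop can.

```c
/* A loop-carried dependency: each iteration reads the value the
 * previous iteration wrote, so the compiler cannot safely
 * vectorize this loop as written. */
void prefix_sum_serial(int n, float *a)
{
    for (int i = 1; i < n; i++)
        a[i] += a[i - 1];   /* depends on a[i-1] from the prior iteration */
}

/* No dependency between iterations: each element is computed
 * from its own inputs only, so the compiler is free to use
 * SIMD instructions here. */
void scale_vectorizable(int n, float *a, float s)
{
    for (int i = 0; i < n; i++)
        a[i] *= s;          /* independent per-element work */
}
```

Answering "can you tell me why?" usually starts with finding dependencies like the first loop and restructuring the algorithm so more of the work looks like the second.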
It's a fun challenge: How can we structure a problem (algorithm) so that it can be solved using the power of parallel computing? Sometimes today's algorithms adapt well to parallelism with evolutionary changes. In other cases, previously designed algorithms prove ill-suited for exploiting parallel computing. Regardless, the opportunities for revolutionary approaches are there, waiting for inspiration.
Intel Parallel Computing Centers will help find both. I relish the debates and discoveries these centers will help create. We will all benefit. In every sense of the word, this is an effort to help "modernize" applications.
At Intel, we've invested heavily in a vision for parallel computing that is being called neo-heterogeneous computing. Our mission is to deliver the benefits of "heterogeneous" computing without the downsides. With the Intel Many Integrated Core (MIC) Architecture, our Intel Xeon Phi coprocessors support the same familiar programming languages, models, and tools for highly parallel computing that we already use for parallel computing in general. This leads to "neo-heterogeneous" systems: "heterogeneous" clusters with "homogeneous" programming. This approach is extraordinarily valuable.
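To make "homogeneous programming" concrete, here is a minimal sketch (my own illustration, assuming an OpenMP 4.0-capable compiler): the same standard C source, with the same familiar OpenMP annotations, can be compiled for a Xeon host or for a Xeon Phi coprocessor without rewriting it in a coprocessor-specific language.

```c
/* SAXPY (y = a*x + y), written once in standard C with OpenMP.
 * The identical source compiles for a multicore Xeon host or,
 * with the appropriate compiler target, for the Intel Xeon Phi
 * coprocessor -- the "homogeneous programming" idea.
 * (Without OpenMP enabled, the pragma is simply ignored and the
 * loop runs serially, producing the same results.) */
void saxpy(int n, float a, const float *x, float *y)
{
    #pragma omp parallel for simd
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

The point is not this particular kernel, but that the parallelism is expressed in a portable, familiar model rather than in hardware-specific code.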
The need for neo-heterogeneous computing is enormous. It combines the promise of heterogeneous hardware, which can deliver better compute density, compute performance, and lower power consumption, with the benefits of homogeneous programming, which maintains programming flexibility, performance, and efficiency for developers.
As I mentioned previously, the first five centers will be located at CINECA, Purdue University, Texas Advanced Computing Center at the University of Texas (TACC), The University of Tennessee, and Zuse Institut Berlin (ZIB). Raj has posted more information about each center in his blog. The one I've been working closely with recently is the very promising Memory Access Centric Performance Optimization (MACPO) research for TACC's PerfExpert project (read various papers). I'm excited to see what results we can get in the upcoming year from this fine work.
We are encouraging proposals for more centers from others who relish applying parallel programming skills to existing codes and moving them into a parallel world. More information on the program can be found by visiting software.intel.com/en-us/articles/intel-parallel-computing-centers. The results will help break new ground, providing valuable lessons for us all while yielding practical benefits in important open source applications.