Developing for Terascale-on-a-Chip: 3 of 3


"Taking the Next Steps"

"Developing for Terascale-on-a-Chip" is a series of three articles looking at the future of programming for many-core parallel computers that run in the teraflop range and above - terascale. This is the third article in the series.

Terascale research is moving fast at Intel. To help support the future of terascale-on-a-chip computing, we’re working on the building blocks that will enable developers to create some very interesting codes and applications. Yet the programming community is very large, and much remains to be done to support widespread terascale-on-a-chip application development. Moving forward, we hope to see the following developments, which will help catalyze parallel programming in many more industries:

A standard hierarchy of models that bridges the domains between the hardware and the problem at hand.

A community-accepted design pattern language defining standard practices in parallel algorithm design.

A language of programmability – how we describe human interaction with a programming language.

Standard programmability benchmarks.

Moving from today’s multi-core to the many-core of terascale-on-a-chip will be an evolution over several years. But developers should expect it to happen quickly. Programmers who begin preparing today will be the ones ready to jump in when terascale-on-a-chip systems become available. Places to go for more information include the following:

The online Intel® Developer Zone Multi-core Learning Guide offers several articles, tutorials, and blogs on programming for multi-core and many-core. This is an excellent resource for learning about thread-level parallelism, the concepts, and the tools that help programmers create efficient, threaded code.

Visit the Intel® Tera-scale Computing web site to keep current on what Intel is doing with its terascale-on-a-chip research. This site will keep developers informed of the latest technologies for both hardware and software, such as software transactional memory, Ct, and speculative threading. The research blog site is another place to go for information from Intel researchers on terascale.

There are many printed and web sources for developing highly concurrent code. Many of the U.S. national laboratories (e.g., Argonne National Laboratory, Pacific Northwest National Laboratory, and Lawrence Livermore National Laboratory) host information on parallel codes and research being done on their systems.

The Parallel Programming Laboratory at the University of Illinois Urbana-Champaign offers a wealth of information on parallel programming, including algorithms, libraries, and tools. Other universities host similar information.

For current advancements on programming models, APIs, and tools, visit the specific web sites, such as the OpenMP and MPI sites, the Intel® Threading Analysis Tools site, and others.

Attend and keep informed of events and discussions at the Intel Developer Forum.

For more general discussions of parallel programming, the Supercomputing conference is the standard conference to attend (e.g. SC05, SC07, etc.). The majority of the key players in parallel computing attend this meeting each year.

The years ahead hold some very exciting possibilities for terascale-on-a-chip computing. It will not only enable applications we’ve dreamed about for years or decades, or have yet to invent; it will also advance work already being done on large supercomputers in industry and science.

These articles have been only an introduction to a vision of terascale-on-a-chip computing. Hopefully, they’ve whetted an appetite for more information as developers begin preparing for an amazing future.

Series Links

About the Author

Ken Strandberg writes technical articles, white papers, seminars, web-based training, and technical marketing and interactive collateral for emerging technology companies, Fortune 100 enterprises, and multi-national corporations. Mr. Strandberg’s technology areas include Software, Industrial Technologies, Design Automation, Networking, Medical Technologies, Semiconductor, and Telecom. His technology background enables him to write from an engineering perspective for technical audiences. Ken’s work has appeared in national engineering magazines, such as EE Times and EE Design, and on Fortune 100 enterprise web sites. Mr. Strandberg lives in Nevada and can be reached at



For more complete information about compiler optimizations, see our Optimization Notice.