Intel® Parallel Computing Center at the Department of Computer and Information Science, University of Oregon

Principal Investigator:

Allen D. Malony
Professor, Department of Computer and Information Science at the University of Oregon (UO)

Allen D. Malony is a Professor in the Department of Computer and Information Science at the University of Oregon (UO) where he directs parallel computing research and development projects, notably the TAU parallel performance system project. He has extensive experience in performance benchmarking and characterization of high-performance computing systems, and has developed performance evaluation tools for a range of parallel machines during the last nineteen years. Malony was awarded the NSF National Young Investigator award, was a Fulbright Research Scholar to The Netherlands and Austria, and received the Alexander von Humboldt Research Award for Senior U.S. Scientists.


Co-Principal Investigators:

Hank Childs is an Assistant Professor in the Computer and Information Science Department at the University of Oregon. His research focuses on scientific visualization, high performance computing, and the intersection of the two. Outside of his research, Hank is best known as the architect of the VisIt project, a visualization application for very large data that is used around the world. Hank received the Department of Energy Early Career Award in July of 2012 to research visualization with exascale computers.
Boyana Norris is an Assistant Professor in the Department of Computer and Information Science at the University of Oregon (UO) where she works on high-performance computing (HPC) methodologies and tools for performance reasoning and automated optimization of scientific applications, while ensuring continued or better usability of HPC tools and libraries and improving developer productivity. She has coauthored over 70 peer-reviewed publications on topics including performance modeling, automated performance optimization (autotuning) of parallel scientific applications, embedding of domain-specific languages into legacy codes, source-transformation-based automatic differentiation, adaptive algorithms for HPC, component-based software engineering for HPC, and taxonomy-based approaches to learning and using HPC libraries.


Description:

Parallel computing is a broad field of computer science concerned with the architecture, hardware/software systems, languages, programming paradigms, algorithms, and theoretical models that make it possible to compute in parallel. Parallel computing is also an old field. Charles Babbage once said of the Difference Engine design, “The most constant difficulty in contriving the engine has arisen from the desire to reduce the time in which the calculations were executed to the shortest which is possible.” He explored parallelism to address this. Indeed, performance is parallelism's raison d'être, and parallelism continues to be the path to performance in modern-day computing. Large-scale parallelism (more than 100,000 processors) lies at the heart of the fastest, most powerful computing systems in the world today.

In modern-day computer systems, multicore technology is everywhere, which means that parallelism is ubiquitous. Small-scale, low-end parallelism is the driving force behind affordable scientific computing and the growing success of computational science. With the advent of computational accelerators, things are getting even more interesting. Parallelism is also in cell phones, PDAs, tablets, and laptops. Parallel computing is everywhere!

It is important that we train the next generation of computer science students in the fundamental elements of parallel computing. The Department of Computer and Information Science (CIS) at the University of Oregon (UO) is investing in parallel computing as one of three target areas for the future. There is now a critical mass of CIS professors who are keen to broaden the parallel computing academic offerings into a richer program of instruction. The goal of the Intel® Parallel Computing Center (Intel® PCC) at the UO is to create a high-quality undergraduate parallel computing course in which students study a broad set of topics, building a parallel computing skill set that they can carry forward in their computing careers. Meeting this goal will establish a foundation for coordinated curriculum development in the future.

The Intel® PCC has been instrumental in the development of a new “Parallel Computing” undergraduate course, offered for the first time in the UO Spring 2014 term. In addition to the class lectures, students gained valuable hands-on experience with parallel programming in a companion lab that the Intel® PCC helped to fund. The picture below shows the programming lab, built from Intel® NUC (Next Unit of Computing) systems configured with the Intel parallel program development environment. In this Intel® PCC lab, students gained experience with Intel Cilk™ Plus, Intel® Threading Building Blocks (Intel® TBB), OpenMP*, and MPI.

The CIS 410/510 course used the textbook “Structured Parallel Programming” by Intel authors Michael McCool, Arch D. Robison, and James Reinders. The students all felt that the book gave an excellent presentation of parallel programming concepts and patterns. Examples of the patterns in Intel Cilk™ Plus, Intel® TBB, and OpenMP let students compare how the same pattern is realized in different programming systems.

Through the Intel® PCC, Intel Corporation has played a positive role in supporting the CIS academic parallel computing initiative. The Parallel Computing course will be proposed to UO as a permanent course (CIS 431/531) in the CIS curriculum this coming year.

Related Website:

CIS 410/510: Parallel Programming: http://www.cs.uoregon.edu/Classes/14S/cis410parallel
