Intel® Parallel Computing Center at Purdue University


Principal Investigators:

Gerry McCartney
University CIO, Vice President for Information Technology and Purdue's Olga Oesterle England Professor of Information Technology. He is also Inaugural Director of Purdue's Innovation and Commercialization Center.

Gerhard Klimeck
Director of the Network for Computational Nanotechnology, Professor of Electrical and Computer Engineering

Description:

The Intel® Parallel Computing Center at Purdue University is focused on improving the computing experience of researchers who use the latest high performance computing machines, which incorporate Intel's Xeon Phi™ coprocessors. Much of the work will be done by Purdue students through the Purdue Pathmaker program, which enables students to have engineering jobs while they are still on campus.

Gerry McCartney, Purdue's CIO, vice president for information technology, and Olga Oesterle England Professor of Information Technology, who leads the Purdue Pathmaker program, says this is a rich opportunity for the students. "Through their work at the Intel Parallel Computing Center, our students are working on some of the world's most advanced technologies in the Xeon Phi coprocessor, the NEMO simulation software, and nanoelectronics," McCartney says. "This will jumpstart their engineering careers, and it wouldn't surprise me to hear that many of them decide that they want to begin their careers at Intel."

Purdue's Conte supercomputer, which is the fastest campus supercomputer in the nation, makes use of Xeon Phi coprocessors. Conte, which was built in 2013, clocks in with a sustained, measured maximum speed of 943.38 teraflops and a peak performance of 1.342 petaflops.

Phase I of the Intel Parallel Computing Center has focused on optimizing the performance of the NEMO scientific simulation suite of software tools to run on Intel's Xeon Phi parallel computing coprocessors.
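The parallelism this optimization work targets comes from the structure of the underlying physics: transport quantities are assembled from many independent energy (and momentum) points, which can be spread across a coprocessor's many cores. The sketch below is not NEMO5 code (NEMO5 is a compiled MPI application); it is a minimal, hedged illustration in Python of that independent-point structure, with a hypothetical `transmission` function standing in for an expensive per-energy-point solve.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def transmission(energy):
    # Hypothetical stand-in for an expensive per-energy-point
    # Green's-function solve; any function of one point works here.
    return 1.0 / (1.0 + math.exp(-energy))

def integrate_current(energies, workers=4):
    """Evaluate independent energy points via a worker pool, then
    sum them with a simple rectangle rule. Real codes distribute the
    same independent work across MPI ranks and hardware threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        values = list(pool.map(transmission, energies))
    de = energies[1] - energies[0]
    return de * sum(values)
```

Because each energy point is computed without reference to the others, the loop maps naturally onto a many-core device such as the Xeon Phi.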

Phase II’s focus will be to fully incorporate these optimizations into standard NEMO5 device simulations and to disseminate this capability to Intel design engineers and to the larger nanoelectronics community through nanoHUB tools. Continuing work begun in Phase I will also seek to enable NEMO5 to scale efficiently on leadership-class machines equipped with Intel Xeon Phi coprocessors.

The NanoElectronics MOdeling tool, NEMO, is used by nanoelectronics researchers to better understand how electrons flow through nano-scale devices, such as next-generation transistors. The work is led by Gerhard Klimeck, director of the federally funded Network for Computational Nanotechnology and Professor of Electrical and Computer Engineering at Purdue University.

NEMO computes electron flow and electronic structure for devices at the end of the semiconductor device roadmap predicted by Moore's Law, and is currently in its fifth edition. NEMO tools are used by researchers around the world, including at companies such as Intel, GlobalFoundries, and Samsung, as well as by more than 12,000 nanoHUB users. Klimeck says that NEMO can, in principle, scale to the largest possible CPU-based supercomputers. "The newly founded Intel Parallel Computing Center will enable the porting and optimization of NEMO5 to Intel Xeon Phi Coprocessor-based Intel hardware and speed up science and engineering dramatically for a very large user group," Klimeck says.
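The electronic-structure side of such simulations is typically built on atomistic tight-binding models. As an illustration only (NEMO5 itself is a compiled code using far richer multi-band models), the sketch below diagonalizes a one-dimensional nearest-neighbor tight-binding chain, whose eigenvalues fill the analytic band ε − 2t·cos(ka) as the chain grows; the function names are ours, not NEMO's.

```python
import numpy as np

def tight_binding_chain(n_sites, onsite=0.0, hopping=1.0):
    """Hamiltonian of a 1-D nearest-neighbor tight-binding chain with
    open boundaries: on-site energy on the diagonal, -t on the
    off-diagonals coupling neighboring sites."""
    h = np.diag(np.full(n_sites, float(onsite)))
    h += np.diag(np.full(n_sites - 1, -float(hopping)), k=1)
    h += np.diag(np.full(n_sites - 1, -float(hopping)), k=-1)
    return h

def band_energies(n_sites, onsite=0.0, hopping=1.0):
    # Eigenvalues of a real symmetric matrix, sorted ascending.
    return np.linalg.eigvalsh(tight_binding_chain(n_sites, onsite, hopping))
```

For a long chain the spectrum approaches the band [ε − 2t, ε + 2t], i.e. a bandwidth of 4t, which is the standard analytic result for this model.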

Tom Linton, manager of numerical device modeling at Intel, is a supporter of the work being done at Purdue. “The Intel Parallel Computing Center is already beginning to benefit the Intel Numerical Device Modeling group, which is collaborating with the Purdue NEMO (NanoElectronic Modeling) team to develop the NEMO-5 quantum device simulator for exploration of novel semiconductor devices.”

Related websites:

http://engineering.purdue.edu/gekcogrp/
http://nanohub.org
http://www.itap.purdue.edu/pathmaker/
http://www.purdue.edu/newsroom/releases/2013/Q2/purdue-builds-nations-fastest-campus-supercomputeragain.html

For more complete information about compiler optimizations, see our Optimization Notice.