Intel® Parallel Computing Center at Leibniz Supercomputing Centre and Technische Universität München

Principal Investigators:

Michael Bader

Prof. Dr. Michael Bader: an associate professor of informatics at TUM, where he leads the research group on hardware-aware algorithms and software for high-performance computing, established jointly by LRZ and TUM in the context of the installation of the SuperMUC petascale machine at LRZ. He earned a PhD in computer science at TUM. As a postdoc, he managed various interdisciplinary projects, such as the Bavarian Graduate School of Computational Engineering and the Munich Centre of Advanced Computing. Before accepting the professorship at TUM, he was an assistant professor at the SimTech Cluster of Excellence at the University of Stuttgart. His research focuses on the challenges posed by the latest supercomputing platforms and on the development of efficient, scalable algorithms and software for simulation tasks in science and engineering. In recent years, he has concentrated on large-scale earthquake simulation based on the code SeisSol and on parallel adaptive mesh refinement approaches for tsunami simulation and porous media flow.

Arndt Bode

Prof. Dr. Arndt Bode: full professor of informatics at TUM since 1987, leading a research group on computer architecture and parallel and distributed computing. Since October 2008, he has also headed the Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities in Munich. From 1999 to 2008, he was Vice President and CIO of TUM. He is the author of more than 200 publications on parallel and distributed architectures, programming tools, and applications. His work focuses on the design, implementation, and use of parallel and distributed computer architectures and on numerical simulation; his research covers methods for efficiently providing high processing power to applications, ranging from basic research to the development of industrial products and services. As CIO of TUM, Prof. Bode developed concepts for seamless IT infrastructures for universities.

Hans-Joachim Bungartz

Prof. Dr. Hans-Joachim Bungartz: full professor of informatics and mathematics at TUM, where he holds the Chair of Scientific Computing in the Department of Informatics. He holds degrees in both mathematics and informatics. Since 2013, he has been both Dean of Informatics and TUM Graduate Dean. Dr. Bungartz serves or has served on several editorial, advisory, and review boards. In 2011, he was elected chairman of the German National Research and Education Network (DFN). He is also a board member of the Leibniz Supercomputing Centre (LRZ), one of the three German national HPC centres. His research interests lie at the intersection of CSE, scientific computing, and HPC. He works on parallel numerical algorithms, hardware-aware numerics, high-dimensional problems, and aspects of HPC software, with fields of application such as CFD. Most of his past and present projects – nationally and internationally funded – have been interdisciplinary. For example, he coordinates DFG's Priority Program "Software for Exascale Computing".


The Intel® Parallel Computing Center (Intel® PCC) at Leibniz Supercomputing Centre (LRZ) of the Bavarian Academy of Sciences and Humanities and Technische Universität München (TUM, Department of Informatics) is optimizing four different established or upcoming CSE community codes for Intel-based supercomputers. We assume a target platform that will offer several hundred PetaFlop/s based on Intel's x86 (including Intel Xeon Phi™ coprocessors) architecture. Such a machine (or at least a very similar one) can be expected to become available around 2018 at LRZ. To prepare simulation software for this new platform, we tackle two expected major challenges:

  1. Achieving a high fraction of the available node-level performance on (shared-memory) compute nodes.
  2. Scaling this performance up to the range of 10,000 to 100,000 compute nodes.

Concerning node-level performance, we consider compute nodes with one or several Intel Xeon Phi™ coprocessors. Scalability on large supercomputers is studied on SuperMUC, which – as part of its upgrade in 2015 – will add a 3-PetaFlop/s partition based on Haswell CPUs to the current platform.

We examine four applications from different areas of science and engineering: earthquake simulation and seismic wave propagation with the ADER-DG code SeisSol, simulation of cosmological structure formation using GADGET, the molecular dynamics code ls1 mardyn developed for applications in chemical engineering, and the software framework SG++ for tackling high-dimensional problems in data mining and financial mathematics (using sparse grids). All codes have already demonstrated scalability on SuperMUC (up to petascale), but are at different stages with respect to running on Intel architecture. While addressing the Xeon Phi™ coprocessor in particular, the project tackles fundamental challenges relevant to most supercomputing architectures – such as parallelism on multiple levels (nodes, cores, hardware threads per core, data parallelism) and compute cores that offer strong SIMD capabilities with increasing vector width.

The PIs at the Department of Informatics at TUM offer an extensive curriculum providing a contiguous track of parallel programming and HPC courses for Bachelor's and Master's students in informatics and CSE, as well as for interested students in mathematics and other fields of science and engineering. For researchers at the PhD and postdoc level, the Leibniz Supercomputing Centre provides corresponding HPC training activities. The Intel® PCC project will inspire these courses and training activities and suggest new modules – both in the choice of motivating applications and fundamental algorithms, and by providing showcases and concrete examples to study.

Publications & Presentations

Related websites:

Leibniz Supercomputing Centre:
Chair of Scientific Computing (Dept. of Informatics) at TUM:
