Prof. Dr. Michael Bader is an associate professor of informatics and leads the research group on hardware-aware algorithms and software for high performance computing, which was established by LRZ and TUM in the context of the installation of the SuperMUC petascale machine at LRZ. He earned a PhD in computer science at TUM. As a postdoc, he managed various interdisciplinary projects, such as the Bavarian Graduate School of Computational Engineering and the Munich Centre of Advanced Computing. Before accepting the professorship at TUM, he was an assistant professor at the SimTech Cluster of Excellence at the University of Stuttgart. His research focuses on the challenges posed by the latest supercomputing platforms and on the development of efficient, scalable algorithms and software for simulation tasks in science and engineering. In recent years, he has concentrated on large-scale earthquake simulation with the code SeisSol and on parallel adaptive mesh refinement approaches for tsunami simulation and porous media flow.
Prof. Dr. Arndt Bode has been a full professor of informatics at TUM since 1987, leading a research group for computer architecture and parallel and distributed computing. Since October 2008 he has also headed the Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities in Munich. From 1999 to 2008 he was Vice President and CIO of TUM. He is the author of more than 200 publications on parallel and distributed architectures, programming tools, and applications. He focuses on the design, implementation, and use of parallel and distributed computer architectures and on numerical simulation. His research covers methods for efficiently providing high processing power to applications, ranging from basic research to the development of industrial products and services. As CIO of TUM, Prof. Bode developed concepts for seamless IT infrastructures for universities.
Prof. Dr. Hans-Joachim Bungartz is a full professor of informatics and mathematics at TUM, where he holds the Scientific Computing chair in the informatics department. He holds degrees in both mathematics and informatics. Since 2013, he has been both Dean of Informatics and TUM Graduate Dean. Dr. Bungartz has served or serves on several editorial, advisory, and review boards. In 2011, he was elected chairman of the German National Research and Educational Network (DFN). He is also a board member of the Leibniz Supercomputing Centre (LRZ), one of the three German national HPC centres. His research interests lie at the intersection of CSE, scientific computing, and HPC. He works on parallel numerical algorithms, hardware-aware numerics, high-dimensional problems, and aspects of HPC software, with fields of application such as CFD. Most of his past and present projects – nationally or internationally funded – have been interdisciplinary. As an example, he coordinates DFG's Priority Program "Software for Exascale Computing" (SPPEXA).
The Intel® Parallel Computing Center (Intel® PCC) at the Leibniz Supercomputing Centre (LRZ) of the Bavarian Academy of Sciences and Humanities and Technische Universität München (TUM, Department of Informatics) is optimizing four established or upcoming CSE community codes for Intel-based supercomputers. We assume a target platform that will offer several hundred PetaFlop/s based on Intel's x86 architecture (including the Intel® Xeon Phi™ coprocessor). Such a machine, or at least a very similar one, can be expected to become available around 2018 at LRZ. To prepare simulation software for this new platform, we tackle two expected major challenges:
- Achieving a high fraction of the available node-level performance on (shared-memory) compute nodes.
- Scaling this performance up to the range of 10,000 to 100,000 compute nodes.
Concerning node-level performance, we consider compute nodes with one or several Intel® Xeon Phi™ coprocessors. Scalability on large supercomputers is studied on SuperMUC, which, as part of its upgrade in 2015, will add a 3-PetaFlop/s partition based on Haswell CPUs to the current platform.
We examine four applications from different areas of science and engineering: earthquake simulation and seismic wave propagation with the ADER-DG code SeisSol, simulation of cosmological structure formation using GADGET, the molecular dynamics code ls1 mardyn developed for applications in chemical engineering, and the software framework SG++ for tackling high-dimensional problems in data mining or financial mathematics (using sparse grids). All codes have already demonstrated scalability on SuperMUC (up to petascale), but are at different stages with respect to running on Intel® architecture. While addressing the Intel® Xeon Phi™ coprocessor in particular, the project tackles fundamental challenges that are relevant for most supercomputing architectures – such as parallelism on multiple levels (nodes, cores, hardware threads per core, data parallelism) or compute cores that offer strong SIMD capabilities with increasing vector width.
The PIs at the Department of Informatics at TUM offer an extensive curriculum that provides a contiguous track of parallel programming and HPC courses for Bachelor and Master students in Informatics and CSE, as well as for interested students in Mathematics and other fields of science and engineering. For researchers at the PhD and postdoc level, the Leibniz Supercomputing Centre provides corresponding HPC training activities. The Intel® PCC project will inform these courses and training activities and suggest new modules – with respect to the choice of motivating applications and fundamental algorithms, and by providing showcases and concrete examples to study.
- March 6, 2018, Optimization of the Gadget Code and Energy Measurements on Second-Generation Intel Xeon Phi, IXPUG Spring 2018
- March 22, 2018, Task-Based Approaches for Molecular Dynamics Simulations, TUM
- P. Borovska, D. Ivanova, December 1, 2015, PRACE: Code Optimization and Scaling of the Astrophysics Software Gadget on Intel Xeon Phi, LRZ (Leibniz Supercomputing Centre)
- Press Release, October 14, 2014, George Michael HPC Fellowships Announced, LRZ (Leibniz Supercomputing Centre)
- R. Glenn Brook, Alexander Heinecke, Anthony B. Costa, Paul Peltz Jr., Vincent C. Betro, Troy Baer, Michael Bader, Pradeep Dubey, March 31, 2015, Article on Beacon: Deployment and Application of Intel Xeon Phi Coprocessors for Scientific Computing, LRZ (Leibniz Supercomputing Centre)
- Luigi Iapichino, July 15, 2015, Improving node-level performance in Gadget: data structure and data locality, IXPUG
- Luigi Iapichino, July 15, 2015, Preconditioning for Data Locality, IXPUG
- Luigi Iapichino, November 18, 2015, Vectorisation efficiency in a Gadget kernel, IXPUG
- Fabio Baruffa, Luigi Iapichino, June 22, 2016, Improving the performance of a Gadget kernel on many-core systems from KNC to KNL, IXPUG
- The Intel PCC at LRZ and TUM provided a simulation update during the International Supercomputing Conference and Supercomputing 2014 (SC14) on Petascale Seismic Simulations with SeisSol.
- A. Heinecke, A. Breuer, S. Rettenberger, M. Bader, A.-A. Gabriel, C. Pelties, A. Bode, W. Barth, X.-K. Liao, K. Vaidyanathan, M. Smelyanskiy and P. Dubey: Petascale High Order Dynamic Rupture Earthquake Simulations on Heterogeneous Supercomputers. In: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC14), pp. 3–14. IEEE, New Orleans, LA, USA, November 2014. Gordon Bell Finalist.