Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB)


Principal Investigators:

Dr. Thomas Steinke

Thomas is head of the HPC department at the Zuse Institute Berlin (ZIB). His research interests are high-performance computing, heterogeneous systems for scientific and data-analytics applications, and parallel simulation methods. Thomas co-founded the OpenFPGA initiative in 2004, and he leads the Intel® Parallel Computing Center (Intel® PCC) at ZIB. He received his doctorate in Theoretical Chemistry from the Humboldt-Universität zu Berlin in 1990.


Florian Wende

Florian is part of the Distributed Algorithms and Supercomputing department at the Zuse Institute Berlin (ZIB). He is interested in accelerator and many-core computing with applications in computer science and computational physics. His focus is on load balancing of irregular parallel computations and on close-to-hardware code optimization. He received a Diploma degree in Physics from the Humboldt-Universität zu Berlin and a Bachelor degree in Computer Science from the Freie Universität Berlin.


Matthias Noack

Matthias is part of the Distributed Algorithms and Supercomputing group at the Zuse Institute Berlin (ZIB). His interests include parallel programming models, heterogeneous architectures, and scientific computing. He developed the Heterogeneous Active Messages (HAM) framework, which provides efficient offloading, both locally and over the network fabric, for multi- and many-core processors. Matthias currently focuses on runtime compilation techniques, portable programming methods for vectorization, and the optimization and scaling of the Hierarchical Equations of Motion (HEOM) method.



Intel Corporation and the Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB) have set up a "Research Center for Many-core High-Performance Computing" at ZIB. The center will foster the uptake of current and next-generation Intel many- and multi-core technology in high-performance computing and big-data analytics. The Intel® PCC at ZIB focuses on a diverse set of codes, including VASP, a code for atomic-scale materials modelling.

The activities of the "Research Center for Many-core High-Performance Computing" focus on enhancing selected workloads of high impact in the HPC community to improve their performance and scalability on many-core processor technologies and platform architectures. The selected applications cover a wide range of scientific disciplines, including materials science and nanotechnology, atmosphere and ocean flow dynamics, astrophysics, quantum physics, drug design, particle physics, and big-data analytics. Novel programming models and algorithms will be evaluated for the parallelization of these workloads on many-core processors.

The workload optimization for many-core processors is supported by associated research activities at ZIB, where novel programming models and algorithms for many-core architectures are developed and evaluated.

Furthermore, the parallelization work is complemented by dissemination and education activities within the Northern German HPC Alliance "HLRN" to lower the barriers to adopting upcoming highly parallel processor and platform technologies.

"We are delighted to enter into a multi-year cooperation with Intel" said Prof. Alexander Reinefeld, head of the computer science department at Zuse Institute Berlin. "Our goal is to port and optimize selected HPC codes for Intel many-core processors with a special focus on maximum performance and scalability"


Related Websites:

IPCC @ ZIB: Strategic Overview
IPCC @ ZIB: Project

Additional Sites:

On Enhancing 3D-FFT Performance in VASP – CUG'16, London, UK, 05/2016

Explicit Vectorization in VASP – IXPUG 09/2015

OpenCL: There and Back Again – IXPUG 09/2015

Improving Thread Parallelism and Asynchronous Communication in VASP – IXPUG 09/2015

Runtime Kernel Compilation for efficient vectorisation – IXPUG 09/2015

Efficient SIMD-code generation with OpenCL and OpenMP 4.0 – ISC'15 IXPUG BoF, 07/2015
