Intel® Parallel Computing Center at Lawrence Berkeley National Laboratory
Published on June 2, 2014, updated September 25, 2018
Principal Investigators:
Nicholas J. Wright, Advanced Technologies Group Lead, National Energy Research Scientific Computing Center (NERSC)
Bert de Jong, Scientific Computing Group Lead, Computational Research Division
Hans Johansen, Applied Numerical Algorithms Group, Computational Research Division
Description:
The Intel® Parallel Computing Center (Intel® PCC) at Lawrence Berkeley National Laboratory will advance the open-source NWChem and CAM5 (Community Atmosphere Model) applications on next-generation manycore high-performance computing systems. The aim is to create optimized versions of these important and widely used scientific applications that will enable the scientific community to pursue new frontiers in chemistry, materials science, and climate modeling.
The goal is to deliver enhanced versions of NWChem and CAM5 that, over the course of the project, at least double their overall performance on a manycore machine of today. The research and development will focus on exposing greater amounts of parallelism in the codes, ranging from simple modifications, such as adding or modifying OpenMP pragmas and refactoring loops to enable vectorization, through repeatable patterns for performance improvement, to new algorithmic approaches that can better exploit manycore architectures. Both applications are open source, so any modifications made will be available to the whole community of users, maximizing the impact of the project.
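As a minimal sketch of the simpler end of that spectrum, the C fragment below (hypothetical code with illustrative names, not drawn from NWChem or CAM5) shows the kind of incremental change described above: an outer loop threaded with an OpenMP pragma and an inner loop refactored into a unit-stride, dependence-free form that the compiler can vectorize.

```c
/* Hypothetical sketch, not taken from NWChem or CAM5: thread an outer loop
 * with OpenMP and keep the inner loop unit-stride and dependence-free so
 * it can be vectorized. Function and variable names are illustrative only. */
#include <stddef.h>

void scale_and_accumulate(double *restrict out, const double *restrict in,
                          const double *restrict coeff, size_t n, size_t m)
{
    /* Distribute independent rows of the n-by-m arrays across threads. */
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < n; ++i) {
        double c = coeff[i];
        /* Contiguous inner loop with no loop-carried dependence;
         * the simd pragma asks the compiler to vectorize it. */
        #pragma omp simd
        for (size_t j = 0; j < m; ++j) {
            out[i * m + j] += c * in[i * m + j];
        }
    }
}
```

Changes of this kind leave the numerical algorithm untouched; the deeper, algorithm-level restructuring mentioned above would go beyond such pragma-level modifications.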
We will also undertake an extensive outreach and education effort to ensure that the lessons learned are disseminated to the broader user community at the National Energy Research Scientific Computing Center (NERSC). The aim will be to supplement the training and outreach efforts NERSC is already undertaking to support its users on its current Intel® Xeon® (Ivy Bridge) based Cray XC30 supercomputer 'Edison'. Additionally, the work will form part of the application-readiness efforts NERSC is undertaking in advance of the expected delivery of its Intel® Xeon Phi™ (Knights Landing) based Cori supercomputer in 2016.
Publications:
- E. Wes Bethel, Junmin Gu, Burlen Loring, Dmitriy Morozov, Gunther H. Weber, John Wu (LBNL); Nicola Ferrier, Silvio Rizzi (ANL); Dave Pugmire, James Cress, Matthew Wolf (ORNL); Earl Duque, Brad Whitlock (Intelligent Light); Utkarsh Ayachit, David Thompson, Andrew Bauer, Patrick O’Leary (Kitware), 7/10/2018, In Situ Analysis and Visualization with SENSEI, IXPUG Vis Workshop 2018
- E. Apra, M. Klemm, K. Kowalski, 1/1/2015, Performance tests of the Xeon Phi implementation of non-iterative part of the CCSD(T) approach, LBNL - Lawrence Berkeley Labs
- Lawrence Berkeley National Laboratory, 7/9/2014, Codes for Studying Climate Change, Chemistry Focus of Lab's Intel Parallel Computing Center, LBNL - Lawrence Berkeley Labs
- Hongzhang Shan, Samuel Williams, Wibe de Jong, Leonid Oliker, 2/1/2015, Thread-Level Parallelization and Optimization of NWChem for the Intel MIC Architecture, LBNL - Lawrence Berkeley Labs
- David Ozog, Amir Kamil, Yili Zheng, 7/21/2016, A Hartree-Fock Application using UPC++ and the New DArray Library, IEEE International Parallel & Distributed Processing Symposium
- Hongzhang Shan, Samuel Williams, Wibe de Jong, Leonid Oliker, 2/8/2016, Thread-Level Parallelization and Optimization of NWChem for the Intel MIC Architecture, PMAM 2015
- Hongzhang Shan, Brian Austin, Wibe De Jong, Leonid Oliker, Nicholas Wright, 10/1/2014, Performance Tuning of Fock Matrix and Two-Electron Integral Calculations for NWChem on Leading HPC Platforms, Springer
- Zhengji Zhao, 11/18/2015, Estimating the Performance Impact of the HBM on KNL Using DualSocket Nodes, IXPUG
- Yun(Helen) He, 9/21/2016, Process and Thread Affinity with MPI/OpenMP on KNL, IXPUG
- Jack Deslippe, Brandon Cook, Richard Gerber, Zakhar Matveev, Mathieu Lobet, Tuomas Koskela, Tareq Malas, 9/21/2016, Optimizing Codes Using the Roofline Model, IXPUG
- Agrima Bahl, Brian Austin, 9/21/2016, Understanding Knights Landing High Bandwidth Memory Using the STREAM Benchmark, IXPUG
- Andrey Ovsyannikov, 9/21/2016, Enabling High-performance Simulation of Subsurface Flows and Geochemical Processes with Chombo-Crunch on Intel Xeon Phi™, IXPUG
- Tareq Malas, Thorsten Kurth, and Jack Deslippe, 9/21/2016, Scaling the Performance of a FDFD Geophysical-imaging Application to Multi-node KNL Clusters, IXPUG
- Tuomas Koskela, 9/21/2016, Optimizing Magnetic Fusion PIC Code XGC1 for the Intel Xeon Phi™, IXPUG
- Ruizi Li, Dhiraj Kalamkar, Ashish Jha, Steven Gottlieb, Carleton DeTar, Doug Toussaint, Balint Joo, and Douglas Doerfler, 6/23/2016, Porting the MIMD Lattice Computation (MILC) Code to the Intel Xeon Phi™ Processor, IXPUG
- Douglas Doerfler, Jack Deslippe, Samuel Williams, Leonid Oliker, Brandon Cook, Thorsten Kurth, Mathieu Lobet, Tareq Malas, Jean-Luc Vay and Henri Vincenti, 6/23/2016, Applying the Roofline Performance Model to the Intel Xeon Phi™ Processor, IXPUG
- Thorsten Kurth, Balint Joo, Dhiraj Kalamkar, Aaron Walden, Karthikeyan Vaidyanathan, 6/23/2016, Optimizing Dirac Wilson Operator and linear solvers for Intel KNL, IXPUG
- Tareq Malas, Thorsten Kurth, and Jack Deslippe, 6/23/2016, Optimization of the matrix-vector products of an IDR Krylov iterative solver for the Intel KNL manycore processor, IXPUG
- Jack Deslippe, 6/23/2016, Optimizing Excited-State Electronic-Structure Codes for the Intel Xeon Phi™: a Case Study on the BerkeleyGW Software, IXPUG
- Brandon Cook, Pieter Maris, Meiyue Shao, Nathan Wichmann, Marcus Wagner, John O'Neill, Thang Phung and Gaurav Bansal , 6/23/2016, High performance optimizations for nuclear physics code MFDn on KNL, IXPUG
- Mathieu Lobet, Jean-Luc Vay, Henri Vincenti, Remi Lehe, Ankit Bhagatwala, Jack Deslippe, 6/23/2016, PICSAR: a high-performance library for Particle-In-Cell codes optimized for Intel Xeon Phi KNL architectures, IXPUG