Intel® HPC Developer Conference 2017
Keynote & Plenary Sessions
Technology visionaries architecting the future of high-performance computing (HPC) and artificial intelligence (AI) shared key challenges and Intel’s direction. They also discussed integrating AI into HPC workflows, their perspectives on architectural developments, upcoming transitions, the range of solutions, technology opportunities, and the driving forces behind them.
Joe Curley, Gadi Singer, and Dr. Al Gara, Intel
On September 14, 2015, the two detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) made the first direct observation of gravitational waves from two merging black holes. On August 17, 2017, the LIGO and Virgo observatories detected gravitational waves from the merging of two neutron stars—an event seen as both a short gamma-ray burst and subsequent kilonova by space- and ground-based observatories. These and other discoveries mark the beginning of gravitational wave astronomy. In this talk, we highlight what we have learned and hope to learn in this new field, and point out many of the ways in which high-throughput and high-performance computing have been essential to its progress.
Dan Stanzione, executive director of the Texas Advanced Computing Center (TACC), University of Texas at Austin
Joshua L. Willis, PhD, computational scientist, California Institute of Technology, Laser Interferometer
Gravitational-Wave Observatory (LIGO) Lab
Deep learning has revolutionized the fields of computer vision, speech recognition, and control systems. Can deep learning work for scientific problems? This talk explores a variety of Lawrence Berkeley National Laboratory applications that are currently benefiting from deep learning.
Prabhat, director of the Big Data Center at NERSC
Michael F. Wehner, senior staff scientist in the Computational Research Division at Lawrence Berkeley National Laboratory
Intel recently announced important progress in its research into future novel microarchitectures and device technologies: neuromorphic and quantum computing. Neuromorphic computing draws inspiration from our current understanding of the brain’s architecture and its associated computations. Loihi, Intel's recently announced neuromorphic research chip, is extremely energy-efficient, uses data to learn and make inferences, gets smarter over time, and does not need to be trained in the traditional way. Quantum computing offers the potential for exponentially greater performance on many algorithms that are computationally challenging on today’s computing architectures. We’ve just delivered a 17-superconducting-qubit chip to our research partner QuTech (TU Delft and TNO) in the Netherlands for measurement and evaluation as part of our investigation into full computing system stacks for two quantum device technologies. This talk gives a brief overview of our directions and progress in developing these novel architectures.
Jim Held, PhD, Intel