Intel® Parallel Computing Center at Rice University

Principal Investigators:

Mark F. Adams received his Ph.D. in Civil Engineering in 1998 from U.C. Berkeley. He is a staff scientist in the Scalable Solvers Group at Lawrence Berkeley National Laboratory and is an adjunct research scientist in the Applied Physics and Applied Mathematics Department at Columbia University. His research interests are in extreme-scale computing and multigrid solvers.

Matthew G. Knepley received his B.S. in Physics from Case Western Reserve University in 1994, an M.S. in Computer Science from the University of Minnesota in 1996, and a Ph.D. in Computer Science from Purdue University in 2000. In 2009, he joined the Computation Institute at the University of Chicago as a Senior Research Associate.


Jed Brown received his doctor of sciences degree from ETH Zurich in 2011. He is an assistant computational mathematician at Argonne National Laboratory and is an adjunct assistant professor at the University of Colorado Boulder.



PETSc will provide a new solver interface to structured adaptive mesh refinement (SAMR), enabling the efficient representation of multiscale phenomena while maintaining the simplicity of structured-grid kernel computations. We will use the most asymptotically efficient solvers for strongly nonlinear equations: matrix-free full approximation scheme (FAS) nonlinear full multigrid (FMG) methods. These formulations are uniquely positioned to leverage modern architectures to deliver fast, accurate, versatile solvers for complex multiphysics applications. In particular, we are working closely with PFLOTRAN, the premier open-source, state-of-the-art massively parallel subsurface flow and reactive transport code. This work is significant in that it radically changes algorithms, data access, and low-level computational organization to maximize performance and scalability on modern Intel architectures, and it encapsulates this knowledge in the PETSc libraries for the broadest possible impact.
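The key feature of the full approximation scheme is that the coarse level receives the restricted current solution together with the restricted residual, so the coarse problem remains the full nonlinear problem rather than a linearized error equation. The following is a minimal two-grid FAS sketch for the 1D model problem -u'' + u^3 = f with homogeneous Dirichlet boundary conditions; the model problem, the nonlinear Gauss-Seidel smoother, and the transfer operators are illustrative assumptions, not PFLOTRAN or PETSc code:

```python
import numpy as np

def residual(u, f, h):
    """Residual r = f - N(u) for N(u) = -u'' + u^3 on interior points, zero Dirichlet BCs."""
    up = np.concatenate(([0.0], u, [0.0]))            # pad with boundary values
    lap = (2.0 * up[1:-1] - up[:-2] - up[2:]) / h**2  # second-difference approximation of -u''
    return f - (lap + u**3)

def ngs_sweep(u, f, h, sweeps=2):
    """Nonlinear Gauss-Seidel smoother: one scalar Newton update per point, in place."""
    n = len(u)
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            g = (2.0 * u[i] - left - right) / h**2 + u[i]**3 - f[i]
            dg = 2.0 / h**2 + 3.0 * u[i]**2
            u[i] -= g / dg
    return u

def restrict(v):
    """Injection of the solution: coarse point j coincides with fine index 2j+1."""
    return v[1::2].copy()

def full_weight(r):
    """Full weighting of the residual: [1/4, 1/2, 1/4] stencil."""
    return 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(v, n_fine):
    """Linear interpolation from coarse interior points to the fine grid."""
    w = np.zeros(n_fine)
    w[1::2] = v
    padded = np.concatenate(([0.0], v, [0.0]))
    w[0::2] = 0.5 * (padded[:-1] + padded[1:])
    return w

def fas_two_grid_cycle(u, f, h):
    """One two-grid FAS cycle for N(u) = f."""
    ngs_sweep(u, f, h, sweeps=2)                      # pre-smooth
    r = residual(u, f, h)
    uc = restrict(u)                                  # restricted current solution
    hc = 2.0 * h
    # FAS coarse equation: N_c(v) = N_c(R u) + R r  (note N_c(uc) = -residual(uc, 0, hc))
    fc = -residual(uc, np.zeros_like(uc), hc) + full_weight(r)
    vc = uc.copy()
    ngs_sweep(vc, fc, hc, sweeps=50)                  # approximate coarse solve
    u += prolong(vc - uc, len(u))                     # coarse-grid correction
    ngs_sweep(u, f, h, sweeps=2)                      # post-smooth
    return u

if __name__ == "__main__":
    # Manufactured solution u(x) = sin(pi x) on (0, 1), so f = pi^2 sin(pi x) + sin^3(pi x)
    n, = (31,)
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.pi**2 * np.sin(np.pi * x) + np.sin(np.pi * x)**3
    u = np.zeros(n)
    for _ in range(12):
        fas_two_grid_cycle(u, f, h)
    print("residual norm:", np.linalg.norm(residual(u, f, h)))
```

A production code replaces the two-grid cycle with a recursive V- or F-cycle driven to the coarsest level; in PETSc this class of solver is selected at run time, for example with `-snes_type fas`, rather than hard-coded as above.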

