Computational Geodynamics with PETSc

Richard Katz, Columbia University

PETSc, the Portable, Extensible Toolkit for Scientific Computation, is among the leading software frameworks for the numerical solution of PDEs on parallel computers. We demonstrate the functionality, scalability, and extensibility of PETSc for problems in computational geodynamics using three examples: (1) creeping mantle flow and transport of heat in a subduction zone with a non-Newtonian, pressure- and temperature-dependent viscosity; (2) stress-driven melt segregation and shear localization in partially molten mantle rock; and (3) implementation of an efficient, parallel semi-Lagrangian advection scheme in PETSc.

Several factors are responsible for an expanding reliance on computational methods for the solution of PDE-based geodynamical models. First, these models are increasingly called upon to include complex, non-linear rheologies. Second, modern problems are marked by the need to resolve a wide range of length scales: from the entire mantle of the Earth, to the scale of a fault, to the scale of crystalline grains. While no current models span this entire range, dynamical processes of interest may occupy some fraction of it, requiring significant resolution and hence computational power and memory in excess of that available on a single processor. User-friendly software libraries containing robust linear and non-linear equation solvers that scale to the size of modern parallel computers are thus needed for the advancement of computational geodynamics.

The simulations shown here employ the PETSc framework from Argonne National Laboratory. Using the PETSc DA object, an abstraction for parallel, structured-grid computations, we need only specify our discretized finite-difference/volume equations; the Jacobian and right-hand-side vector are allocated and assembled automatically. The linear and nonlinear solvers are likewise abstracted, hiding the parallel programming and allowing the solution method to be chosen on the command line. These abstraction layers allow a portable, scalable parallel code to be built from an almost exclusively serial specification of the stencil. We will present scalability results for this code on Argonne's Linux cluster, Jazz.

Abstract Author(s): Richard Katz, Matt Knepley & Marc Spiegelman