Stochastic Newton MCMC

James Martin, University of Texas


We present a new Markov chain Monte Carlo (MCMC) method for sampling high-dimensional probability density functions, such as those arising in the solution of statistical inverse problems. The method builds on previous work in Langevin dynamics, which uses gradient information to guide the sampling in useful directions, improving acceptance probabilities and convergence rates.
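For readers unfamiliar with Langevin-based sampling, here is a minimal sketch of one Metropolis-adjusted Langevin (MALA) step; the target `log_p`, its gradient, and the step size `eps` are illustrative placeholders, not part of the work described above.

```python
import numpy as np

def mala_step(x, log_p, grad_log_p, eps, rng):
    """One Metropolis-adjusted Langevin (MALA) step (illustrative sketch).

    The gradient of log p steers the proposal toward higher-probability
    regions; the Metropolis-Hastings correction keeps the chain exact.
    """
    # Langevin proposal: drift along the gradient plus Gaussian noise.
    mean_fwd = x + 0.5 * eps**2 * grad_log_p(x)
    y = mean_fwd + eps * rng.standard_normal(x.shape)

    # Mean of the reverse proposal, needed for the acceptance ratio.
    mean_rev = y + 0.5 * eps**2 * grad_log_p(y)

    def log_q(a, mean):
        # Log proposal density q(a | mean), up to a constant.
        return -np.sum((a - mean) ** 2) / (2 * eps**2)

    log_alpha = (log_p(y) + log_q(x, mean_rev)) - (log_p(x) + log_q(y, mean_fwd))
    if np.log(rng.uniform()) < log_alpha:
        return y, True   # proposal accepted
    return x, False      # proposal rejected
```

Because the drift term pushes proposals uphill in log-density, MALA typically accepts far more often than a plain random-walk proposal of the same step size.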

We extend the Langevin idea to exploit local Hessian information and develop a stochastic version of Newton's method, which we show to have optimal sampling properties for Gaussian and near-Gaussian distributions, independent of the dimension of the distribution. Further improvements are achieved using low-rank Hessian approximations. The method is applied to the Bayesian solution of an inverse problem governed by 1D seismic wave propagation. The medium we invert for is parameterized by up to 65 parameters, and we observe convergence at least two orders of magnitude faster than that of state-of-the-art black-box MCMC methods.
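The stochastic Newton idea can be illustrated with a small sketch: propose from a Gaussian centered at the local Newton step, with the inverse Hessian of the negative log-density as its covariance. This is a generic illustration assuming a dense, positive-definite Hessian; the authors' actual method uses low-rank Hessian approximations, which are not shown here. For an exactly Gaussian target the proposal coincides with the target itself, so every draw is accepted, regardless of dimension.

```python
import numpy as np

def stochastic_newton_step(x, log_p, grad_log_p, hess_neg_log_p, rng):
    """One stochastic Newton MCMC step (illustrative sketch).

    Proposal: y ~ N(mean, H^{-1}), where mean is the full Newton step
    from the current point and H is the Hessian of -log p there
    (assumed symmetric positive definite).
    """
    def local_gaussian(z):
        H = hess_neg_log_p(z)                          # local curvature
        L = np.linalg.cholesky(H)                      # H = L L^T
        mean = z + np.linalg.solve(H, grad_log_p(z))   # Newton step
        return mean, H, L

    mean_f, H_f, L_f = local_gaussian(x)
    # Draw y ~ N(mean_f, H_f^{-1}): L^{-T} z has covariance H^{-1}.
    y = mean_f + np.linalg.solve(L_f.T, rng.standard_normal(x.shape))

    mean_r, H_r, L_r = local_gaussian(y)

    def log_q(a, mean, H, L):
        # log N(a; mean, H^{-1}) up to a dimension-only constant.
        d = a - mean
        return 0.5 * (2.0 * np.log(np.diag(L)).sum() - d @ H @ d)

    log_alpha = (log_p(y) + log_q(x, mean_r, H_r, L_r)
                 - log_p(x) - log_q(y, mean_f, H_f, L_f))
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False
```

For the near-Gaussian posteriors targeted in the abstract, the local Gaussian is an excellent proposal, which is the intuition behind the dimension-independent convergence claimed above.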

Abstract Author(s): Carsten Burstedde, Omar Ghattas, James Martin, and Lucas Wilcox