Over the past decade, unparalleled growth in data availability and computational power has driven extensive development of new algorithms and statistical methods capable of exploiting these resources. From deep learning to variational inference, this growth has produced new generations of statistical models and algorithms whose complexity and size are ever-expanding. Rigorous understanding of the mechanisms of these new, complex models, however, has lagged behind. Now, more than ever, there is a need for new approaches to understanding these mechanisms and to guiding the development of further algorithms.

Using a set of recently developed tools from the field of optimal transport, we present a new, rigorous methodology for understanding and fitting a suite of recently developed models in machine learning. We show how current algorithmic procedures, viewed through the lens of optimal transport, can be interpreted as discretizations of an infinite-dimensional gradient flow, and how this interpretation yields new, better-performing methods, along with techniques that are robust to adversarial manipulation.
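As a hedged illustration of the kind of interpretation described above (not the authors' specific method): the classical unadjusted Langevin algorithm can be read as a time discretization of the Wasserstein gradient flow of the KL divergence to a target density. The sketch below, with an assumed standard Gaussian target, evolves a particle cloud under this discretization and shows it approaching the target distribution.

```python
import numpy as np

# Illustrative sketch only: Langevin dynamics as an Euler discretization
# of the Wasserstein-2 gradient flow of KL(rho || pi). Target pi is the
# standard 1-D Gaussian, so grad log pi(x) = -x. All names here are
# hypothetical choices for the example, not taken from the abstract.

def grad_log_pi(x):
    return -x  # score function of N(0, 1)

rng = np.random.default_rng(0)
step = 0.01            # time-discretization step of the gradient flow
n_particles = 5000
x = rng.uniform(-5.0, 5.0, n_particles)  # initial particle cloud

for _ in range(2000):
    noise = rng.standard_normal(n_particles)
    # Drift follows the score; the injected noise realizes the
    # entropy (diffusion) term of the flow.
    x = x + step * grad_log_pi(x) + np.sqrt(2.0 * step) * noise

# After many steps the empirical mean and variance of the particle
# cloud should be close to those of the target N(0, 1).
print(float(x.mean()), float(x.var()))
```

The step size controls the discretization error of the flow; shrinking it brings the stationary particle distribution closer to the target, at the cost of slower mixing.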

Author(s): Carson Kent, Jose Blanchet
University: Stanford University