The Deterministic Information Bottleneck Method

Daniel Strouse, Princeton University


A fundamental problem living organisms face is prediction. Accurate predictions of the locations of food and predators, for example, are matters of life and death. The basis of those predictions is, of course, past sensory experience. Since an individual’s memory resources are limited and costly, however, there is a tradeoff between memory and prediction. The information bottleneck (IB) method (Tishby, Pereira, and Bialek, 2000) formulates this tradeoff as a mathematical optimization problem with an information theoretic cost function. The cost function encourages storing as few bits from the past as possible, while selectively maintaining those bits that are most predictive of the future. The optimization is performed over the stochastic encoding distribution q(t|x), where x and t are random variables representing past observations and encoded memory, respectively. Except in a select few simple cases for which an analytic solution is available, the problem is solved via an iterative algorithm.
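
For concreteness, with y denoting the future variable to be predicted and β ≥ 0 a parameter setting the tradeoff, the IB cost function of Tishby, Pereira, and Bialek (2000) can be written as

    \min_{q(t|x)} \; L_{\mathrm{IB}} = I(X;T) - \beta \, I(T;Y) ,

where I(X;T) counts the bits of the past retained in memory and I(T;Y) the bits that are predictive of the future. The iterative algorithm alternates between updating the encoder according to its self-consistent form,

    q(t|x) \propto q(t) \, \exp\!\big( -\beta \, D_{\mathrm{KL}}[\, p(y|x) \,\|\, q(y|t) \,] \big) ,

and recomputing the induced distributions q(t) and q(y|t).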

Here, we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function that more accurately represents the goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of turning the stochastic encoding distribution q(t|x) into a deterministic encoding function t(x). Next, we introduce an iterative algorithm for solving the DIB problem. Finally, we compare the IB and DIB methods and discuss their relative advantages and appropriate uses.
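
Consistent with the description above, the memory cost in the DIB formulation is the entropy H(T), the raw number of bits needed to store the encoded memory, rather than the mutual information I(X;T):

    \min_{q(t|x)} \; L_{\mathrm{DIB}} = H(T) - \beta \, I(T;Y) .

Since I(X;T) = H(T) − H(T|X), dropping the conditional-entropy "credit" H(T|X) removes any reward for keeping the encoder noisy, which is the intuition behind the deterministic solution. As a concrete illustration of the kind of iterative scheme involved, below is a minimal Python sketch of a DIB-style solver for discrete variables; the function name, the finite-alphabet setup, and details such as the random hard initialization are illustrative assumptions rather than the authors' reference implementation. It alternates a deterministic assignment t(x) = argmax_t [ log q(t) − β D_KL(p(y|x) ‖ q(y|t)) ] with updates of q(t) and q(y|t).

import numpy as np

def dib(p_xy, n_t, beta, n_iters=100, seed=0):
    """Sketch of a deterministic information bottleneck solver.

    p_xy : joint distribution over (x, y), shape (n_x, n_y);
           assumed to have strictly positive marginals p(x)
    n_t  : number of codewords t available to the encoding
    beta : tradeoff parameter (larger beta favors prediction)
    """
    rng = np.random.default_rng(seed)
    n_x, n_y = p_xy.shape
    p_x = p_xy.sum(axis=1)                     # marginal p(x)
    p_y_given_x = p_xy / p_x[:, None]          # conditional p(y|x)
    eps = 1e-12                                # guards log(0)

    # Random initial hard assignment t(x).
    t_of_x = rng.integers(n_t, size=n_x)

    for _ in range(n_iters):
        # q(t): probability mass of each codeword under p(x).
        q_t = np.array([p_x[t_of_x == t].sum() for t in range(n_t)])

        # q(y|t): predictive distribution of each codeword.
        q_y_given_t = np.zeros((n_t, n_y))
        for t in range(n_t):
            if q_t[t] > 0:
                q_y_given_t[t] = p_xy[t_of_x == t].sum(axis=0) / q_t[t]

        # KL(p(y|x) || q(y|t)) for every pair (x, t).
        kl = np.array([
            [np.sum(p_y_given_x[x] * (np.log(p_y_given_x[x] + eps)
                                      - np.log(q_y_given_t[t] + eps)))
             for t in range(n_t)]
            for x in range(n_x)
        ])

        # Deterministic update: t(x) = argmax_t [log q(t) - beta * KL].
        new_t = np.argmax(np.log(q_t + eps)[None, :] - beta * kl, axis=1)

        if np.array_equal(new_t, t_of_x):
            break                              # assignments converged
        t_of_x = new_t

    return t_of_x

In this sketch, a call such as dib(p_xy, n_t=4, beta=5.0) returns a hard codeword label for each value of x, and sweeping beta traces out a memory-prediction tradeoff curve.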

Abstract Author(s): D.J. Strouse, David Schwab