One of the brain's most striking anatomical features is the prevalence of feedback and lateral (recurrent) connections in neural circuits. Most theoretical studies of recurrent neural networks, however, have focused on analyzing the impact of recurrence on tasks with an inherently temporal structure, rather than asking what general computations benefit from a recurrent (as opposed to feed-forward) architecture. In this work, we consider two answers to this question: first, recurrent networks can perform tasks that require repeated, local computations to propagate information over some connectivity space far more efficiently; and second, recurrent models can be trained on such tasks more easily the more we know about the structure of this local computation, in essence, the better our prior about which neurons count as local in the calculation.

In considering such computations, we build on the work of Minsky and Roelfsema studying a task (detecting edge-connected pixels) for which an efficient recurrent solution exists but which is extremely inefficient to implement in a feed-forward network. We extend this work by empirically studying how stronger priors on the structure of the computation affect an RNN's ability to learn such tasks. Finally, we consider a number of generalizations of the idea of repeated local propagation of information, including propagating multiple "tags" and more complex underlying connectivity spaces. These extensions suggest that computations of this type arise in a broad array of tasks in reinforcement learning and inference.
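As a schematic illustration of the kind of computation meant here (not the trained networks studied in the abstract), the sketch below implements edge-connectedness detection as a recurrent "tag"-propagation update: a seed pixel is tagged, and the same local rule is applied at every step, spreading the tag to neighboring "on" pixels. The function name `propagate_tag` and the toy image are illustrative choices, not part of the original work.

```python
import numpy as np

def propagate_tag(image, seed, n_steps):
    """Spread a 'tag' from a seed pixel along edge-connected 'on' pixels.

    Each recurrent step applies one shared local rule: a pixel becomes
    tagged if it is 'on' and any 4-neighbor is already tagged. A purely
    feed-forward implementation would need depth proportional to the
    longest path, whereas the recurrent update reuses a single layer
    of local, weight-shared connections.
    """
    tag = np.zeros_like(image, dtype=bool)
    tag[seed] = image[seed] > 0
    for _ in range(n_steps):
        # Gather 4-neighborhood activity via shifted copies (local update).
        nbr = np.zeros_like(tag)
        nbr[1:, :] |= tag[:-1, :]
        nbr[:-1, :] |= tag[1:, :]
        nbr[:, 1:] |= tag[:, :-1]
        nbr[:, :-1] |= tag[:, 1:]
        # A pixel joins the tagged set only if it is part of the curve.
        tag |= nbr & (image > 0)
    return tag

# Toy example: two separate curves; only the curve containing the seed is tagged.
img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 1, 0, 0],
                [0, 0, 0, 1]])
print(propagate_tag(img, seed=(0, 0), n_steps=6).astype(int))
```

The design choice worth noting is that the update is identical at every step and strictly local, so the number of recurrent iterations, not the number of distinct layers, sets how far information can travel.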

Brett Larsen, Shaul Druckmann
Stanford University