Toward a Classification of the Computational Advantage of Recurrent Neural Circuits

Brett Larsen, Stanford University


One of the brain's most striking anatomical features is the prevalence of feedback and lateral/recurrent connections in neural circuits. Despite the ubiquity of this circuit organization, most theoretical studies of recurrent neural networks have focused on analyzing their dynamics rather than asking which computations require a recurrent architecture (as opposed to a multi-layer feed-forward network). To better understand the utility of such connections, we consider a task, detection of edge-connected pixels, for which an efficient recurrent solution exists: local information is propagated iteratively to determine a global property of the system. We (1) show that implementing the same solution in a feed-forward network is extremely inefficient (i.e., requiring orders of magnitude more neurons and synaptic connections) and (2) empirically study the trainability of recurrent neural networks on this task across a range of architectures with increasingly restrictive parameter spaces.
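To make the recurrent solution concrete, the sketch below is a hypothetical illustration (not the authors' implementation) of the edge-connected pixel task: starting from active pixels on the image border, a "connected" label is propagated to neighboring active pixels, one neighbor per iteration, until convergence. Each iteration uses only local information, mirroring one time step of a recurrent circuit; unrolling the loop into separate layers suggests why a feed-forward equivalent must grow with the length of the longest connecting path. The function name `edge_connected` and the use of NumPy are assumptions for this sketch.

```python
import numpy as np

def edge_connected(image):
    """Return a boolean mask of active pixels connected to the image border.

    image: 2-D boolean array; True marks an active ("on") pixel.
    """
    connected = np.zeros_like(image, dtype=bool)
    # Seed: active pixels lying on the border are connected by definition.
    connected[0, :] = image[0, :]
    connected[-1, :] = image[-1, :]
    connected[:, 0] = image[:, 0]
    connected[:, -1] = image[:, -1]

    while True:
        # One "recurrent step": a pixel becomes connected if it is active
        # and any of its 4-neighbors is already connected.
        neighbor = np.zeros_like(connected)
        neighbor[1:, :] |= connected[:-1, :]   # label flows downward
        neighbor[:-1, :] |= connected[1:, :]   # label flows upward
        neighbor[:, 1:] |= connected[:, :-1]   # label flows rightward
        neighbor[:, :-1] |= connected[:, 1:]   # label flows leftward
        updated = connected | (image & neighbor)
        # Convergence: no pixel changed, so the global property is determined.
        if np.array_equal(updated, connected):
            return connected
        connected = updated
```

The number of iterations scales with the longest path from the border, so a feed-forward unrolling would need that many layers, each with its own copy of the local update weights, which is consistent with the inefficiency claim in the abstract.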

Abstract Author(s): Brett Larsen, Shaul Druckmann, Jonathan Amazon