Invariant and Hierarchical Computation in Human Auditory Cortex

Alex Kell, Massachusetts Institute of Technology


Just by listening, humans infer a host of useful information about events in the world. Much is known about peripheral auditory processing, but auditory cortex remains poorly understood, particularly in computational terms. Here I will talk about my work exploring computational properties of cortical responses through the lens of invariance.

I will first discuss how we developed an improved model of cortical responses by optimizing a hierarchical convolutional neural network to perform multiple real-world invariant recognition tasks. Despite never being trained to fit behavioral or neural data, this task-optimized network replicated human auditory behavior, predicted neural responses, and revealed a cortical processing hierarchy, with distinct network layers mimicking responses in distinct parts of auditory cortex.
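To make the modeling approach concrete, below is a minimal sketch of a branched multi-task convolutional network of the kind described above: a shared front end feeding task-specific branches, trained only on recognition tasks rather than fit to neural data. The input shape, layer sizes, task heads, and class counts are illustrative assumptions, not the published architecture.

```python
# Sketch of a multi-task hierarchical CNN: shared early layers, with
# separate branches for two illustrative recognition tasks. All sizes
# are assumptions for demonstration purposes.
import torch
import torch.nn as nn

class BranchedAudioCNN(nn.Module):
    def __init__(self, n_words=500, n_genres=40):
        super().__init__()
        # Shared trunk operating on a cochleagram-like time-frequency
        # input (1 channel; e.g. 256 frequency bins x 200 time frames).
        self.shared = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Task-specific branches: one head per recognition task.
        self.word_branch = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_words),
        )
        self.genre_branch = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_genres),
        )

    def forward(self, cochleagram):
        h = self.shared(cochleagram)
        return self.word_branch(h), self.genre_branch(h)

if __name__ == "__main__":
    net = BranchedAudioCNN()
    x = torch.randn(8, 1, 256, 200)               # batch of cochleagrams
    word_logits, genre_logits = net(x)
    print(word_logits.shape, genre_logits.shape)  # (8, 500), (8, 40)
```

Once trained, the activations of individual layers in a network like this can be regressed against measured neural responses, which is what allows distinct layers to be compared with distinct cortical regions.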

Motivated in part by analyses probing the network's representations, I will then describe our work studying the neural basis of a central challenge in everyday listening: hearing sources of interest embedded in real-world background noise (e.g., a bustling coffee shop, crickets chirping). We used fMRI to compare voxel responses to natural sounds presented with and without real-world background noise, and found that non-primary auditory cortex is substantially more noise-robust than primary areas. These results demonstrate a representational consequence of auditory hierarchical organization.
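One simple way to quantify the kind of noise robustness described above is to correlate each voxel's responses across sounds presented in quiet with its responses to the same sounds in background noise. The sketch below illustrates this analysis; the function name, variable names, and data shapes are assumptions for illustration, not the study's exact pipeline.

```python
# Sketch of a per-voxel noise-robustness measure: Pearson correlation,
# across sounds, between responses in quiet and in background noise.
import numpy as np

def noise_robustness(resp_clean, resp_noisy):
    """Per-voxel Pearson correlation across sounds.

    resp_clean, resp_noisy: arrays of shape (n_sounds, n_voxels) holding
    each voxel's response to each sound in quiet vs. in background noise.
    Returns an (n_voxels,) array; values near 1 indicate noise-robust voxels.
    """
    c = resp_clean - resp_clean.mean(axis=0)
    n = resp_noisy - resp_noisy.mean(axis=0)
    num = (c * n).sum(axis=0)
    den = np.sqrt((c ** 2).sum(axis=0) * (n ** 2).sum(axis=0))
    return num / den

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_sounds, n_voxels = 30, 100
    clean = rng.standard_normal((n_sounds, n_voxels))
    noisy = 0.8 * clean + 0.6 * rng.standard_normal((n_sounds, n_voxels))
    r = noise_robustness(clean, noisy)
    print(r.mean())  # higher values = more noise-robust responses
```

Comparing such robustness values between primary and non-primary voxels is what reveals the difference between the two stages of the hierarchy.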

Together, this work suggests a multi-stage hierarchy of auditory cortical computation and begins to characterize the properties of those computations.

Abstract Author(s): Alexander Kell