Alex Kell

  • Program Year: 4
  • Academic Institution: Massachusetts Institute of Technology
  • Field of Study: Computational Neuroscience
  • Academic Advisor: Josh McDermott
  • Practicum(s):
    Lawrence Berkeley National Laboratory (2017)
  • Degree(s):
    A.B. Neuroscience, Dartmouth College, 2010
  • Personal URL: http://alexkell.org

Summary of Research

I'm interested in how we hear. From an auditory waveform we recognize what someone said, who said it, and how they felt when they said it. We infer what caused certain sounds and where unseen events occurred in the world. How does the brain achieve these computational feats in a fraction of a second? What representations and algorithms does our auditory system use? To answer these questions, I use a combination of computational models, neural measurements (primarily fMRI), and behavioral experiments.

Publications

Selected conference abstracts:

Kell A., McDermott J. Noise-robustness of cortical responses to natural sounds increases from primary to non-primary auditory cortex. San Diego, CA: Society for Neuroscience, November 2016. (Talk)

Kell A.*, Yamins D.*, Norman-Haignere S., McDermott J. Speech-trained neural networks behave like human listeners and reveal a hierarchy in auditory cortex. Salt Lake City, UT: Computational and Systems Neuroscience (COSYNE), February 2016.

Kell A.*, Yamins D.*, Norman-Haignere S., McDermott J. Functional organization of auditory cortex revealed by neural networks optimized for auditory tasks. Chicago, IL: Society for Neuroscience, October 2015. (Talk)

Kell A.*, Yamins D.*, Norman-Haignere S., Seibert D., Hong H., DiCarlo J., McDermott J. Computational similarities between visual and auditory cortex studied with convolutional neural networks, fMRI, and electrophysiology. St. Pete Beach, FL: Vision Sciences Society, May 2015.

Yamins D.*, Kell A.*, Norman-Haignere S., McDermott J. Using speech-optimized convolutional neural networks to understand auditory cortex. Salt Lake City, UT: Computational and Systems Neuroscience (COSYNE), March 2015. (Talk)

Kell A.*, Yamins D.*, Norman-Haignere S., McDermott J. Deep neural networks trained on speech tasks predict auditory cortex responses to natural sounds. Baltimore, MD: Association for Research in Otolaryngology, February 2015.

Kell A.*, Yamins D.*, Norman-Haignere S., McDermott J. Similarities between deep neural networks trained on speech tasks and human auditory cortex. Cambridge, MA: Speech and Audio in the Northeast (SANE), October 2014.

Awards

2015: Vision Sciences Society Best Student Poster Award
2015: Association for Research in Otolaryngology Travel Award
2015: Vision Sciences Society Travel Award
2014: NVIDIA Academic Hardware Donation Program (GPU donation)
2013-2014: Massachusetts General Hospital Neuroimaging Training Program Grant