Jenelle Feather

Massachusetts Institute of Technology

Although Jenelle Feather had always liked math and science, a summer program at the Pennsylvania Governor’s School for the Sciences at Carnegie Mellon University set her on the path toward a research career. The Department of Energy Computational Science Graduate Fellowship (DOE CSGF) recipient took five-week courses in physics, math, biology and computer science. “I like solving puzzles, and science really is just solving puzzles,” she says.

Those experiences led to an undergraduate research position at the Massachusetts Institute of Technology, where Feather filled a spot formerly occupied by one of her governor’s school teaching assistants. Chris Moore’s lab studied neural oscillations using optogenetics, shining lasers into mouse brain regions to change their activity. Feather set up experiments and made sure that the apparatus worked. Moore also gave her data-analysis side projects that involved coding, teaching her valuable skills that supported her future work.

After majoring in both physics and brain and cognitive sciences, Feather felt torn between a clinical career in medical physics and a research career in neuroscience. But she was already fascinated with sound and how the inner ear’s basilar membrane transforms the mechanical signal of pressure waves in air into a biological one. “It’s a process akin to a Fourier transform, where it will respond in different locations to different frequencies,” she says. “I just found it really, really beautiful.”

Feather took a research associate position in Nancy Kanwisher’s MIT laboratory, where she worked on a range of projects for two years, including one using mapped electrical signals to show that humans have a distinct brain region that responds to hearing song. In 2015, she began graduate work in a joint program at the University of California, Berkeley, and the University of California, San Francisco. After joining the fellowship, she transferred to Josh McDermott’s MIT laboratory, where she has been improving machine-learning models of the human auditory system.

Researchers have used neural networks (a form of machine learning) to model human brain and behavioral responses for years, and the models’ responses now match people’s far better than they once did. “Basically, we have these models,” Feather says. “We think that they're really, really good. But there are a lot of things that are a little bit unsatisfying about them.” So she and her colleagues designed experiments to compare how humans and machines process inputs – images or sound – and produce a behavioral response.

Both human perception and deep neural networks have invariances – ways in which an input can change while the output stays the same. To study these invariances, researchers use metamers, stimuli that contain different information but are perceived in the same way. A classic example is the color metamer: The typical human eye encodes color with three cone types roughly tuned to red, green and blue wavelengths, so two physically different mixtures of light can produce the same response in the eye and be perceived as the same color.
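
The same idea can be written down as a small calculation. The sketch below, using invented cone-sensitivity numbers rather than measured ones, constructs two different light spectra that excite three cone-like sensors identically – a toy version of a color metamer.

```python
import numpy as np

# Toy illustration of a color metamer: two physically different light spectra
# that produce identical responses in three cone-like sensors. The sensitivity
# values are made-up numbers, not measured human cone sensitivities.

# Rows: long-, medium- and short-wavelength "cone" sensitivities over four
# coarse wavelength bins.
cone_sensitivity = np.array([
    [0.1, 0.3, 0.9, 0.7],   # L cone
    [0.2, 0.8, 0.6, 0.2],   # M cone
    [0.9, 0.4, 0.1, 0.0],   # S cone
])

spectrum_a = np.array([0.5, 0.2, 0.6, 0.1])       # one mixture of light

# Any spectrum that differs from spectrum_a by a vector in the null space of
# the sensitivity matrix produces identical cone responses (ignoring, for
# simplicity, the physical requirement that light intensities be nonnegative).
null_direction = np.linalg.svd(cone_sensitivity)[2][-1]
spectrum_b = spectrum_a + 0.3 * null_direction    # a physically different spectrum

print(cone_sensitivity @ spectrum_a)              # responses to spectrum A
print(cone_sensitivity @ spectrum_b)              # identical responses to spectrum B
```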

Feather and McDermott developed a series of images and sounds that were metamers for machine-learning models of human perceptual systems. When they tested these stimuli on people, humans failed to recognize the synthetic versions generated from late stages of a neural network, even though the models still recognized them. The work highlights key differences between human and machine invariances and suggests that these models could be improved.
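
In broad strokes, a model metamer can be generated by holding a trained network fixed and optimizing a new input until its activations at a chosen layer match those of a reference stimulus. The sketch below illustrates that general procedure with a tiny placeholder network; it is not the specific models or optimization settings Feather and McDermott used.

```python
import torch
import torch.nn as nn

# Minimal sketch of generating a "model metamer": hold a network fixed and
# optimize a fresh input until its activations at a chosen layer match those
# of a reference stimulus. The tiny untrained network, layer choice and
# optimizer settings are placeholders, not the actual models from the study.

torch.manual_seed(0)

net = nn.Sequential(                        # stand-in for a trained recognition model
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
)
for p in net.parameters():
    p.requires_grad_(False)                 # the model stays fixed; only the input changes

stage = net[:4]                             # a "late" stage of this toy model
reference = torch.rand(1, 3, 32, 32)        # the natural stimulus to be matched
target = stage(reference).detach()

metamer = torch.rand(1, 3, 32, 32, requires_grad=True)   # start from noise
optimizer = torch.optim.Adam([metamer], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = ((stage(metamer) - target) ** 2).mean()        # match the layer's activations
    loss.backward()
    optimizer.step()

# "metamer" now drives the chosen stage (and everything downstream) much like
# the reference does, even though the two inputs can look entirely different
# to a person.
```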

“They might be good models for some purposes, but if we want to have a complete representation of a human sensory system, something is lacking,” Feather says. They’ve shown that one way to improve the models is to reduce aliasing, a distortion introduced when the networks downsample their inputs, a common step that makes them easier to train. In addition to helping researchers understand how brains process the auditory world, their results could improve speech recognition platforms and hearing aids.
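
A standard way to reduce aliasing is to low-pass filter (blur) a signal before subsampling it rather than striding over it directly. The sketch below illustrates that general fix with a simple blur kernel; it is one common technique, not necessarily the specific modification used in Feather’s models.

```python
import torch
import torch.nn.functional as F

# One common way to reduce aliasing in a network's downsampling steps:
# low-pass filter (blur) the feature map before subsampling, instead of
# striding over the raw signal. The 3x3 binomial kernel is an illustrative
# choice, not the specific fix used in the models described here.

def strided_downsample(x):
    return x[:, :, ::2, ::2]                           # naive stride-2 subsampling; aliases

def antialiased_downsample(x):
    channels = x.shape[1]
    blur = torch.tensor([[1., 2., 1.],
                         [2., 4., 2.],
                         [1., 2., 1.]]) / 16.0         # small binomial low-pass kernel
    blur = blur.reshape(1, 1, 3, 3).repeat(channels, 1, 1, 1)
    x = F.conv2d(x, blur, padding=1, groups=channels)  # blur each channel separately
    return x[:, :, ::2, ::2]                           # then subsample

x = torch.rand(1, 4, 16, 16)
print(strided_downsample(x).shape, antialiased_downsample(x).shape)
```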

The work – particularly training audio models – is computationally intensive, so to process the data efficiently Feather and her colleagues parallelize the input in different ways. For this work, Feather has used Oak Ridge National Laboratory’s Summit supercomputer and Satori, an IBM cluster at MIT with a similar architecture.
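
One simple form of input parallelism is data parallelism, in which each training batch is split across the GPUs of a node. The sketch below shows that pattern with PyTorch’s DataParallel wrapper and dummy data; Feather’s actual Summit and Satori runs may distribute the work differently, for example across many nodes.

```python
import torch
import torch.nn as nn

# Minimal sketch of data parallelism: split each batch across available GPUs
# with torch.nn.DataParallel. The model, data and hyperparameters are dummies
# illustrating the general pattern, not the actual training setup.

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)                 # replicas each handle a slice of the batch
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.rand(256, 128, device=device)       # dummy batch standing in for audio features
labels = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)              # each GPU processes part of the batch
loss.backward()
optimizer.step()
```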

For her 2018 Lawrence Livermore National Laboratory practicum, Feather examined whether a generative adversarial network – whose discriminator learns to classify inputs as real or fake – could be trained to detect outliers in medical images. Such a tool could flag abnormal images and help doctors diagnose disease. During summer 2019, she interned at Google and worked on speech synthesis platforms.
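
Conceptually, the outlier-detection idea from the practicum can use the discriminator’s “realness” score as an anomaly signal: images the network scores as unlikely to be real get flagged for review. The sketch below illustrates that idea with a tiny untrained placeholder discriminator and an arbitrary threshold, not the networks or data from the practicum itself.

```python
import torch
import torch.nn as nn

# Conceptual sketch of GAN-based outlier detection: after training on normal
# images, inputs the discriminator scores as unlikely to be real are flagged.
# The tiny untrained discriminator and the threshold are placeholders.

discriminator = nn.Sequential(           # stands in for a trained GAN discriminator
    nn.Flatten(),
    nn.Linear(28 * 28, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),      # output: probability the image is "real"/normal
)

def flag_outliers(images, threshold=0.5):
    """Return a boolean mask marking images the discriminator finds abnormal."""
    with torch.no_grad():
        realness = discriminator(images).squeeze(1)
    return realness < threshold

batch = torch.rand(8, 1, 28, 28)         # dummy batch standing in for medical scans
print(flag_outliers(batch))
```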

After graduating in late 2021, Feather expects to take a postdoctoral post and pursue an academic career. But her core interest is in research, and she’s open to opportunities in industry or at the national laboratories.

Caption: In a recent study, Jenelle Feather and advisor Josh McDermott used images and audio clips derived from layers of a neural network to study how humans and machines perceive inputs called model metamers. For the imaging study they used three model metamers that a classification algorithm recognized as a watch: an original photo of a watch (left), an image an early layer of the classification algorithm produced (middle), and a final image from a late stage of the neural network (right). Humans can only recognize the first two images; the third appears to be noise. In their auditory study, Feather and McDermott used sound model metamers such as these clips of the words "model human perception": the original spoken audio clip (top left), a clip generated by the early layers of a neural network (top right), and one generated from the late stages of the neural network (bottom left). As with the visual images, the machines recognize the words within all three, while humans can only recognize words in the first two. Credit: Jenelle Feather.