Danielle Rager took to the violin immediately when she began playing 19 years ago. Now, as a neural computation doctoral candidate at Carnegie Mellon University (CMU) in Pittsburgh, she says the instrument is “definitely my major means for exercising a different creative part of my brain.”
It’s also a reason she’s interested in how our brains interpret and react to sensory signals, the input they receive from touching, smelling, seeing and hearing the world around us. Rager, a Department of Energy Computational Science Graduate Fellowship (DOE CSGF) recipient, began studying BMIs, or brain-machine interfaces – high-tech electronic connections that let paraplegics control prosthetic limbs with their thoughts.
BMIs work fairly well, but Rager is frustrated that they’re feed-forward only. “The only feedback the patient gets on where that robotic arm is in space” is by watching it, she says. The patient’s brain can’t receive sensory information about the limb’s position or whether it’s touching something. Without that feedback, Rager says, amputees and spinal-cord injury patients are unlikely ever to manage fine-motor tasks like preparing food – or playing an instrument.
That drew Rager to study the somatosensory system, which provides tactile feedback, including proprioception, a person’s sense of her body’s position in space. At the Center for the Neural Basis of Cognition, a joint Carnegie Mellon-University of Pittsburgh institution, she worked with CMU’s Valérie Ventura to model decoding algorithms for the brain’s primary motor cortex – the region that sends commands to limbs – that also account for feedback received from the somatosensory cortex. In a paper awaiting publication, Rager argues that existing decoding algorithms may not work with new BMIs designed to provide proprioceptive feedback.
Rager recently turned to a more computationally intensive modeling project: mimicking the biology of neural networks at the cellular level to understand the basis of sensory perception. “This is fundamentally how we see, how we hear, how we feel,” she says. What she learns could influence robotics, artificial intelligence and brain research. “There’s a lot of things we need to learn about how humans do computation in order to work in these other fields.”
With the University of Pittsburgh’s Brent Doiron, Rager models the way neurons communicate via spiking – almost instantaneous sharp voltage increases. The brain interprets spikes as a code based on how quickly or when they occur. “Looking at it from afar, it’s not dissimilar from Morse code in that sense – how you send dots and dashes at proper intervals.”
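Spiking of this kind is often captured with the textbook leaky integrate-and-fire model. The sketch below is purely illustrative (standard model, made-up parameters, not Rager’s code): the membrane voltage integrates input, and crossing a threshold produces a spike, so a stronger input yields spikes at a faster rate.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a standard textbook
# model of spiking, not the model from Rager's research. Parameters
# are illustrative, not fit to data.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Return spike times (ms) for a single LIF neuron.

    The voltage decays toward v_rest while integrating input;
    crossing v_thresh emits a spike and resets the voltage.
    """
    v = v_rest
    spikes = []
    for step, i_ext in enumerate(input_current):
        # Euler step of dv/dt = (-(v - v_rest) + i_ext) / tau
        v += dt * (-(v - v_rest) + i_ext) / tau
        if v >= v_thresh:
            spikes.append(step * dt)   # record the spike time
            v = v_reset                # reset after spiking
    return spikes

# A stronger constant drive produces more frequent spikes: the timing
# and rate of spikes, not their shape, carry the information.
weak = simulate_lif([20.0] * 200)
strong = simulate_lif([40.0] * 200)
```

Real cortical models add many biological details, but this is the basic currency of the “dots and dashes” Rager describes.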
Rager’s research looks at how the brain’s wiring architecture – its network of synaptic connections – contributes to the spiking patterns that neuron populations produce. Her models concentrate on the regions, or subnetworks, within the brain’s 100 billion neurons that conduct specific tasks, such as detecting the contours of a visual scene.
“You can think of this as a graph problem: How are these millions of neurons, even in a single subnetwork, connected?” Rager says. “The graph architecture will determine the electrical dynamics of the overall network – how spikes are able to be sent” through it. Certain spiking dynamics and, by extension, graph architectures enable computations the brain needs to interpret such things as somatosensory information.
Rager’s models are mechanistic representations of brain microcircuits – specialized graphs of up to 100,000 neurons that perform a highly targeted task. Although relatively small, these models still are computationally demanding. Each cell is essentially its own circuit, and the simulation must run repeatedly, each time with different parameters, to reveal their influence on the output.
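That repeated-run workflow amounts to a parameter sweep: one simulation per point on a grid of parameter settings. The sketch below uses a hypothetical single-cell stand-in for the simulation, not Rager’s microcircuit code, to show the pattern.

```python
# Hypothetical parameter-sweep sketch: run one small simulation per
# parameter combination to map how each setting shapes the output.
# The "model" here is a stand-in leaky integrate-and-fire cell, not
# the microcircuit models from Rager's research.

from itertools import product

def spike_count(drive, tau, steps=200, dt=1.0,
                v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Count threshold crossings of one simulated cell."""
    v, count = v_rest, 0
    for _ in range(steps):
        # Integrate input; spike and reset on crossing threshold.
        v += dt * (-(v - v_rest) + drive) / tau
        if v >= v_thresh:
            count += 1
            v = v_reset
    return count

# Sweep the parameter grid. On an HPC system, each grid point would
# typically run as an independent parallel job.
drives = [10.0, 20.0, 40.0]
taus = [10.0, 20.0]
results = {(d, t): spike_count(d, t) for d, t in product(drives, taus)}
```

With 100,000 cells per graph, many time steps per run and many runs per sweep, the cost multiplies quickly, which is why this class of modeling pushes toward high-performance computing.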
Rager’s 2015 practicum at Lawrence Berkeley National Laboratory considered similar behavior. Computational neuroscientist Kristofer Bouchard studies auditory system data recorded via electrocorticography (ECoG), in which surgeons place an array of electrodes directly on the brain’s surface. ECoG is less invasive than BMI implants but provides information pooled from the group of neurons under each electrode rather than from a single neuron. That makes it harder to understand how much of a microcircuit neuron’s response is due to its preferential, or tuned, reaction to a stimulus (such as a specific tone) versus information it receives via recurrent connections to other microcircuit neurons.
Rager worked on a computational model to help decouple these factors and tested it on small neural data sets. The simulation is computationally demanding, requiring parallel processing on high-performance computing (HPC) systems. To parallelize the code, she worked with a computer scientist who completed the job after her practicum ended.
The experience helped Rager appreciate the national laboratories’ collaborative atmosphere and sparked a deeper interest in HPC. Those influences may lead her to pursue research and development positions at a lab or in industry after graduation in 2019.
Image caption: Jan Scheuermann, who lost the use of her limbs to a degenerative disease, controls a robotic arm using signals decoded directly from neural activity in her brain’s motor cortex. Danielle Rager’s research explored the impact of proprioceptive feedback, or the nervous system's electrical encoding of limb position in space, on Scheuermann's decoded motor commands. Credit: University of Pittsburgh.