Alexander Williams

Stanford University

It's not surprising that Alexander Williams studies how brains work: His mother is a school psychologist and his father a psychology professor.

“There were a lot of textbooks lying around his office that I would sneak out and read,” says Williams, now a Stanford University doctoral candidate in theoretical neuroscience. More importantly, Williams appreciated the independence his father had to pursue research.

In high school, Williams, a Department of Energy Computational Science Graduate Fellowship (DOE CSGF) recipient, was drawn to biology. But at Bowdoin College he also was attracted to the notion that he could use computational models to “ask questions and pose theories in a more precise way than in other areas in biology.” That led him to two years in the laboratory of Brandeis University’s Eve Marder, where Williams helped model the nervous systems of crustaceans, archetypal organisms backed by decades of neural activity data.

But Williams didn’t enjoy – and, he admits, wasn’t good at – laboratory bench work. He realized “that if you’re asking a good scientific question and building the correct model, you can still get the same feeling of discovery” from running a simulation.

Now, with Stanford’s Surya Ganguli, Williams creates techniques to analyze results from neuroscience experiments. Today researchers can record the activity of hundreds or thousands of neurons, but “there’s a big question in the field of what to do with all those data,” Williams says. The mathematical methods he develops will provide statistical descriptions that serve as foundations for computational models.

“You could easily hook up a thousand or more model neurons into a network and they’ll do something, but how do you validate that model?” Williams asks. “You have to understand the statistics of the biological data set in order to constrain the model so it tells you something useful.”

Williams: Tensors

In a June 2018 paper published in the journal Neuron, Williams and a team of researchers described a technique to detect indications of long-term learning in neuron activity recorded while the subjects – in this case, primates or mice – repeatedly performed a task, such as reaching for an object or running through a maze. These signs of enduring change often are difficult to identify amid the noise of short-term neuron behavior.

Researchers usually average neural activity from multiple trials to find patterns. Williams and other researchers seek methods that precisely estimate neural activity every time the organism executes the behavior, an approach called single-trial analysis.
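To make the contrast concrete, here is a schematic NumPy sketch; the array shapes and Poisson spike counts are made up for illustration. Trial averaging collapses the trial axis into one average response per neuron, while single-trial analysis keeps every repetition.

```python
import numpy as np

# Hypothetical recording: spikes[n, t, k] holds the activity of
# neuron n at time point t on trial k.
rng = np.random.default_rng(0)
spikes = rng.poisson(2.0, size=(50, 80, 30)).astype(float)

# Conventional trial averaging: collapse the trial axis, leaving one
# average time course per neuron and discarding trial-to-trial variation.
trial_average = spikes.mean(axis=2)  # shape (50 neurons, 80 time points)

# Single-trial analysis instead models all 30 per-trial responses,
# preserving the across-trial structure that may reflect learning.
per_trial = spikes                   # shape (50, 80, 30), kept intact
```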

Tensor component analysis (TCA), the method Williams and his colleagues developed, organizes the trial data into a three-dimensional array – neurons by time points by trials, much like a 3-D graph with x, y and z axes – and extracts three interconnected sets of descriptions from it: factors reflecting assemblies of neurons; time-based features reflecting the rapid circuit activity that mediates perceptions, thoughts and actions; and trial factors describing long-term learning and changes in thinking from trial to trial. Separating the data along those three axes lets the researchers mathematically identify variation across trials, characterizing longer-term changes.
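Mathematically, TCA builds on the canonical polyadic (CP) decomposition of that neurons-by-time-by-trials array. The sketch below illustrates the general technique rather than the team’s actual code: it fits such a model by alternating least squares in NumPy, and the function name cp_als, the random initialization and the fixed iteration count are all simplifying assumptions.

```python
import numpy as np

def cp_als(X, rank, n_iters=100, seed=0):
    """Fit a rank-R CP (tensor component) model to a 3-D array X of
    shape (neurons, time points, trials), so that
    X[n, t, k] ~= sum_r A[n, r] * B[t, r] * C[k, r]."""
    rng = np.random.default_rng(seed)
    N, T, K = X.shape
    A = rng.standard_normal((N, rank))  # neuron (assembly) factors
    B = rng.standard_normal((T, rank))  # temporal (within-trial) factors
    C = rng.standard_normal((K, rank))  # trial (across-trial) factors
    for _ in range(n_iters):
        # Update each factor matrix in turn while holding the other
        # two fixed; each update is an ordinary least-squares solve.
        M = np.einsum('tr,kr->tkr', B, C).reshape(-1, rank)
        A = np.linalg.lstsq(M, X.reshape(N, -1).T, rcond=None)[0].T
        M = np.einsum('nr,kr->nkr', A, C).reshape(-1, rank)
        B = np.linalg.lstsq(M, X.transpose(1, 0, 2).reshape(T, -1).T,
                            rcond=None)[0].T
        M = np.einsum('nr,tr->ntr', A, B).reshape(-1, rank)
        C = np.linalg.lstsq(M, X.transpose(2, 0, 1).reshape(K, -1).T,
                            rcond=None)[0].T
    return A, B, C

# Toy demonstration: one latent component whose strength grows
# steadily across trials, mimicking slow learning.
rng = np.random.default_rng(1)
a = rng.random(50)                     # neuron loadings
b = np.sin(np.linspace(0, np.pi, 80))  # within-trial time course
c = np.linspace(0.2, 1.0, 30)          # slow across-trial ramp
X = np.einsum('n,t,k->ntk', a, b, c) + 0.05 * rng.standard_normal((50, 80, 30))
A, B, C = cp_als(X, rank=1)
# C[:, 0] now traces the across-trial ramp (up to scale and sign) -
# in real data, this trial factor is where slow learning would appear.
```

In practice, established libraries such as TensorLy offer more robust CP solvers; the point of the sketch is only the structure of the factorization.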

The team used TCA on data from an artificial neural network trained to detect movement in changing images of dots and from neural activity in mice solving a maze and macaques performing a reaching task. “We wanted to show that this method could be used broadly” rather than focus on a particular application, Williams says. Although the method was given no information about the specific task the animals performed and worked from the neural recordings alone, it identified activity patterns that correlated with particular actions.

The method relies on techniques Williams explored during his 2016 DOE CSGF practicum with Sandia National Laboratories mathematician Tamara Kolda, a coauthor on the later Neuron paper. “I came back from Sandia convinced that we needed to try this tensor decomposition approach on neural data,” Williams says. The practicum “ended up shaping my thesis directly but also gave me a really good blueprint for follow-up projects that we’re now trying to execute.”

Williams interned at Google in summer 2018 and still works there one day a week on a research collaboration that dovetails with his thesis and includes Ganguli. He is devising statistical methods that the team will use to analyze data from artificial neural networks, seeking to compare the insights with those derived from biological neural data.

The association with Google gives Williams a peek at working in the technology industry, but he’s not convinced that’s where his future lies. After graduation in June 2019, he’ll move directly into a postdoctoral research post at Stanford and then plans to follow his father into academia.

Image caption: The tensor component analysis method Alexander Williams and his colleagues applied extracts and reduces neuron activity data over time and through several trials, helping them mathematically identify variations that indicate learning behavior. Credit: Alexander Williams.