Matthew Carbone

Columbia University

When Matthew Carbone faces difficult scientific choices, he has a default strategy: pick both. As an undergraduate, for example, instead of choosing between his two favorite sciences, chemistry and physics, he pursued a double degree in both at the University of Rochester.

That approach continues to guide his research path as a graduate student and Department of Energy Computational Science Graduate Fellowship (DOE CSGF) recipient. At Columbia University, he works on condensed matter theory. Meanwhile, Carbone maintains an equally important set of machine-learning projects that grew out of a summer of research at Brookhaven National Laboratory. “My practicum started in 2018 and never ended,” he says.

In David Reichman’s group at Columbia, Carbone has been examining the theory behind various excitations in transition metal dichalcogenides. These semiconducting materials can form atomically thin layers and could be useful for a range of applications, particularly optoelectronics. He has also used computation to probe the GW approximation, a theoretical framework for describing and solving quantum many-body systems. Recent research suggests that this 50-year-old approximation could contain a fundamental physical error, and Carbone hopes to test that hypothesis with intensive simulations that include a correction.

For his Brookhaven practicum, Carbone wanted to branch out and work in a different field, machine learning, although he had no formal training in the subject. “I learned it all from Coursera,” he says, “but my practicum advisor, Shinjae Yoo, has been and continues to be nothing but supportive of my learning process.”

[Image: neural-network illustration from Carbone’s Brookhaven work; full caption below.]

In their initial project, Carbone, Yoo and their BNL colleagues trained a classification algorithm on X-ray absorption spectroscopy (XAS) data to see if they could predict the geometries of absorbing sites from new XAS data. It worked. Researchers running these experiments could use such a tool to quickly translate the absorption peaks in their results into information about an absorbing molecule and how its geometry changes, for example, during a chemical reaction. The team published those results in Physical Review Materials in early 2019.
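To give a flavor of that workflow, the minimal sketch below, which is not the team’s actual code, model or data, trains a small neural-network classifier to map discretized spectra onto absorbing-site geometry labels. The synthetic spectra, peak positions and class labels are invented purely for illustration.

```python
# Minimal sketch: classifying absorbing-site geometry from XAS-like spectra.
# The "spectra" here are synthetic Gaussian peaks standing in for real data;
# geometry labels and peak positions are invented for demonstration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
energy_grid = np.linspace(0.0, 1.0, 100)   # arbitrary, discretized energy axis

def synthetic_spectrum(peak_center):
    """One noisy Gaussian absorption peak on the energy grid."""
    spectrum = np.exp(-((energy_grid - peak_center) ** 2) / 0.002)
    return spectrum + 0.05 * rng.standard_normal(energy_grid.size)

# Three hypothetical geometry classes, each with a characteristic peak position.
peak_by_class = {0: 0.3, 1: 0.5, 2: 0.7}
labels = np.repeat([0, 1, 2], 200)
X = np.array([synthetic_spectrum(peak_by_class[int(c)] + 0.02 * rng.standard_normal())
              for c in labels])

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

# A small multi-layer perceptron: spectrum in, geometry class out.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```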

As the summer ended, Carbone and his BNL colleagues realized that they didn’t want their work to end, or even to move to the back burner. The fellow checked with Reichman, who encouraged him to continue alongside his Columbia research. Since then, the team has published a 2020 paper in Physical Review Letters that examined the original XAS problem in reverse: using molecular structures to predict XAS spectra. He continues to support other ongoing BNL machine-learning research and contributes to the lab’s Computational Science Initiative project to search for COVID-19 drug candidates. “This is a really cool opportunity for me because I get to work with all these exceptional scientists who are super-talented while learning a ton and, hopefully, I can contribute something really useful.”
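The structure-to-spectrum direction can be sketched in a similarly simplified way. The published work used message-passing neural networks on molecular graphs; the stand-in below instead maps a made-up, fixed-length structural descriptor onto a discretized spectrum with an off-the-shelf multi-output regressor, so every name and number in it is hypothetical.

```python
# Minimal sketch of the inverse task: structure in, spectrum out.
# A toy 10-number "structural descriptor" and a simple regressor stand in
# for the molecular graphs and message-passing networks used in the real study.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
energy_grid = np.linspace(0.0, 1.0, 100)

def fake_descriptor_and_spectrum():
    """Invent a descriptor and a spectrum whose peak depends on it."""
    descriptor = rng.uniform(0.0, 1.0, size=10)
    peak = descriptor.mean()                      # toy structure-to-peak relationship
    spectrum = np.exp(-((energy_grid - peak) ** 2) / 0.002)
    return descriptor, spectrum

pairs = [fake_descriptor_and_spectrum() for _ in range(500)]
X = np.array([d for d, _ in pairs])               # (500, 10) structural descriptors
Y = np.array([s for _, s in pairs])               # (500, 100) discretized spectra

# Multi-output regression: one predicted intensity per point on the energy grid.
model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=800, random_state=0)
model.fit(X[:400], Y[:400])
predicted = model.predict(X[400:])                # spectra for held-out "molecules"
print("predicted spectra shape:", predicted.shape)
```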

The fellowship also has given Carbone flexibility to pursue other side projects. He’s collaborated with another DOE CSGF recipient, Steven Torrisi of Harvard University, on a machine-learning project. And Carbone recently published a paper on yet another topic, entropically activated trap dynamics, with Marco Baity-Jesi, a former Reichman group postdoctoral researcher now on the faculty at Eawag, the Swiss Federal Institute of Aquatic Science and Technology.

Carbone credits fellowship support – particularly for research opportunities outside his Columbia projects – for shaping his scientific career. “The CSGF enabled me to really know what it’s like to be a real scientist.” Graduate school is an important training ground, Carbone notes, but it doesn’t always foster the same level of collaboration. He also has gained the confidence to know when he’s stuck on a problem and should seek expert help. “There's no shame in doing that. In fact, it's part of the process.”

Carbone will finish his Ph.D. in 2021. He’d like to continue collaborating with BNL and expects to pursue positions at a national laboratory or in industry. But, as with other scientific decisions, he doesn’t rule out wearing multiple hats, working “two jobs because I'm essentially already doing it.”

Image caption: With machine-learning strategies, Matthew Carbone and his Brookhaven National Laboratory colleagues have used chemical structures (lower left) to predict X-ray absorption spectra (top right) and X-ray absorption spectra to predict the geometry of the absorbing molecular site. When forecasting spectra from structures (blue arrow), they use message-passing neural networks (blue balls with double-headed red arrows). When predicting structural information from spectra (red arrow), they use multi-layer perceptrons (blue balls connected by red lines). These tools could help researchers more rapidly analyze the results of X-ray absorption experiments. Credit: Matthew Carbone.