Alumnus’s Code Could Be Key to Fusion Energy

A Department of Energy Computational Science Graduate Fellowship (DOE CSGF) alumnus has demonstrated that an artificial intelligence algorithm he developed can predict problems with a fusion energy reactor even if the code has never encountered data from that reactor. The advance could be key to developing a safe, clean and abundant power source.

In a paper published in the journal Nature, former fellow Julian Kates-Harbeck and colleagues also showed that adding high-dimensional reactor operation statistics to the algorithm’s training data significantly improves its ability to predict damaging plasma disruptions.

Kates-Harbeck, a fellow from 2014 to 2018, previously reported on his work with William Tang of the Princeton Plasma Physics Laboratory (PPPL) to apply artificial intelligence to doughnut-shaped fusion reactors called tokamaks. The machines use powerful magnets to contain swirling plasma – a mixture of hydrogen ions and electrons – heated to temperatures hotter than the sun’s interior. Under such conditions, the atomic nuclei fuse, releasing tremendous energy. Scientists want to harness fusion, the process that powers the sun and other stars, as a clean and nearly limitless energy source.

But turbulence and other factors can cause unpredictable disruptions that let the searing plasma escape, damaging reactor walls. Such damage could be ruinous for ITER, a $25 billion international project to build the largest tokamak ever. The reactor, now under construction in France, is expected to generate far more energy than needed to start and maintain the reaction.

Because scientists have no data from this new and entirely different tokamak, any artificial intelligence program must be able to analyze information from other reactors to predict problems on ITER.

In the Nature paper, Kates-Harbeck, Tang and former Princeton University researcher Alexey Svyatkovskiy report evidence that suggests such cross-machine predictions may be feasible. After training their code solely on data from the DIII-D National Fusion Facility in California, the researchers showed it could predict disruptions on the larger Joint European Torus (JET) in the United Kingdom. The predictive capacity improved to better than 90 percent when the training data were supplemented with a small amount of data from JET.
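
The article does not reproduce the training procedure, but the two-phase pattern it describes (train on one machine, then fine-tune briefly on a little data from another) can be sketched in a few lines. Everything below – the model, the placeholder data, the epoch counts and the learning rates – is an illustrative assumption, not the paper’s actual setup:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def fake_loader(n_shots: int) -> DataLoader:
    # Placeholder data: one feature vector and a disrupt/no-disrupt
    # label per shot, standing in for real diagnostic signals.
    x = torch.randn(n_shots, 14)
    y = torch.randint(0, 2, (n_shots,)).float()
    return DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

# Stand-in classifier; any architecture could take this place.
model = nn.Sequential(nn.Linear(14, 64), nn.ReLU(), nn.Linear(64, 1))

def train(model: nn.Module, loader: DataLoader, epochs: int, lr: float) -> None:
    """Generic supervised loop for binary disrupt/no-disrupt labels."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x).squeeze(-1), y).backward()
            opt.step()

# Phase 1: train only on shots from the source machine (DIII-D in the paper).
train(model, fake_loader(2000), epochs=20, lr=1e-3)

# Phase 2: fine-tune at a lower learning rate on a small sample of shots
# from the target machine (JET in the paper), reusing the learned weights.
train(model, fake_loader(100), epochs=5, lr=1e-4)
```

The lower learning rate in the second phase nudges the learned representations toward the new machine without erasing what was learned from the first.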

Kates-Harbeck is lead architect of the Fusion Recurrent Neural Network (FRNN), the team’s deep-learning artificial intelligence code, which analyzes masses of multidimensional, time-dependent data from diverse sources. A series of mathematical nodes processes the information, seeking a specific output – such as forecasting a plasma disruption. By iteratively analyzing data, the nodes learn to identify properties that precede a disruption.
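
The article does not show FRNN’s internals. Purely as an illustration of the idea – not the team’s actual architecture – a recurrent network that reads a multichannel time series of diagnostic signals and emits a disruption risk score at every time step might look like the following, where the signal count, the layer sizes and the choice of an LSTM are all assumptions:

```python
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    """Toy recurrent predictor: maps a sequence of diagnostic signals
    to a disruption risk score at each time step."""

    def __init__(self, n_signals: int = 14, hidden: int = 128):
        super().__init__()
        # The LSTM reads the multichannel time series step by step,
        # carrying forward a memory of the shot so far.
        self.rnn = nn.LSTM(input_size=n_signals, hidden_size=hidden,
                           num_layers=2, batch_first=True)
        # A linear head turns each hidden state into a scalar score.
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_signals) -> scores: (batch, time)
        out, _ = self.rnn(x)
        return self.head(out).squeeze(-1)

model = DisruptionPredictor()
shots = torch.randn(8, 500, 14)        # 8 shots, 500 time steps, 14 signals
risk = torch.sigmoid(model(shots))     # per-time-step disruption probability
```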

To improve FRNN’s accuracy, the researchers also trained it on profile data, a more complex kind of information than that used in previous tests. Profile data capture how plasma properties such as density vary with radius, measured from the plasma’s core to its edge.
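
One standard way to feed such one-dimensional profiles into a network built around scalar signals is to compress each radial profile into a short feature vector first. The sketch below does that with a small 1-D convolutional encoder; the encoder design and every dimension are illustrative assumptions rather than the paper’s exact method:

```python
import torch
import torch.nn as nn

class ProfileEncoder(nn.Module):
    """Compress a 1-D radial profile (e.g., density vs. radius)
    into a short feature vector at each time slice."""

    def __init__(self, out_dim: int = 16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.AvgPool1d(4),
            nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over radius -> (batch, 16, 1)
        )
        self.proj = nn.Linear(16, out_dim)

    def forward(self, profile: torch.Tensor) -> torch.Tensor:
        # profile: (batch, n_radial_points) -> features: (batch, out_dim)
        z = self.conv(profile.unsqueeze(1)).squeeze(-1)
        return self.proj(z)

encoder = ProfileEncoder()
density_profile = torch.randn(8, 64)   # 8 time slices, 64 radial points
scalars = torch.randn(8, 14)           # 14 ordinary scalar signals
features = torch.cat([scalars, encoder(density_profile)], dim=1)
# `features` would then feed the recurrent stage in place of scalars alone.
```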

The software was able to predict disruptions within the 30-millisecond time frame ITER will require. It is also nearing ITER’s goal of correctly predicting disruptions 95 percent of the time while producing false alarms fewer than 3 percent of the time.
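
In concrete terms, those two figures suggest a per-shot scoring rule along these lines: an alarm on a disruptive shot counts as a success only if it fires at least 30 milliseconds before the disruption, and any alarm on a non-disruptive shot counts as a false alarm. The function and data layout below are hypothetical, written only to make the rule explicit:

```python
def evaluate_alarms(shots, threshold: float, lead_time_ms: float = 30.0):
    """Score per-shot alarms.

    Each shot is (risk, dt_ms, disrupt_ms): a per-time-step risk trace,
    the sampling interval in milliseconds, and the disruption time in
    milliseconds (None for a non-disruptive shot).
    """
    caught = missed = false_alarms = clean = 0
    for risk, dt_ms, disrupt_ms in shots:
        # First time step at which the risk trace crosses the threshold.
        alarm_idx = next((i for i, r in enumerate(risk) if r > threshold), None)
        alarm_ms = None if alarm_idx is None else alarm_idx * dt_ms
        if disrupt_ms is None:
            # Non-disruptive shot: any alarm is a false alarm.
            if alarm_ms is None:
                clean += 1
            else:
                false_alarms += 1
        elif alarm_ms is not None and disrupt_ms - alarm_ms >= lead_time_ms:
            # Alarm led the disruption by at least the required 30 ms.
            caught += 1
        else:
            missed += 1
    tp_rate = caught / max(caught + missed, 1)
    fa_rate = false_alarms / max(false_alarms + clean, 1)
    return tp_rate, fa_rate

shots = [
    ([0.1, 0.2, 0.8, 0.9, 0.9, 0.9], 10.0, 60.0),  # alarm at 20 ms, disruption at 60 ms
    ([0.1, 0.1, 0.2, 0.1], 10.0, None),            # clean shot, no alarm
]
print(evaluate_alarms(shots, threshold=0.5))       # -> (1.0, 0.0)
```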

Training neural networks demands tremendous computer power and time. To speed the calculations, Kates-Harbeck wrote a distributed training algorithm that uses multiple graphics processing units (GPUs), specially designed chips that accelerate calculations. Besides Princeton University’s Tiger GPU cluster, the team has tested FRNN on Oak Ridge National Laboratory’s Titan and Summit supercomputers and on TSUBAME 3.0, a machine at the Tokyo Institute of Technology. The tests found the algorithm scales well: computation time falls nearly in inverse proportion to the number of GPUs used.
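
The distributed algorithm itself is not reproduced in the article. A common pattern that produces this kind of scaling is synchronous data parallelism, in which each GPU trains on its own slice of the shots and gradients are averaged across GPUs at every step. A minimal PyTorch sketch follows, with a placeholder model and data, and no claim to match FRNN’s actual implementation:

```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU; launched with `torchrun --nproc_per_node=<gpus>`.
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)

    # Stand-in model; FRNN's real network is recurrent and much larger.
    model = nn.Sequential(nn.Linear(14, 64), nn.ReLU(), nn.Linear(64, 1)).cuda(rank)
    model = DDP(model, device_ids=[rank])   # averages gradients across GPUs
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for _ in range(100):
        # Each rank draws its own slice of the data (random placeholders here),
        # so the effective batch size grows with the number of GPUs.
        x = torch.randn(32, 14, device=rank)
        y = torch.randint(0, 2, (32,), device=rank).float()
        opt.zero_grad()
        loss_fn(model(x).squeeze(-1), y).backward()  # gradient all-reduce happens here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Because the gradient averaging adds a roughly fixed communication cost per step, throughput in this pattern grows almost linearly with GPU count – the scaling behavior the tests above report.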

Kates-Harbeck first took on devising a disruption-prediction code while on his 2016 DOE CSGF practicum at PPPL, applying lessons he learned in artificial intelligence and machine learning as a Stanford University master’s student in computer science. He continued the collaboration with Tang after returning to Harvard, where his doctoral research focuses on models of how ideas spread across social networks.

For more information, see the PPPL release.