University of Chicago
It’s common advice: Breakfast is the most important meal of the day. For Nicholas Frontiere, however, it may have been the most important meal of his career.
Frontiere, a Department of Energy Computational Science Graduate Fellowship (DOE CSGF) recipient, was a home-schooled teenager living near Los Alamos, New Mexico, home of a DOE national laboratory. He’d quickly completed his mother’s courses and was flying through those available at a local community college.
Frontiere’s father, a retired composer, ate at a nearby café every morning and fell into an informal breakfast club. One regular, a former Los Alamos National Laboratory (LANL) scientist, introduced the younger Frontiere to lab astrophysicist Ed Fenimore.
At 16, Nicholas Frontiere (pronounced “FRONterry”) began working at the birthplace of the atomic bomb. Under Fenimore’s tutelage, he gained a security clearance and honed his computer science skills on classified projects.
The collaboration continued as Frontiere attended the University of California, Los Angeles, but the work’s sensitive nature prohibited him from publishing scientific papers. Fenimore asked LANL physicists Katrin Heitmann and Salman Habib about finding Frontiere a group doing open research.
“We said, ‘Well, he should work with us,’” Habib says. “That’s how it started.”
The team Habib and Heitmann lead builds simulations of the universe’s evolution and runs them on the world’s fastest supercomputers. The calculations, some of the biggest and most detailed yet, track trillions of tracer particles representing all the matter in the observable universe. As the particles evolve, gravity pulls them into clumps and filaments of visible galaxies and haloes of dark matter over billions of years, from near the Big Bang to today.
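The gravity-only evolution described above can be illustrated, in highly simplified form, by a direct-summation N-body step. This is only a sketch: the team's actual code uses far more sophisticated particle-mesh and tree algorithms to handle trillions of particles, and all names and parameters here are illustrative.

```python
import numpy as np

def gravity_step(pos, vel, mass, dt, soft=0.05):
    """One kick-drift step of a direct-summation, gravity-only N-body model.

    Pairwise Newtonian attraction with Plummer softening; G = 1 in code units.
    Illustrative only -- production cosmology codes never sum all pairs directly.
    """
    # Pairwise separation vectors r_ij = pos_j - pos_i
    diff = pos[None, :, :] - pos[:, None, :]
    dist2 = (diff ** 2).sum(axis=-1) + soft ** 2
    inv_r3 = dist2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)          # exclude self-interaction
    # Acceleration on each particle from all others
    acc = (diff * inv_r3[:, :, None] * mass[None, :, None]).sum(axis=1)
    vel = vel + acc * dt                   # kick
    pos = pos + vel * dt                   # drift
    return pos, vel

# A small toy cloud of equal-mass particles collapsing under gravity
rng = np.random.default_rng(0)
pos = rng.standard_normal((64, 3))
vel = np.zeros((64, 3))
mass = np.full(64, 1.0 / 64)
for _ in range(100):
    pos, vel = gravity_step(pos, vel, mass, dt=0.01)
```

Because every pairwise force has an equal and opposite partner, the toy model conserves total momentum exactly (to floating-point precision), a basic sanity check real simulations also rely on.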
“They opened my eyes to the high-performance (computing) world, and I’ve been loving it and stuck in it ever since,” Frontiere says. “I want to use this tool for whatever physics I come across.” He kept working with the group even as it moved to Argonne National Laboratory and the University of Chicago.
The result: While still an undergraduate, Frontiere contributed to cosmology models in 2012 and 2013 that were finalists for the prestigious Gordon Bell Prize recognizing outstanding HPC achievements. After earning bachelor’s degrees in physics and mathematics, Frontiere naturally enrolled as a Chicago graduate student to continue working with the group.
Frontiere also is the second author of one of the group’s latest papers, published last year in the Astrophysical Journal Supplement Series, which describes a cosmological simulation called the Q Continuum. The model ran on Titan, Oak Ridge National Laboratory’s Cray XK7 supercomputer, and tracks cosmological evolution from a mere 50 million years after the Big Bang to today.
The group’s workhorse code, the Hardware/Hybrid Accelerated Cosmology Code (HACC), was designed for efficiency and adaptability. It runs well even on HPC systems that combine standard processor cores with graphics processing units (GPUs), cousins to video game chips.
But like many cosmological codes, HACC calculates only gravity’s effects on matter. It omits other physics governing baryons, the particles that make up all visible matter: us, the stars and planets, and everything we see. Unlike dark matter, baryons are subject to forces beyond gravity.
Baryonic physics has little impact on structure formation when simulating huge pieces of the universe, and including it greatly increases the demand for computational power, Habib says. “So typically, you just turn the baryonic effects off and you run pure gravity.”
But data from the latest astronomical missions demand more precise models. As these devices look deeper and wider into the universe, “the statistical error bars are going down, to the extent that they’re almost not there,” Habib says. “That means our ability to model becomes really important” to interpreting the data.
Frontiere’s quest to add baryonic physics to HACC began with his summer 2014 practicum at California’s Lawrence Livermore National Laboratory. Working with physicist J. Michael Owen and postdoctoral researcher Cody Raskin, he began modifying a fluid dynamics method, Smoothed Particle Hydrodynamics (SPH), for use in HACC.
SPH calculates fluid forces on baryonic particles in the simulation and interpolates those values across each particle’s neighbors to capture the overall behavior of the fluid. “It’s almost as if you’re smearing out the particles or smoothing them,” Frontiere says, hence the method’s name.
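The smoothing Frontiere describes can be seen in the standard SPH density estimate: each particle’s density is a kernel-weighted sum over its neighbors. The minimal 1-D sketch below uses a Gaussian kernel for clarity; production SPH codes use compactly supported kernels and neighbor lists, and everything here is illustrative rather than drawn from HACC itself.

```python
import numpy as np

def sph_density(x, mass, h):
    """SPH density estimate: rho_i = sum_j m_j * W(x_i - x_j, h).

    W is a Gaussian smoothing kernel in 1-D; h is the smoothing length.
    Each particle's mass is "smeared out" over a region of width ~h.
    """
    W = np.exp(-((x[:, None] - x[None, :]) / h) ** 2) / (h * np.sqrt(np.pi))
    return (mass[None, :] * W).sum(axis=1)

# Equal-mass particles, evenly spaced on the unit interval:
# the interior density estimate should be nearly uniform.
x = np.linspace(0.0, 1.0, 101)
mass = np.full(101, 0.01)          # 0.01 mass per 0.01 spacing -> density 1.0
rho = sph_density(x, mass, h=0.05)
```

Away from the ends of the interval the estimate recovers the true linear density of 1.0; near the boundaries the kernel sum is truncated and the estimate drops, one of the edge effects that motivates corrected kernels.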
Traditional SPH algorithms typically aren’t accurate enough for cosmology simulations. With Owen and Raskin, Frontiere tweaked the method to create Conservative Reproducing Kernel SPH.
The name is a clue to the researchers’ improvements: The reproducing kernel calculates forces accurately even as the phenomena it simulates become more complex. Such kernels usually are poor at conserving, or maintaining, quantities like energy or momentum through the solution. This version corrects that. The technique is scalable and maps easily onto GPUs, so it should run well on HPC systems, Frontiere says.
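The “reproducing” idea can be seen in its simplest (zeroth-order, Shepard-type) form: renormalizing the kernel weights so the interpolant reproduces a constant field exactly, which the raw kernel sum fails to do on irregular particle distributions. This is only an illustrative sketch of the general reproducing-kernel concept, not the Conservative Reproducing Kernel SPH formulation itself, and it deliberately omits the conservation corrections that are the group’s actual contribution.

```python
import numpy as np

def kernel(dx, h):
    # Gaussian smoothing kernel in 1-D (illustrative choice)
    return np.exp(-(dx / h) ** 2) / (h * np.sqrt(np.pi))

def interpolate(x_eval, x, f, vol, h, correct=True):
    """Kernel interpolation of a field f sampled at particle positions x.

    With correct=True, a zeroth-order reproducing (Shepard) normalization
    is applied so that constant fields are reproduced exactly.
    """
    W = kernel(x_eval[:, None] - x[None, :], h)
    weights = vol[None, :] * W
    est = (weights * f[None, :]).sum(axis=1)
    if correct:
        est /= weights.sum(axis=1)    # normalize: constants come out exact
    return est

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 200))   # irregular particle positions
vol = np.full(200, 1.0 / 200)             # per-particle volume
f = np.ones(200)                          # a constant field
x_eval = np.linspace(0.05, 0.95, 50)
raw = interpolate(x_eval, x, f, vol, h=0.05, correct=False)
corrected = interpolate(x_eval, x, f, vol, h=0.05, correct=True)
```

On the irregular particle set, the uncorrected estimate of the constant field is noisy, while the corrected version is exact by construction; higher-order reproducing kernels extend the same idea to linear and higher polynomial fields.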
Read the entire article in DEIXIS, the DOE CSGF annual. [PDF, pages 6-7]
Image caption: The evolution of dark matter distribution over time, from a redshift of 4 (about 12 billion light years) to today, within a piece of the Q Continuum simulation about 81 million parsecs by 81 million parsecs by 41 million parsecs (around 264 million light years by 264 million light years by 134 million light years). It shows the detail the simulation was able to resolve in the dark matter web. Credit: Katrin Heitmann, Nicholas Frontiere, Chris Sewell, Salman Habib, Adrian Pope, Hal Finkel, Silvio Rizzi, Joe Insley, Suman Bhattacharya. The Q Continuum simulation: Harnessing the power of GPU accelerated supercomputers. The Astrophysical Journal Supplement Series, 2015; 219 (2): 34. DOI: 10.1088/0067-0049/219/2/34.