Argonne National Laboratory
The New Paradigm of
What would happen if a magnitude 8.0 temblor wrenched the earth beneath a nuclear power plant? Can nanoscale technology be used to build a new generation of lasers?
From planning for the worst case to designing for the best case, there are some questions we would like to answer without having to find out the hard way. No one wants to experience a nuclear reactor failure firsthand, or to discover that an expensive nanoscale material won't work at the macroscale. Scientists would like to use computer simulations to predict the behavior of such large systems, but efforts to simulate a devastating earthquake or a nanostructure assembly have been hampered by the sheer complexity of the problems.
These kinds of multi-scale simulations require tremendous computational power, says Rick Stevens, associate laboratory director for Computing and Life Sciences at the Department of Energy's Argonne National Laboratory. Stevens leads Argonne's advanced computing initiative, which targets the development of petaflop computing systems capable of handling the computations that multi-scale problems require.
The Department of Energy has tapped Argonne to become its second Leadership Computing Facility by early 2008, when the world’s first IBM BlueGene/P class system comes on-line. The system, capable of petaflop performance, will be available for open science and engineering for applications granted computing time through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.
Stevens says the Leadership Computing program represents more than another leap in computational power. It is, quite simply, a new paradigm for doing science.
“This is different from the way supercomputing centers traditionally are run, where you might have dozens or more users on the system simultaneously,” he says. “With the Leadership Computing Systems, there might be only one or two groups using the system at any one time. But what they are doing is very hard. They might be running across the whole machine, which is always interesting, because of the sensitivity to machine ‘hiccups,’ and these big runs generate terabytes of data so real-time data management is essential. It requires a full-time team working around the clock to manage it all. Then at the end, another team helps the users analyze their data.”
The support team at Argonne will form long-term partnerships with the scientific teams as projects go to petascale on BlueGene/P and researchers begin to analyze results. But, equally important, teams of Argonne researchers will work to ensure that when the scientific applications are ready, the accompanying software will be up to the task.