### Oak Ridge National Laboratory

Investigating Infinite Sound Speed Approximations for Cloud Models

Rachel Robey, University of Colorado at Boulder

**Practicum Year:** 2021

**Practicum Supervisor:** Matt Norman, Computational Climate Scientist, National Center for Computational Sciences, Oak Ridge National Laboratory

Cloud modeling is a critical area of concern for Department of Energy Grand Challenge questions in climate. While clouds must be resolved, they must also be simulated at an incredibly fast rate of about 2,000x realtime in order to achieve meaningful simulated durations. Algorithmic advances in cloud modeling that maintain accuracy while increasing throughput are therefore critical.
My project revolved around the development of novel infinite-sound-speed mathematical approximations to the stratified Euler equations that govern atmospheric fluid dynamics. In cloud models, sound waves can travel 50-100x faster than the wind. They serve to balance thermal changes with density, but beyond this role the details of their behavior are physically insignificant. Acoustic waves are supported in the full, unapproximated equation set, and any solver must respect the time-step limitation they impose in order to remain numerically stable. Infinite-sound-speed approximations treat the equilibration performed by the acoustic modes as instantaneous, circumventing this limitation.
To treat this approximation numerically, we followed a projection approach, which splits the time-step operator into the evolution of the slow (advective and buoyancy) equations and the projection of the resulting momenta into the divergence-free state that would result from an infinitely fast equilibration. For an A-grid (co-locating points at the centers of the finite volumes), traditional approaches using a Poisson solver for the projection step cannot be expected to be stable. Instead, we derived a method that leverages hyperbolic PDE theory to perform this step in a stable fashion. It was crucial that the method be computationally efficient and, ideally, port well to GPU architectures, and our work revolved around developing the method under these constraints. We were eventually able to distill and implement a highly efficient direct linear solve algorithm at 1st-, 3rd-, and 5th-order spatial accuracy (of the divergence derivative), which progressively improved the fidelity of fine structures in a thermal bubble test case against a fully compressible comparison.

Random walk with restart on gene networks

Boyan Xu, University of California, Berkeley

**Practicum Year:** 2021

**Practicum Supervisor:** Daniel Jacobson, Computational Systems Biologist, Biosciences, Oak Ridge National Laboratory

Gene networks are graphs whose nodes are genes and whose edges represent relationships between genes. For example, a co-expression network indicates whether two genes have sufficiently high correlation in their expression. Other kinds of gene networks include protein-protein interaction, metabolic pathways, regulation, and more—each captures a particular kind of relationship between genes. Our project combines multiple types of gene networks into a multiplex network in order to discover functions of uncharacterized genes. We analyze the multiplex network using computational methods such as random walk with restart, graph convolutional neural networks, and other dimensionality reduction techniques.
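
The core of random walk with restart is a simple fixed-point iteration: a walker follows network edges, but with some probability jumps back to a set of seed genes, so genes near the seeds accumulate high steady-state scores. The sketch below is a generic single-network illustration (not the group's multiplex implementation); the adjacency matrix `A` and the seed list are hypothetical inputs.

```python
import numpy as np

def random_walk_with_restart(A, seeds, restart=0.3, tol=1e-10, max_iter=1000):
    """Steady-state visit probabilities of a walker that follows edges of A
    but teleports back to the seed nodes with probability `restart`.
    Assumes every node has at least one edge (no zero columns in A)."""
    W = A / A.sum(axis=0, keepdims=True)   # column-stochastic transition matrix
    p0 = np.zeros(A.shape[0])
    p0[list(seeds)] = 1.0 / len(seeds)     # restart distribution over the seeds
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1.0 - restart) * (W @ p) + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p
```

Ranking all genes by their score against seed genes of known function is one way such a walk suggests functions for uncharacterized genes.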

On developing accurate and robust Krylov methods for matrix product states

Edward Hutter, University of Illinois at Urbana-Champaign

**Practicum Year:** 2019

**Practicum Supervisor:** Dmitry Liakh, Scientific Computing, Oak Ridge National Laboratory

Our project involved designing Krylov methods that act directly on tensor networks, where tensor networks are low-rank factorizations of many-body tensors. One of the key challenges in this project proved to be redesigning the basic Krylov method to avoid intermediate states that implicitly project onto smaller-dimensional subspaces. The significance of this project lies in accelerating electronic structure calculations that have previously used variational optimization or Davidson's method to calculate the ground state.
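
For readers unfamiliar with Krylov methods, the basic flavor can be seen in a plain dense-matrix Lanczos iteration for the smallest eigenvalue; the project's methods act on tensor-network states rather than explicit vectors, so this is only an analogy, and the function below is a hypothetical sketch.

```python
import numpy as np

def lanczos_ground_state(H, k=30, seed=0):
    """Estimate the smallest eigenvalue of a symmetric matrix H using k
    Lanczos steps, then diagonalize the small tridiagonal projection."""
    n = H.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q = [q]                              # orthonormal Krylov basis
    alpha, beta = [], []
    for j in range(k):
        w = H @ Q[-1]
        a = float(Q[-1] @ w)
        alpha.append(a)
        w = w - a * Q[-1]
        if j > 0:
            w = w - beta[-1] * Q[-2]
        # full reorthogonalization keeps the basis orthonormal in floating point
        for qv in Q:
            w = w - (qv @ w) * qv
        b = float(np.linalg.norm(w))
        if b < 1e-12:                    # Krylov space exhausted
            break
        beta.append(b)
        Q.append(w / b)
    m = len(alpha)
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return float(np.linalg.eigvalsh(T)[0])
```

The intermediate projections onto the small subspace spanned by `Q` are exactly the kind of step that becomes problematic when the "vectors" are compressed tensor networks, motivating the redesign described above.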

A QMC+DFT Study of the Volume Collapse Phase Transition in MnS2

Jennifer Coulter, Harvard University

**Practicum Year:** 2019

**Practicum Supervisor:** Paul R. Kent, Senior R&D Staff, CPSFM/CNMS, Oak Ridge National Laboratory

Using a combination of density functional theory (DFT) and quantum Monte Carlo (QMC), in particular, diffusion quantum Monte Carlo calculations, we studied the magnetic structure, conducting phase, and volume collapse in MnS2.

Dynamics of Charged Polymers

Kevin Silmore, Massachusetts Institute of Technology

**Practicum Year:** 2019

**Practicum Supervisor:** Rajeev Kumar/Bobby Sumpter, Research Scientist/Interim Director, Center for Nanophase Materials Sciences, Oak Ridge National Laboratory

Throughout the summer, I worked on various projects under the umbrella of polymer physics. The first project I worked on involved modeling the capacitance of thin films of charged polymers, which were studied experimentally by collaborators on campus. This involved numerical simulation of a previous model that had been developed by my mentor, Rajeev, and others. The second project I worked on was a collaboration with an experimental group that has synthesized bottlebrush polymers and involved field-theoretic simulations in order to understand microphase segregation. It was especially interesting to compare the resulting morphologies to those of block copolymers, which exhibit similar behavior. Finally, I did mathematical work to model the dynamical behavior of charged polymers and understand how scattering data and dielectric spectroscopy measurements are influenced by the presence of charges.

A computational approach to biodiversity change: does function follow richness?

Kari Norman, University of California, Berkeley

**Practicum Year:** 2018

**Practicum Supervisor:** Alison Boyer, Environmental Sciences, Oak Ridge National Laboratory

Understanding how ecosystems are changing in response to anthropogenic pressure is essential for appropriately managing and conserving ecological systems into the future. Loss of species diversity is widely cited as the greatest threat to the stability, resilience, and functioning of ecosystems. Underpinning this assertion is the assumption that species diversity is an appropriate surrogate for the aspects of species identity and interactions that confer those properties, broadly referred to as functional diversity. However, empirical studies document a wide array of relationships between functional and species diversity, and responses of both aspects of diversity to different environmental perturbations are poorly documented. Establishing the relationship between functional and species diversity is further impeded by methodological concerns about the ability of functional diversity metrics to capture change independent of species richness (number of species). Using large-scale biodiversity data for breeding birds of North America, we take a first look at the ability of functional diversity metrics to detect change through a null model approach, assessing i) metric behavior, and ii) variation in the form of the functional diversity-species diversity relationship across biomes.

Optimizing FLASH Multipole Gravity for Exascale

Hannah Klion, University of California, Berkeley

**Practicum Year:** 2017

**Practicum Supervisor:** Bronson Messer, Senior R&D Staff Member, National Center for Computational Sciences, Oak Ridge National Laboratory

FLASH is a high performance multiphysics code currently used for a wide range of astrophysical simulations. In particular, it is a commonly-used tool for the simulation of Type Ia supernovae, in which runaway fusion ignites in a white dwarf and causes it to explode. The heavy element yields of these events remain uncertain. Current simulations track about a dozen isotopes, as opposed to the thousands required to completely capture the nuclear physics of the event. With the next generations of supercomputers, we will have the computational power to greatly extend the nuclear physics in these simulations.
There is an Exascale Computing Project underway at several national labs to combine the capabilities of FLASH and other codes, and to prepare these codes for the next generation of supercomputers. The result will be a multiphysics toolkit suitable for exascale simulations of astrophysical explosions.
Simulating self-gravity accurately and efficiently is critical for modeling a Type Ia supernova, since supernovae are fundamentally competitions between energy-releasing nuclear reactions and gravity. The multipole self-gravity solver requires extensive inter-node communication because the gravitational potential at any point depends on the density at every other point; the default implementation therefore requires that all nodes communicate with each other multiple times. Since this communication is expensive, we overlap it with computation where possible.
In order to test the new self-gravity routine, we simulate an edge-lit sub-Chandrasekhar mass Type Ia supernova. This allows us to verify that we haven't impacted the accuracy of the simulations, and to evaluate the performance of the new multipole solver.
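
The appeal of a multipole solver is visible even in its lowest-order term: far from a mass distribution, the potential of many zones is well approximated by a single center-of-mass contribution, which is what makes the aggregated (and overlappable) communication pattern worthwhile. The sketch below is purely illustrative (G = 1, monopole term only), not FLASH's solver.

```python
import numpy as np

def direct_potential(points, masses, x):
    """O(N) direct sum of the potential -sum_i m_i / |x - r_i| (G = 1)."""
    r = np.linalg.norm(points - x, axis=1)
    return -np.sum(masses / r)

def monopole_potential(points, masses, x):
    """Lowest-order multipole term: treat all mass as a single point at the
    center of mass. Accurate when x is far from the mass distribution."""
    M = masses.sum()
    com = (masses[:, None] * points).sum(axis=0) / M
    return -M / np.linalg.norm(x - com)
```

Higher multipole moments (dipole, quadrupole, ...) shrink the error systematically, at the cost of communicating a few more aggregated quantities rather than the full density field.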

Grain boundary stability and Li-ion diffusion across grain boundaries in lithium lanthanum titanate

Kathleen Alexander, Massachusetts Institute of Technology

**Practicum Year:** 2015

**Practicum Supervisor:** Bobby Sumpter, Director for the Nanomaterials Theory Institute, Center for Nanophase Materials Science Division, Oak Ridge National Laboratory

Lithium lanthanum titanate (LLTO) is a promising candidate solid-electrolyte material for Li-ion batteries due to its high Li-ion conductivity. However, the presence of grain boundaries (GBs) in polycrystalline samples of LLTO reduces the Li-ion conductivity to orders of magnitude below usable levels. GBs in LLTO vary in chemical composition and structure. At ORNL, my project was to study the relative stability of different kinds of GBs in LLTO, as well as the possible mechanisms of Li-ion diffusion along and across these GBs, to provide further insight into how the GB resistance in this material can be overcome.

Taskloaf: A library for simplifying distributed task parallelism

Thomas Thompson, Harvard University

**Practicum Year:** 2015

**Practicum Supervisor:** Ed D'Azevedo, Dr., Computer Science and Mathematics, Oak Ridge National Laboratory

I began the practicum planning to apply existing dynamic distributed task-parallelism tools to problems in numerical linear algebra and n-body problems.
With the guidance of Dr. D'Azevedo, I investigated several options, including HPX, Legion, and the Open Community Runtime. These projects are designed to support applications wishing to run on upcoming exascale systems. As a result, they have innumerable features and options which are unnecessary at the smaller scales at which I would like to work -- tens of thousands of extra lines of code. Furthermore, these projects are supported by a fickle research community which begins and ends projects like these on short time frames. Depending on such a library is dangerous once support dissipates.
I wasn't convinced that the problem was inherently complex enough to require the complexity present in existing libraries, so I set out to write a new distributed task-parallelism library, Taskloaf. The primary goal with Taskloaf was to produce a very small code base while maintaining generality. I use the futures monad, where tasks and task dependencies are described with a small vocabulary of verbs like "then" (schedule one task after another) and "ready". I use a data-driven work-stealing scheduling system to balance memory transfer with load balancing.
Taskloaf is not complete, but it's getting close. The practicum kick-started the project and I continue to develop it. The codebase is <1500 lines of code, so a skilled C++ programmer can learn the inner workings of the entire library in a couple of days. That compares to roughly 100,000 lines for HPX, the most similar existing project. And, since the vocabulary of Taskloaf can describe arbitrary task graphs (nodes are tasks, edges are dependencies), I have succeeded in maintaining broad applicability. Finally, tasks can be quite fine-grained: the scheduling system is able to run hundreds of thousands of tasks per second per core.
Someday, perhaps, distributed task-parallelism will be as easy as a "#pragma omp parallel for" is for shared memory OpenMP. We're not there yet, but that's my goal.
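
The "ready"/"then" vocabulary is easy to picture in any futures framework. The toy Python analogue below is illustrative only (it is not Taskloaf's C++ API, and the helper names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=8)

def ready(value):
    """A future that is immediately fulfilled with value ('ready' verb)."""
    return pool.submit(lambda: value)

def then(fut, fn):
    """Schedule fn to run on fut's result ('then' verb); returns a new future.
    (This sketch blocks a worker thread while waiting, so deep chains need
    enough workers -- a real runtime schedules continuations instead.)"""
    return pool.submit(lambda: fn(fut.result()))

# Build the task graph: 3 -> (+1) -> (*2)
result = then(then(ready(3), lambda x: x + 1), lambda x: x * 2)
```

A distributed runtime adds the hard parts this sketch omits: serializing the closures, placing tasks on remote ranks, and stealing work between them.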

Petascale Simulations of Lignin, Cellulose, and Cellulases

Joshua Vermaas, University of Illinois at Urbana-Champaign

**Practicum Year:** 2014

**Practicum Supervisor:** Jeremy Smith, Director and Governor's Chair, Center for Molecular Biophysics, Oak Ridge National Laboratory

We were looking for different ways of analyzing a petascale (24M-atom) molecular dynamics trajectory of lignin, cellulose, and cellulases to understand how lignin interferes with cellulase activity.

Verification of Quantum Analog Digital (QuAD) Systems

Sarah Loos, Carnegie Mellon University

**Practicum Year:** 2013

**Practicum Supervisor:** Ryan S. Bennink, Research Scientist, Cyberspace Sciences and Information Intelligence, Oak Ridge National Laboratory

Modern society is shaped by the ability to transmit, manipulate, and store large amounts of information. Although we tend to think of information as abstract, information is physical, and computing is a physical process. How then should we understand information in a quantum world, in which physical systems may exist in multiple states at once and are altered by the very act of observation? This question has evolved into an exciting new field of research called Quantum Information.
The design and analysis of dynamical systems with tightly coupled quantum, analog, and digital elements—called QuAD systems—presents novel challenges and requires corresponding advancements in theories and tools. To that end, Bennink et al. have introduced the notion of a QuAD automaton and established some of its general properties. The QuAD automaton may be seen as a natural extension of systems with analog (continuous) and digital (discrete) components, called hybrid systems.
In this practicum, we investigated new logics to provide rigorous and symbolic methods for quantum computing devices. While there exists a logic (called differential dynamic logic [1]) which has had success verifying classical hybrid systems, there is currently no such logic which could be used to verify or even describe how the additional complexities inherent in computing at the quantum level interact with the analog and digital components. Quantum devices are highly probabilistic, and the simple act of taking a measurement affects the state of the system. It is impossible to isolate qubits from their environment, and as a result, quantum computers decay from a given quantum state to an incoherent state. Such properties make quantum devices challenging verification problems and more importantly make their behavior difficult for humans to understand, making correct design nearly impossible without computer-aided design tools.
[1] A. Platzer. Differential dynamic logic for hybrid systems. Journal of Automated Reasoning, 41(2):143-189, 2008.

The sustainability of cellulosic biofuel crop production in a changing climate

Jamie Smedsmo, University of North Carolina

**Practicum Year:** 2013

**Practicum Supervisor:** Henriette Jager, Research Scientist, Environmental Sciences Division, Oak Ridge National Laboratory

The goal of this study was to evaluate the sustainability of biofuel crop production in the context of climate change. The specific biofuel crop considered was switchgrass (Panicum virgatum L.), a crop which may provide an economically feasible feedstock for cellulosic biofuel production in the south-central US according to the Billion Ton Update (US Department of Energy, 2011). We assessed the impact of a large-scale shift in agricultural production from winter wheat, a shallow-rooted annual row crop, to switchgrass, a deep-rooted perennial grass.

Use of Multi-sphere method to build discrete element model of granular materials from tomographic imagery

Andrew Stershic, Duke University

**Practicum Year:** 2013

**Practicum Supervisor:** Srdjan Simunovic, Senior Research Staff, Computer Science and Math Division, Oak Ridge National Laboratory

The project involved modeling sand behavior under impact loading. The working group, consisting of a professor at the University of Tennessee and research staff at ORNL, seeks to build a detailed model of granular material behavior starting from high-quality tomographic imagery, which characterizes the geometry and arrangement of the particles. The geometry is used to build computational models using the discrete element method and the finite element method, and the models will be validated by comparison against fundamental physical experiments. Then, the models will be applied to the impact problem and fine-tuned by comparison against data from impact experiments.

Numerical Solution of PDEs with Evolving Boundaries in a High Order Multiresolution Framework

Jeffrey Donatelli, University of California, Berkeley

**Practicum Year:** 2011

**Practicum Supervisor:** George Fann, Ph.D., Computer Science and Mathematics Division, Oak Ridge National Laboratory

Simulating problems with multi-scale physics has long been a major obstacle for scientific computing: the standard numerical approaches are complicated by the vast range of length and time scales involved. MADNESS (Multiresolution ADaptive Numerical Environment for Scientific Simulation) is a pseudo-spectral code that utilizes multiwavelet bases, which allows for a high degree of accuracy, has built-in adaptivity, and is computationally efficient. It has been used extensively in fields such as chemistry, electronic structure, and nuclear physics. However, this methodology has been limited by the prescribed boundary conditions and geometry, whose proper treatment is essential if one wishes to use this framework for a wider class of problems. My work involved addressing the inherent theoretical issues and developing high-performance, massively parallel software to track the evolution of implicitly defined boundaries and to solve PDEs within those boundaries inside the MADNESS framework.

High Performance Stencil Computations on GPUs

Samuel Skillman, University of Colorado

**Practicum Year:** 2011

**Practicum Supervisor:** Wayne Joubert, Computer Scientist, OLCF (Oak Ridge Leadership Computing Facility), Oak Ridge National Laboratory

The main goal of this project was to construct high performance kernels to be used in stencil computations on graphical processing units (GPUs). Stencil computations are used in many scientific applications including finite difference schemes and multigrid solvers. During this practicum, I developed several stencil computations to be used in a multigrid solver, which is still being developed. We were able to utilize the GPU to reach close to peak bandwidth.
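
A representative stencil of the kind such kernels accelerate is the 5-point Laplacian, written below as a plain NumPy Jacobi sweep, the smoother at the heart of a multigrid cycle. On a GPU the same update runs with one thread per grid point; this sketch is illustrative, not the practicum kernel.

```python
import numpy as np

def jacobi_sweep(u, f, h):
    """One Jacobi relaxation sweep for laplacian(u) = f on a uniform grid
    with spacing h, using the 5-point stencil; boundary values stay fixed."""
    u_new = u.copy()
    u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] -
                                h * h * f[1:-1, 1:-1])
    return u_new
```

Each point reads only its four neighbors, so the arithmetic is cheap relative to memory traffic, which is why such kernels are judged against peak bandwidth rather than peak FLOPs.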

Preparing the FLASH code for large-scale simulation

Paul Sutter, University of Illinois at Urbana-Champaign

**Practicum Year:** 2010

**Practicum Supervisor:** Rebecca Hartman-Baker, Computational Scientist, Scientific Computing Group, Oak Ridge National Laboratory

We made many improvements to the FLASH code to enable it to scale well on current- and next-generation supercomputers. Specifically, we optimized the communication algorithm of the in-line halo finder, an essential analysis tool in cosmological simulations. The optimizations allowed us to use the halo finder at much larger problem sizes, distributed over many more processors than before. We also adapted the FLASH I/O routines to support reading and writing through the ADIOS library. The I/O improvements led to a ~40x speedup in reading and writing checkpoints.

Isotopic depletion in three-dimensional neutron transport

Joshua Hykes, North Carolina State University

**Practicum Year:** 2009

**Practicum Supervisor:** Kevin Clarno, R&D Staff, Computational Nuclear Engineer, Nuclear Science and Technology Division, Oak Ridge National Laboratory

The project was the implementation of isotopic depletion in a three-dimensional deterministic radiation transport code. Tracking the isotopic composition of the simulated materials is important in modeling nuclear reactors.

Vertical Sensitivity Fix for CAM Shallow Moist Convective Parameterization Scheme

Matthew Norman, North Carolina State University

**Practicum Year:** 2009

**Practicum Supervisor:** Dr. John Drake, Computational Earth Sciences Group Leader, Computer Sciences and Mathematics Division, Oak Ridge National Laboratory

The shallow moist convective parameterization scheme in the Community Atmosphere Model (a component of the Community Climate System Model) has been documented to show a strong sensitivity to the vertical grid spacing. The sensitivity is so strong that physically unrealistic atmospheric simulations arise from vertical grids finer than 26 levels.
We discovered the cause to be the 3-layer model of detection and adjustment in the scheme. If the vertical grid spacing is decreased, the scales of detection and adjustment in the scheme decrease as well and no longer work on physical spatial scales. We decoupled the vertical grid of the parameterization scheme from the model vertical grid via remapping and performed the adjustment on physical spatial scales. Results supported the hypothesized reason for the sensitivity, and the sensitivity was greatly reduced after applying the fix.
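
The decoupling idea can be sketched as a simple round trip between grids: the parameterization always operates on its own fixed physical grid, whatever the model's vertical resolution. This is an illustration with linear interpolation only (the actual scheme's remapping is more careful), and the function and `adjust` callback are hypothetical.

```python
import numpy as np

def run_on_fixed_grid(model_levels, profile, fixed_levels, adjust):
    """Remap a model-grid column onto a fixed physical grid, apply the
    parameterization's adjustment there, then remap the result back.
    Both level arrays must be monotonically increasing for np.interp."""
    on_fixed = np.interp(fixed_levels, model_levels, profile)
    adjusted = adjust(on_fixed)
    return np.interp(model_levels, fixed_levels, adjusted)
```

Because the adjustment always sees the same vertical spacing, refining the model grid no longer shrinks the spatial scales on which detection and adjustment act.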

A conditional-strategy model accounts for spatiotemporal life-history variation in Snake River fall Chinook salmon

Alex Perkins, University of California, Davis

**Practicum Year:** 2009

**Practicum Supervisor:** Yetta Jager, Environmental Sciences, Oak Ridge National Laboratory

Fall Chinook salmon typically migrate to the ocean as age-0 subyearlings, but a strategy whereby juveniles residualize in freshwater and migrate to the ocean as age-1 yearlings has emerged over the past few decades in Idaho's Snake River population. The yearling life history appears with varying frequency in different river reaches and years, and its recent emergence has conservation implications for this threatened population because of survival and reproductive differences between the two life histories. Temperature differences are thought to play some role in accounting for variation in the numbers of yearlings observed among reaches and years, but understanding of how juveniles decide which life history to pursue is lacking. We advance a hypothesis for the mechanism by which juveniles make this decision, formalize it with a model, and present the results of fitting this model to life-history variation data. The model's simulation output captures patterns of life-history variation among reaches and years and appears robust to uncertainty in a key unknown parameter. The results also shed light on the ways juveniles make decisions about which life history to pursue and suggest directions for future empirical research. Finally, the model offers those interested in the management and conservation of Snake River fall Chinook salmon a useful tool for accounting for life-history variation in population viability analyses and decision making.

Electrostatics and Electrodynamics with Multiresolution Methods

Matthew Reuter, Northwestern University

**Practicum Year:** 2009

**Practicum Supervisor:** Robert J. Harrison, Technical Group Leader, Computer Science and Mathematics, Oak Ridge National Laboratory

Multiresolution methods provide convenient ways to simulate systems with disparate length scales and/or complicated structures. Specifically, non-uniform grids complicate finite difference methods and mesh generation can be both problem-specific and expensive for finite element methods. The multiwavelet basis used in multiresolution methods permits adaptive resolution as needed in the domain, and also efficient algorithms for applying integral operators.
My work at Oak Ridge National Laboratory focused on extending the "Multiresolution Adaptive Numerical Environment for Scientific Simulation" package (MADNESS) to solve both the time-independent and time-dependent versions of Maxwell's equations (electrostatics and electrodynamics, respectively). This work also required the implementation of "interior boundary conditions" to specify physical data on known domains inside the computational domain.

A performance-based load balancing scheme for MADNESS

Paul Sutter, University of Illinois at Urbana-Champaign

**Practicum Year:** 2009

**Practicum Supervisor:** Robert Harrison, Computational Chemical Sciences Group, Oak Ridge National Laboratory

We developed a novel load-balancing algorithm for MADNESS (Multiresolution
Adaptive Numerical Environment for Scientific Simulation). Computations in
MADNESS generate octrees with millions of nodes distributed across tens or hundreds of thousands of processors, making load balancing crucial for
performance. This scheme monitors the computational load on each tree node,
allowing adaptive balancing of the octree based on actual performance data
as the simulation progresses.
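
A minimal sketch of cost-based balancing (not the MADNESS algorithm itself, and the function name is hypothetical): given per-node cost measurements, assign nodes to ranks greedily, largest cost first, always onto the currently least-loaded rank.

```python
import heapq

def balance(costs, nprocs):
    """Assign each tree node to a rank using the greedy longest-processing-time
    heuristic: sort nodes by measured cost (descending) and repeatedly give
    the next node to the least-loaded rank."""
    heap = [(0.0, rank) for rank in range(nprocs)]   # (load, rank) min-heap
    heapq.heapify(heap)
    owner = {}
    for node, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
        load, rank = heapq.heappop(heap)
        owner[node] = rank
        heapq.heappush(heap, (load + cost, rank))
    return owner
```

Re-running such an assignment as measured costs drift is the essence of rebalancing on live performance data, though a production scheme must also weigh the cost of moving tree nodes between ranks.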

Computational Studies of Polymers for Photovoltaics and Solid-State Lighting

Jack Deslippe, University of California, Berkeley

**Practicum Year:** 2008

**Practicum Supervisor:** Fernando Reboredo, Dr., Materials Science & Technology Division, Oak Ridge National Laboratory

Engineering effective materials for future solar cell applications involves optimizing many material properties: a material must capture light in the energy range of the solar spectrum, have a usable interface for extracting the energy, and have high enough carrier mobility for the excited electrons/holes to reach the interface. Computational science provides an excellent avenue both to explore large classes of materials for potential applicability and to run detailed calculations aimed at optimizing currently promising candidates. In my practicum at Oak Ridge, we calculated in detail the photovoltaic properties of a particular polymer system, PVK (poly(N-vinylcarbazole), C42-H33-N3), which has shown promise for solid-state lighting and solar cell applications. This system represents a large theoretical and computational challenge due to its extremely large size (over 100 atoms per unit cell) and large spatial distribution of atoms. Much effort went into improving the current computational techniques for studying these properties of PVK and of other future nanosystems that could not have been studied under existing methodology.

Modeling the Diffusive Effects of Shock and Detonation Wave Reflections

John Lewis Ziegler, California Institute of Technology

**Practicum Year:** 2008

**Practicum Supervisor:** Ralf Deiterding, Dr., Computational Science and Mathematics Division, Oak Ridge National Laboratory

In the summer project at ORNL, we extended the AMROC (adaptive mesh refinement in object-oriented C++) software framework to the simulation of diffusive shocks and detonation waves. AMROC is also actively used for detonation and shock-driven fluid-structure interaction simulation, both at the Department of Energy Advanced Scientific Computing Center at the California Institute of Technology and at ORNL. The software already contained a number of shock-capturing schemes for reactive supersonic flows and had been applied extensively to simulating detonation waves with the Euler equations. For a more physically correct simulation of shock and detonation waves, we extended the methods from the reactive Euler model to the diffusive Navier-Stokes model. We resolved and studied the diffusive effects in the flow, in particular for the two-dimensional shock/detonation wave wedge interaction problem. The main configuration was Mach 4.5 shock and detonation waves at shock surface angles that produce double Mach reflections. The double Mach reflection is a fundamental structure influencing the propagation of detonation waves. Here, hydrodynamics, viscous shear, combustion, and mass transport interact at scales which create a computationally difficult problem. For this configuration we also investigated shock transition criteria and compared the robustness of the more classical finite volume methods with that of a hybrid WENO-TCD finite difference method.

Discovery of Gene Circuit Architecture from Noise Correlations

Natalie Cookson, University of California, San Diego

**Practicum Year:** 2007

**Practicum Supervisor:** Michael Simpson, Principal Investigator, Molecular-Scale Engineering and Nanoscale Technolo, Oak Ridge National Laboratory

I participated in an ongoing research program at the Oak Ridge National Lab and the University of Tennessee that integrates modeling, analysis, simulation and experimental techniques to investigate the relationship between patterns of gene expression noise and the architecture and parameters of the underlying gene circuit. Gene expression noise is defined as the stochastic deviations from mean protein expression level and originates from the discrete nature and random timing of the regulation, transcription, translation and decay events that control a gene's expression level. To fully characterize the noise, one must consider the relationships between noise magnitude (how large are the deviations from mean behavior?), noise autocorrelation (how long do the deviations last?), and the mean protein expression level of the gene. A 3-dimensional "noise map" emerges when these three measurements (mean expression level, noise magnitude, and noise autocorrelation) are considered together. An environmental or physiological perturbation resulting in a change in a gene's mean expression level also results in a change in the gene's position on the noise map. The vector describing this change in position on the noise map depends on the architecture of the circuit controlling its expression level. The theoretical dependence of these vectors on the gene circuit architecture and parameters can be analyzed using a "frequency domain analysis" technique developed by the ORNL research group that I joined. In the case of more complex circuits, the vectors can be predicted by simulation. Experimentally, the three map coordinates can be measured by analysis of the intensity of a fluorescent reporter protein (such as GFP) in cell images obtained from time-lapse microscopy. During my practicum, I learned about the experimental and theoretical techniques used by this research group and applied them to experimentally validate some of the theoretical predictions of the noise maps.

BIO014: Next generation simulations in biology

David Rogers, University of Cincinnati

**Practicum Year:** 2007

**Practicum Supervisor:** Pratul Agarwal, Staff Scientist, Computational Biology Institute, Oak Ridge National Laboratory

This project aims to study the dynamics of the antiporter behavior of the bacterial chloride channel homolog ClC-ec1. The impact of the recently identified tyrosine switch on the central chloride binding site was investigated using umbrella sampling and integrated into a larger view of the channel as a whole by kinetic modeling. The combination of these two methods has provided insight into the unique mechanism of this antiporter.

Investigating Alternative Explosive Mechanisms for Type II Supernovae Using the FLASH Code

John ZuHone, University of Chicago

**Practicum Year:** 2006

**Practicum Supervisor:** Bronson Messer, Postdoctoral Research Associate, Physics, Oak Ridge National Laboratory

Type II supernovae are explosions of stars several to dozens of times more massive than the Sun, occurring after the core of the star collapses due to lack of pressure support. Nearly twenty years of simulations have attempted to discern the exact physical mechanism driving the supernova explosion, with limited success. Specifically, the energy gained from the "bounce" of the core has proven insufficient, as has the energy from neutrino transport. Burrows et al. (2006, "A New Mechanism for Core-Collapse Supernovae Explosions") have proposed an alternative mechanism for driving the explosion, relying on acoustic power in the form of gravity waves driven by oscillations in the inner core. My project was to artificially generate such waves in a 3D model to see if such an explosion could be produced.

Mass Lumping in CENTRM

Teresa Bailey, Texas A&M University

**Practicum Year:**2005

**Practicum Supervisor:**Kevin Clarno, R & D Staff, Reactor Analysis, Nuclear Science and Technology Division, Oak Ridge National Laboratory

In order to simulate nuclear reactors, a neutron transport equation must be solved. This transport equation depends on time, energy, angle, and space, with each variable discretized separately. At ORNL a reactor analysis software package called SCALE has been developed to simulate various neutronic parameters within a nuclear reactor. CENTRM is a component of SCALE that computes an energy-dependent flux to use as a weighting function for averaging important physics parameters such as cross sections. CENTRM generates a piecewise continuous flux spectrum by accumulating the neutron source at each energy point of interest and distributing the source to its correct energy. CENTRM develops a highly accurate flux spectrum for complex systems with multiple materials, multiple material regions, and detailed energy data. Because the CENTRM calculations are so complex, they require a large amount of computation time relative to other parts of the SCALE package.

Support of the LandScan Population Mapping Project

David Potere, Princeton University

**Practicum Year:**2005

**Practicum Supervisor:**Budhu Bhaduri, Group Leader, Oak Ridge National Laboratory, Oak Ridge National Laboratory

The LandScan Project is designed to build maps of global population distributions by fusing census data with ancillary data, including land cover, topography, high-resolution satellite imagery, and transportation networks.

Analysis of Soil Moisture in the CCSM2 global climate model

Matthew Wolinsky, Duke University

**Practicum Year:**2003

**Practicum Supervisor:**David J. Erickson, Senior Research Staff Member, Computer Science and Mathematics Division, Oak Ridge National Laboratory

I analyzed soil moisture dynamics in a coupled atmosphere-ocean global climate model, the Community Climate Simulation Model v. 2 (CCSM2). A control run of the model was perturbed with a negative soil moisture anomaly (dry soils), and run for 10 years of simulation time. Comparison of the control and anomaly runs gave indications that the soil moisture evolution algorithms were not satisfactory, prompting an investigation into causes and remedies for this problem.

Electronic structure, methods and applications: Solving a sparse generalized eigenvalue problem and permalloy.

Kristopher Andersen, University of California, Davis

**Practicum Year:**2002

**Practicum Supervisor:**Dr. William Shelton, Senior Research Staff Member, Computer Science and Mathematics, Oak Ridge National Laboratory

During the practicum I spent roughly a third of my time studying why theory underestimates the resistivity of permalloy, using first-principles electronic structure methods. The rest of the time I spent implementing a method to solve a generalized eigenvalue problem and comparing it to other methods. The specific problem I am interested in solving arises from using a finite-element basis set in a density functional theory calculation, which would allow one to study much larger systems from first principles.
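A finite-element basis leads to a sparse generalized eigenvalue problem K x = λ M x, with a sparse stiffness matrix K and overlap (mass) matrix M. As a generic sketch of this problem class, not the solver developed during the practicum, here is SciPy's shift-invert Lanczos applied to a 1-D model problem whose exact eigenvalues are known:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Model problem: -u'' = lambda * u on (0, pi) with u(0) = u(pi) = 0,
# discretized with linear finite elements, giving K x = lambda * M x.
n = 200                           # number of interior nodes
h = np.pi / (n + 1)
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h
M = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc") * (h / 6.0)

# Shift-invert around sigma=0 targets the smallest eigenvalues.
vals, vecs = eigsh(K, k=3, M=M, sigma=0.0, which="LM")
print(np.sort(vals))              # close to the exact eigenvalues 1, 4, 9
```

Shift-invert trades a sparse factorization of (K - sigma*M) for rapid convergence to the eigenvalues nearest sigma, which is the regime of interest for ground-state electronic structure.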

Spatial and temporal coarse-graining techniques for molecular simulations

Ahmed Ismail, Massachusetts Institute of Technology

**Practicum Year:**2002

**Practicum Supervisor:**William Shelton, Group Leader, Computational Condensed Physics Matter Group, Oak Ridge National Laboratory

The key to studying the long-time dynamics of molecular systems is to overcome the severe limitations posed by current methods for treating Newton's equations of motion. Our goal in this project was to explore several new methods for determining the appropriate spatial and temporal scales for the coarse-graining of various molecular systems, including lattice models and linear polymer models. The primary methods were developed from the principle that pair-distribution and site-site correlation functions can be used to determine upper bounds on the length scale of coarse-grained objects, while time-correlation functions can determine 'natural' frequencies for various types of motion in molecules. Incorporating these measurements into existing molecular simulations can lead to improvements of several orders of magnitude in the size of systems that can be simulated and, more importantly, in the length of time that can be studied within the constraints of CPU time.

A Critical Examination of the Practicality of Adjoint Monte Carlo Transport for External Beam Radiation Therapy Inverse Treatment Planning

Michael Kowalok, University of Wisconsin

**Practicum Year:**2002

**Practicum Supervisor:**John C. Wagner, Research Staff Member, Nuclear Science and Technology, Oak Ridge National Laboratory

Forward and adjoint transport methods may both be used to determine the dosimetric relationship between source parameters and individual tissue elements (e.g. voxels within a patient). Forward transport methods consider one specific tuple of source parameters and calculate the response in all voxels of interest. One such calculation must be performed for each combination of source parameters. Adjoint transport methods, conversely, consider one particular voxel and calculate the response of that voxel as a function of all possible source parameters. In this regard, adjoint methods provide a source parameter sensitivity analysis in addition to a dose computation. This information can be used as the basis for an inverse treatment planning process which seeks the best combination of source parameters to deliver a prescribed dose distribution.
This project examined the practicality of using adjoint Monte Carlo transport, as compared to conventional forward transport, for determining in a general manner the relationship between source variations and dose in a patient. Comparisons were made in terms of consistency of results, ease of use, and the level of computational effort required to achieve various types of results. The possibility of developing variance reduction techniques was discussed for both methods.
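In a discrete picture, the forward/adjoint duality described above is just columns versus rows of the dose operator. A toy sketch, where the matrix `A` is a hypothetical stand-in for the full transport calculation rather than a real dose model:

```python
import numpy as np

# Discrete view: dose d = A @ s, where A[i, j] is the dose delivered to
# voxel i per unit weight of source configuration j.
rng = np.random.default_rng(42)
A = rng.random((4, 3))            # 4 voxels, 3 candidate source settings

# Forward transport: fix one source setting j, get every voxel's response.
j = 1
forward = A @ np.eye(3)[:, j]     # one column of A

# Adjoint transport: fix one voxel i, get its response to every source.
i = 2
adjoint = A.T @ np.eye(4)[:, i]   # one row of A

assert np.allclose(forward, A[:, j])
assert np.allclose(adjoint, A[i, :])
```

This is why the adjoint formulation doubles as a source-parameter sensitivity analysis: one adjoint calculation yields the dependence of a single voxel's dose on all source parameters at once.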

Computational Science at the Terascale

Collin Wick, University of Minnesota

**Practicum Year:**2001

**Practicum Supervisor:**Peter T. Cummings, Distinguished Scientist, Chemical Technology Division , Oak Ridge National Laboratory

Computational Science at the Terascale was the name of the project in which I participated. The project involved developing and using potential models for classical simulations of molecules on a gold surface, and evaluating the molecules' conductance between two gold surfaces. The purpose of these calculations was to eventually be able to predict the suitability of many molecules for use in molecular electronics.

Parallelizing Fish Population Models

Kevin Glass, University of Oregon

**Practicum Year:**1997

**Practicum Supervisor:**Dr. Kenny Rose, , Environmental Sciences Division, Oak Ridge National Laboratory

The project was designed to examine alternatives for parallelizing existing fish population simulation code. A major thrust of this effort was to minimize alteration of the original code in order to determine the efficacy of developing parallel code from existing models. We examined three alternatives: automatic parallelization, the use of data-parallel languages, and the use of MPI.

Use of Wavelet Theory to Study Turbulent Transport in Fusion Reaction

Corey Graves, North Carolina State University

**Practicum Year:**1997

**Practicum Supervisor:**Dr. David Newmar, , Fusion Energy, Oak Ridge National Laboratory

I used wavelet-decomposition-based techniques to extract important features from a signal created by turbulence in the fusion process in a reactor.
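As a generic illustration of the idea, not the specific technique used in the project, a single level of the Haar wavelet transform splits a signal into smooth (approximation) and detail coefficients, and a localized event stands out sharply in the detail channel:

```python
import numpy as np

def haar_level(x):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) and pairwise differences (detail), each half-length."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

# A smooth trend plus a short burst: the burst's edge dominates the
# detail coefficients while the trend stays in the approximation.
t = np.linspace(0.0, 1.0, 64)
signal = np.sin(2 * np.pi * t)
signal[30:33] += 2.0                      # localized "turbulent" event
approx, detail = haar_level(signal)
print(int(np.argmax(np.abs(detail))))     # → 16, the pair at the burst edge
```

Repeating the split on the approximation coefficients yields the usual multi-level decomposition, so features can be localized in both time and scale.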

A Modified Eight-node Crack-tip Element

Charles Gerlach, Northwestern University

**Practicum Year:**1996

**Practicum Supervisor:**Dr. Len Gray, , Energy and Math, Oak Ridge National Laboratory

In a published paper, Len Gray and a colleague of his proved that the near-tip displacement field along the edge of the crack had no linear component. The goal of the project was to see whether a modified crack-tip element that had no linear component would improve the accuracy of the K1 and K2 calculations.

Model for Coherent Modes in Tokamak Discharges with Weak Negative Central Shear

Eric Held, University of Wisconsin

**Practicum Year:**1996

**Practicum Supervisor:**Dr. Jean-Noel Leboeuf, , Fusion Energy, Oak Ridge National Laboratory

I performed nonlinear calculations using a reduced version of the magnetohydrodynamic (MHD) code FAR to investigate the bifurcation of coherent peaks in the frequency spectrum of the density fluctuations observed during weak, negative central shear discharges in D-III-D.

Characterization of Effect of Loading of Power Supplies on Injection Plugs on Laboratory Microchip

Rajeev Surati, Massachusetts Institute of Technology

**Practicum Year:**1996

**Practicum Supervisor:**Dr. Michael Ramsey, , Chemical and Analytical Sciences Division, Oak Ridge National Laboratory

The purpose of the project was to provide me an opportunity to work with and on microfabricated devices, as well as all aspects of laser detection. I learned a lot about working in a chemistry laboratory, and I think the experience really enhanced my understanding of the problems faced and of what is worth simulating.

Stabilization Using Artificial Neural Networks

Sven Khatri, California Institute of Technology

**Practicum Year:**1995

**Practicum Supervisor:**Dr. Vladimir Protopopescu, , Computer Science & Math Division, Oak Ridge National Laboratory

The idea was to reformulate the control synthesis methodology of Levin and Narendra so as to make the analysis less conservative and to improve performance by switching from an n-step definition of performance to i-step.

Development of a Higher Order Polynomial Nodal Code in Cylindrical Geometry

Brian Moore, North Carolina State University

**Practicum Year:**1994

**Practicum Supervisor:**Dr. Jess Gehin, , Engineering Physics and Mathematics Division, Oak Ridge National Laboratory

The code resulting from this analysis enables a faster, higher-order solution for the engineering design of the Advanced Neutron Source (ANS) being conducted at ORNL and other national laboratories. This product substantially improves the group's neutronic simulation capability by greatly reducing computing time.

Error Analysis of the Nodal Expansion Method in One Dimensional Cartesian Geometry

Chris Penland, Duke University

**Practicum Year:**1994

**Practicum Supervisor:**Dr. Yousry Azmy, , Engineering Physics and Mathematics Division, Oak Ridge National Laboratory

As part of an ongoing effort to develop an adaptive mesh refinement strategy for use in a state-of-the-art nodal nuclear reactor kinetics code, we developed a priori error bounds for the quartic nodal expansion method (NEM) in one-dimensional Cartesian geometry. This serves both as encouragement that our goal of fully adaptive mesh refinement can be reached and as a launching pad for similar progress in multiple spatial dimensions. Pleasingly, the error bounds derived approximate the true value of the error for the test problem we chose.

1-GHz Correlation Processor for Single-Bit Data

Amanda Duncan, University of Illinois at Urbana-Champaign

**Practicum Year:**1993

**Practicum Supervisor:**Dr. Gary Alley, , Monolithic Systems Development, Oak Ridge National Laboratory

The purpose of this project was to implement a correlation processor for single-bit data streams that can operate at a frequency of 1 GHz. The implementation used is about three orders of magnitude faster than previous schemes for calculating the correlation function. The design consists of a high-speed sampling system, an array of AND gates and counters, and a host computer.
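The operation the AND-gate-and-counter array performs can be sketched in software: at each lag, count the coincidences between the two bit streams. This is an illustrative model of the computation, not the hardware design itself:

```python
import numpy as np

def onebit_correlation(a, b, max_lag):
    """Correlation of two single-bit streams: for each lag, count the
    coincidences a[n] AND b[n + lag], which is exactly what one AND gate
    feeding one counter accumulates in hardware."""
    a = np.asarray(a, dtype=np.uint8)
    b = np.asarray(b, dtype=np.uint8)
    n = len(a)
    return np.array([np.sum(a[: n - lag] & b[lag:]) for lag in range(max_lag)])

# A random bit stream correlated against a delayed copy of itself
# produces a peak at the inserted delay.
rng = np.random.default_rng(7)
bits = rng.integers(0, 2, size=4096, dtype=np.uint8)
delayed = np.roll(bits, 5)
corr = onebit_correlation(bits, delayed, max_lag=16)
print(int(np.argmax(corr)))   # → 5, the inserted delay
```

One-bit quantization sacrifices amplitude information but reduces each multiply-accumulate to a gate and a counter, which is what makes GHz-rate operation feasible.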

Electron Self-energy in Simple Metals

Clifton Richardson, Cornell University

**Practicum Year:**1992

**Practicum Supervisor:**Dr. Gerald Mahan, , Solid State Division, Oak Ridge National Laboratory

We have calculated the band narrowing and the effective mass of electrons in a three-dimensional electron gas.