### Los Alamos National Laboratory

Who Lives in a Spore?

Olivia Asher, University of Georgia

**Practicum Year:** 2023

**Practicum Supervisor:** Aaron Robinson, Staff Scientist II, Bioscience Division, Los Alamos National Laboratory

Arbuscular mycorrhizal fungi (AMF) live inside plant roots and provide the plant with phosphorus, and sometimes nitrogen, in exchange for carbon. These fungi have the potential to supplement or replace fertilizers, especially in bioenergy crops like sorghum. AMF also harbor endobacteria, specifically Mollicutes-related endobacteria (MRE). Because some AMF and MRE may promote plant growth more than others, we must document the identities and genomic characteristics of the AMF and MRE that associate with sorghum before using AMF to promote growth in bioenergy sorghum. Since AMF live inside plants, it is challenging to 1) isolate AMF individuals and 2) identify microbes that associate specifically with AMF. To address these challenges, we are developing a workflow to sequence and analyze metagenomes of AMF and their endobacteria from single AMF spores isolated from bioenergy sorghum fields in Arizona. This method allows us to isolate sequences from individual AMF and their MRE for downstream genomic analysis.

Mapping magnetic fields in star-forming regions with deep learning

Nina Filippova, University of Texas at Austin

**Practicum Year:** 2023

**Practicum Supervisor:** Hui Li, Scientist, T-2, MS B227, Los Alamos National Laboratory

Magnetic fields shape the dynamics of star-forming regions, but are notoriously difficult to characterize in observations. The anisotropic nature of magnetized turbulence suggests a new method for mapping magnetic fields based on the orientation of velocity gradients in spectroscopic data. Motivated by the success of the velocity gradient technique, we designed a convolutional neural network (CNN) for predicting the magnetic field orientation from observations of 2D intensity and velocity data.

Building viral-immune models to better understand HIV control post-ART

Nicole Pagane, Massachusetts Institute of Technology

**Practicum Year:** 2023

**Practicum Supervisor:** Alan Perelson, Dr., Theoretical Biology and Biophysics, Los Alamos National Laboratory

The current treatment for HIV is antiretroviral therapy (ART). When ART is stopped, a patient's viral load will generally rebound within a few weeks to high pretreatment levels, or---in some rare individuals called post-treatment controllers (PTCs)---stay below a critical value for months. Thus, although ART can functionally treat people living with HIV, it is by no means a cure for most. To increase the number of patients that can become PTCs, recent work has focused on coupling ART with different immunotherapies to boost a person's immune system so that they can control HIV when ART is stopped. In several clinical trials, these combined therapies seem to not only induce the rare control or typical rebound of the virus, but to also induce a third oscillatory response with varying amplitudes and periods. To gain a better mechanistic understanding of the viral-immune dynamics, we develop a model of the virus and host immune system with features such as cytotoxic and non-cytotoxic effects on the viral load, immunological memory, and immune exhaustion. We fit the model to various post-ART clinical trial datasets to assess its generalizability and work toward understanding how certain parameters dictate the viral-immune dynamics in different individuals.
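Viral-immune models of this kind are typically small ODE systems coupling target cells, infected cells, free virus, and immune effectors. Below is a minimal sketch in that spirit (a standard target-cell-limited model with a cytotoxic effector term; the equations, parameter names, and values are illustrative, not the model or fits from this work):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (per day), not fitted to any trial data.
def rhs(t, y, lam=1e4, d=0.01, beta=8e-7, delta=1.0, m=1e-5,
        p=100.0, c=23.0, a=0.1, muE=0.05, K=1e3):
    T, I, V, E = y
    dT = lam - d * T - beta * T * V             # uninfected target cells
    dI = beta * T * V - delta * I - m * E * I   # infected cells, killed by E
    dV = p * I - c * V                          # free virus
    dE = a * E * I / (I + K) - muE * E          # effector expansion and decay
    return [dT, dI, dV, dE]

# Rebound from a small viral seed after treatment interruption
y0 = [1e6, 0.0, 10.0, 1.0]
sol = solve_ivp(rhs, (0.0, 100.0), y0, max_step=0.5)
```

Fitting such a model to post-ART data then amounts to estimating the rate parameters for each individual and asking which parameter regimes produce rebound, control, or oscillations.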

Evaluation of modeled subglacial discharge from the Antarctic Ice Sheet to the Southern Ocean

Courtney Shafer, University at Buffalo

**Practicum Year:** 2023

**Practicum Supervisor:** Matt Hoffman, Scientist III, T-3, Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory

While at Los Alamos, I used the subglacial hydrology model within the MPAS-Albany Land Ice model (MALI) to generate an Antarctic-wide map of subglacial water flux. The project involved creating various configurations of subglacial hydrology by tuning different parameters that control water flow at the base of the ice sheet, then choosing the subglacial flux map that was closest to reality. The fidelity of the generated maps was tested against real-world observations of localized ice shelf melting (is the modeled subglacial discharge being routed to ice shelves in expected ways?) as well as against observations of subglacial lakes (is the modeled subglacial hydrologic system across Antarctica pressurized correctly to resolve these subglacial lake features?). Because we used a simplified version of the subglacial hydrology model (water at the base of the ice sheet was assumed to be a distributed sheet of thickness h), as opposed to a more realistic model in which channels incised into the ice route water efficiently, we were unable to reproduce subglacial lake features. Future practicum work will involve improving the subglacial flux maps using an updated channel model, as well as creating quantitative analysis methods to compare subglacial flux with ice shelf melt rates and subglacial lake locations.

Using Large Language Models to Understand Online Narratives

Michael Tynes, University of Chicago

**Practicum Year:** 2023

**Practicum Supervisor:** Geoffrey Fairchild, Dr., Analytics, Intelligence, and Technology, Los Alamos National Laboratory

Online misinformation has increasingly large impacts on global affairs. For example, recent research shows that exposure to COVID-19-related misinformation both reduced vaccination intent and disrupted overall pandemic response efforts. My practicum leveraged Large Language Models (LLMs), a recent machine learning model family that has achieved record performance on several language processing tasks, to improve capabilities for understanding online narrative evolution. This included using embedding models coupled with clustering algorithms to understand topics, fine-tuned classifiers to identify narrative themes, and generative language models with retrieval-augmented generation to interrogate large corpora of internet documents. This work laid the foundation for continued method development and evaluation at Los Alamos.

Optimizing Enzymes for Digesting Plastics

Olivia Asher, University of Georgia

**Practicum Year:** 2022

**Practicum Supervisor:** Hau Nguyen, Scientist, Bioscience Division, Los Alamos National Laboratory

Plastic waste persists for thousands of years, negatively impacting environmental and human health. Ideally, we could enjoy the benefits of plastic without producing harmful waste; to do this, we must find new methods of breaking down plastics. Several plastic-digesting enzymes found in bacteria have been identified and characterized. The best known of these enzymes are PETase, LCC, and PHL7, which break down polyethylene terephthalate (PET), the type of plastic used in most food packaging. In this project, we use directed evolution to improve the activity, expression, and thermostability of LCC enzymes. The enzymes we create have the potential to digest post-consumer plastic waste in the future.

Hydrologic simulations of agricultural tile drainage using Advanced Terrestrial Simulator (ATS)

David Rogers, Stanford University

**Practicum Year:** 2022

**Practicum Supervisor:** David Moulton, Earth and Environmental Sciences, Los Alamos National Laboratory

The goal of this work is to incorporate the effects of agricultural tile drains into a process-based terrestrial model. Agricultural tile drains are important because they prevent the root zone from becoming overly saturated, but they can also allow nutrients from fertilizers to run off from the field into streams. These excess nutrients have been a primary cause of toxic algal blooms in the Great Lakes and the Gulf of Mexico. The end goal of this project is not to explicitly model each tile drain in a watershed, but to understand the aggregate behavior of tile drains so that they can be implicitly incorporated into watershed-scale models.

Neutron Star Crust Tracking in Dynamical Spacetime

Gabriel Casabona, Northwestern University

**Practicum Year:** 2022

**Practicum Supervisor:** Oleg Korobkin, R&D Scientist, Center for Theoretical Astrophysics, Los Alamos National Laboratory

Neutron stars (NSs) are the densest objects in the known universe. Their basic structure consists of a neutron-degenerate fluid encapsulated by a relatively thin crust. Recent astronomical observations of gamma-ray and X-ray bursts have led some astronomers to theorize that these may be caused by the cracking of the crust of an NS in a binary system. Since NSs are both highly massive and dense, they experience relatively strong gravitational perturbations from their companions. The goal of this project is to model and simulate the role these perturbations play in deforming the NS crust, specifically looking at what happens when the gravitational perturbations match the resonance frequency of the crust.

Directionally-unsplit hydrodynamics solver for the astrophysics code LEAFS

Alexandra Baumgart, California Institute of Technology

**Practicum Year:** 2021

**Practicum Supervisor:** Samuel Jones, Dr., X Computational Physics (XCP), Los Alamos National Laboratory

The project entailed writing a directionally-unsplit hydrodynamics solver for the finite volume code LEAFS, which is used for astrophysical simulations including supernovae. The hydrodynamics module includes multiple options for reconstructions, approximate Riemann solvers, and time integration schemes. A program with standard test problems was written to validate the new hydrodynamics solver.

Particle transport and acceleration in magnetic reconnection

Grant Johnson, Princeton University

**Practicum Year:** 2021

**Practicum Supervisor:** Fan Guo, Staff Scientist, Theoretical Division, Los Alamos National Laboratory

This practicum focused on particle transport and acceleration in magnetic reconnection regions. Previous 3D simulations have found that particle transport via turbulent, chaotic magnetic fields is critical for the formation of power-law distributions in nonrelativistic and mildly relativistic simulations. In 2D, particle transport is restricted because the particles are tied to their original field lines. This project addressed this discrepancy by adding ad-hoc scattering and diffusion to 2D PIC simulations of reconnection to mimic the 3D transport process, and showed that 3D physics can be reasonably reproduced by 2D simulations with scattering. This work provides a pathway that enables both larger and less expensive exploration of 3D magnetic reconnection by removing the need to resolve the third spatial dimension.
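The idea of an ad-hoc scattering operator can be illustrated with a toy Monte Carlo step that, at rate nu, isotropically redraws a particle's velocity direction while conserving its energy. This is a simplified stand-in for the scattering actually added to the PIC simulations, with made-up rates:

```python
import numpy as np

rng = np.random.default_rng(3)

def scatter(v, nu, dt):
    """With probability nu*dt per step, redraw each particle's velocity
    direction isotropically while keeping its speed (energy) fixed."""
    hit = rng.random(len(v)) < nu * dt
    speed = np.linalg.norm(v[hit], axis=1, keepdims=True)
    u = rng.standard_normal((hit.sum(), 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # random unit vectors
    v[hit] = speed * u
    return v

v = np.zeros((10000, 3))
v[:, 0] = 1.0                      # cold beam along x
for _ in range(200):
    v = scatter(v, nu=0.1, dt=0.1)
# speeds stay exactly 1 while the beam gradually isotropizes
```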

Variational algorithms for quantum computers of today and the future

Nicholas Ezzell, University of Southern California

**Practicum Year:** 2021

**Practicum Supervisor:** Andrew Sornborger, Research Scientist, Information Sciences CCS-3, Los Alamos National Laboratory

While at Los Alamos National Laboratory (LANL), I worked on three projects that all involved creating and testing variational quantum algorithms (VQAs). At a high level, a VQA is simply a means to solve an optimization problem that is intentionally designed to work well on quantum computers. We focused on the problems of (1) finding simple-to-describe approximations of complicated quantum states, (2) simulating quantum systems beyond the typical coherence-time limits of quantum computers, and (3) finding low-dimensional representations of quantum dynamics. In a sense, each project has the same goal: use a VQA to simplify a quantum object (a state, a Hamiltonian, etc.) that can then be used in a larger algorithm. In this way, our algorithms can be (and successfully were) implemented on current quantum hardware, yet they will remain useful when better hardware is built.

Seismic inverse analysis with automatic differentiation and a variational autoencoder

Sarah Greer, Massachusetts Institute of Technology

**Practicum Year:** 2021

**Practicum Supervisor:** Daniel O'Malley, Scientist, Earth and Environmental Sciences, Los Alamos National Laboratory

My primary research is in seismic inversion, which involves inverting wavefield measurements recorded at the surface of the Earth to produce a subsurface velocity model. Two major techniques for inversion, which I am actively trying to improve on in my thesis research, are full waveform inversion (FWI) and reverse-time migration (RTM). Both FWI and RTM rely on the adjoint state method, a method of computing the model update in an optimization scheme. Automatic differentiation fits naturally within the scheme of the adjoint state method, since it allows the model update to be calculated without explicitly developing the adjoint operator, which is a major inconvenience in FWI and RTM implementations. In theory, automatic differentiation and the pure adjoint state method should produce the same final result, with the notable difference that the adjoint field need not be directly calculated. This past summer, I successfully implemented reverse-time migration using an automatic differentiation framework and obtained promising results. The natural next step is to incorporate regularization into the inversion scheme.
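For a linear forward operator A, the equivalence is easy to see: the adjoint-state gradient of the least-squares misfit (1/2)||Am - d||^2 is A^T(Am - d), which is exactly the derivative an automatic differentiation tool would return. A toy check of this claim, with finite differences standing in for AD (the actual work differentiated through a wave-equation solver, not a dense matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))    # toy linear "forward modeling" operator
m_true = rng.standard_normal(10)
d = A @ m_true                       # synthetic observed data

def misfit(m):
    r = A @ m - d
    return 0.5 * r @ r

def adjoint_gradient(m):
    # Adjoint-state gradient of the least-squares misfit: A^T (A m - d)
    return A.T @ (A @ m - d)

m = rng.standard_normal(10)
g = adjoint_gradient(m)

# Central finite differences (playing the role AD plays in practice)
eps = 1e-5
g_fd = np.array([(misfit(m + eps * e) - misfit(m - eps * e)) / (2 * eps)
                 for e in np.eye(10)])
```

The two gradients agree to finite-difference accuracy, which is the property that lets AD frameworks replace hand-derived adjoint operators.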

Nuclear Material Inference as an Inverse Problem

Peter Lalor, Massachusetts Institute of Technology

**Practicum Year:** 2021

**Practicum Supervisor:** Emily Casleton, Computer, Computational, and Statistical Sciences, Los Alamos National Laboratory

The movement of nuclear material is an important question for the non-proliferation community. This work uses a statistical approach to predict the material that most likely produced a set of radiation measurements taken by a network of sodium-iodide gamma detectors. This is done by running a large number of simulations while varying material type and geometric parameters. A detector spectrum is calculated for each simulation, and the measured data are compared to the simulated data to determine which material produced the most likely match.

Su(N)ny: Fast and easy generalized spin simulations for arbitrary Hamiltonians

Cole Miles, Cornell University

**Practicum Year:** 2021

**Practicum Supervisor:** Kipton Barros, Staff Scientist, Theoretical Division, Los Alamos National Laboratory

The main aim of the project is to develop user-friendly code that implements classical spin dynamics for arbitrary spin Hamiltonians on arbitrary lattice geometries. We aim to implement a wide variety of useful simulation tools, including advanced sampling techniques, efficient handling of long-range dipole interactions, and acceleration using GPUs and parallelism. The end goal is to release this package publicly so that it is useful both to experimentalists, who wish to compare their data to a theoretical model but may not know how to code the simulation, and to theorists, who can then avoid rewriting spin simulation code for every new system.
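At its core, classical spin dynamics integrates the precession equation dS/dt = -S x B for each spin. A minimal single-spin sketch in Python (purely illustrative; it does not use or mirror the package's actual interface):

```python
import numpy as np

def precession_rhs(S, B):
    # Undamped classical spin precession: dS/dt = -S x B
    return -np.cross(S, B)

def rk4_step(S, B, dt):
    k1 = precession_rhs(S, B)
    k2 = precession_rhs(S + 0.5 * dt * k1, B)
    k3 = precession_rhs(S + 0.5 * dt * k2, B)
    k4 = precession_rhs(S + dt * k3, B)
    return S + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

B = np.array([0.0, 0.0, 1.0])       # field along z
S = np.array([1.0, 0.0, 0.0])       # spin starts along x
dt, nsteps = 0.01, 628              # ~one precession period (2*pi)
for _ in range(nsteps):
    S = rk4_step(S, B, dt)
# |S| is conserved and the spin returns near its starting orientation
```

A real package must do this for coupled lattices of spins, with the local field B derived from the Hamiltonian at each step, which is where sampling tools and GPU acceleration come in.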

Fast Emulation of Expensive Simulations using Approximate Gaussian Processes

Steven Stetzler, University of Washington

**Practicum Year:** 2021

**Practicum Supervisor:** Michael Grosskopf, Scientist, Computer, Computational, and Statistical Sciences, Los Alamos National Laboratory

Fitting a theoretical model to experimental data typically requires evaluation of the model at various points in the model's input space. When the model is a slow-to-compute physics simulation, it becomes infeasible to evaluate the model an arbitrary number of times. This fact makes Bayesian model fitting using Markov chain Monte Carlo methods infeasible, since producing accurate posterior distributions of the best-fit model parameters typically requires thousands (or millions) of model evaluations. To remedy this, a model that predicts the simulation output, an "emulator," can be used in lieu of the full simulation during model fitting. The emulator of choice in previous work is the Gaussian process (GP), a flexible, non-linear model that provides both a predictive mean and variance at each input point. The Gaussian process works well for small amounts of training data (<10^3), but becomes slow to train and use for prediction when the dataset size becomes large. Various methods can be used to speed up the Gaussian process in the medium-to-large dataset regime (>10^5), trading away predictive accuracy for drastically reduced runtimes. In this project, we analyzed several of these approximate Gaussian process methods, focusing on the accuracy-runtime tradeoff, in emulating nuclear binding energies predicted by density functional theory (DFT) models using the UNEDF1 and UNEDF2 parameterizations of the Skyrme energy functional. This work allows calibration of the UNEDF model parameters to experimental data in a Bayesian manner to be computationally feasible.
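The accuracy-runtime tradeoff can be illustrated with the simplest approximation, subset-of-data regression, which trains an exact GP on a random subsample. The project evaluated more sophisticated approximations against DFT outputs; this toy uses synthetic 1D data and an assumed RBF kernel:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(X1, X2, ell=0.5):
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(Xtr, ytr, Xte, noise=1e-2):
    # Exact GP posterior mean; the Cholesky/solve cost is O(n^3)
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    alpha = np.linalg.solve(K, ytr)
    return rbf(Xte, Xtr) @ alpha

n = 400
X = np.sort(rng.uniform(0.0, 6.0, n))
y = np.sin(X) + 0.1 * rng.standard_normal(n)   # noisy "simulation" outputs
Xte = np.linspace(0.2, 5.8, 100)

full = gp_predict(X, y, Xte)                   # exact GP on all n points
sub = rng.choice(n, 60, replace=False)         # subset-of-data: O(m^3), m << n
approx = gp_predict(X[sub], y[sub], Xte)

rmse_full = np.sqrt(np.mean((full - np.sin(Xte)) ** 2))
rmse_sub = np.sqrt(np.mean((approx - np.sin(Xte)) ** 2))
```

The subset model is much cheaper but typically somewhat less accurate; methods such as inducing-point approximations aim at a better point on this tradeoff curve.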

Physics-informed machine learning for seismic inversion

Sarah Greer, Massachusetts Institute of Technology

**Practicum Year:** 2020

**Practicum Supervisor:** Youzuo Lin, Scientist III, Earth and Environmental Sciences (EES-17), Los Alamos National Laboratory

InversionNet (https://arxiv.org/abs/1811.07875) is a novel machine learning model using CNNs that directly takes in seismic data and outputs the estimated velocity model that produced that data. In more "classical" geophysics, inverting for a model given the data is typically done using iterative methods and can be very computationally expensive. While InversionNet, after training, produces models quickly and efficiently, it does not incorporate any of the physics that define the forward and inverse problems. Seismic inversion is a field with very well-defined and easily-modeled physics. This summer, I introduced a way to incorporate the physics of the problem into the neural network. I did this by introducing an additional term in the objective function that involved forward modeling the wave equation using a finite difference solver. This increases the robustness of the network and allows the network to train faster compared to vanilla InversionNet.
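A simplified 1D version of such a physics term can be sketched as follows: penalize the finite-difference residual of the wave equation u_tt = c^2 u_xx alongside the data misfit. (The actual work used 2D finite-difference forward modeling inside the training objective; the shapes and weighting here are illustrative.)

```python
import numpy as np

def wave_residual(u, c, dt, dx):
    """Second-order FD residual of the 1D wave equation u_tt = c^2 u_xx,
    for a field u indexed as u[time, space]."""
    u_tt = (u[2:, 1:-1] - 2.0 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dt**2
    u_xx = (u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx**2
    return u_tt - c[None, 1:-1] ** 2 * u_xx

def physics_informed_loss(u_pred, d_obs, c_pred, dt, dx, lam=1.0):
    data_term = np.mean((u_pred[:, 0] - d_obs) ** 2)   # misfit at a "receiver"
    phys_term = np.mean(wave_residual(u_pred, c_pred, dt, dx) ** 2)
    return data_term + lam * phys_term

# Sanity check: an exact traveling wave gives a near-zero physics term.
x = np.linspace(0.0, 2.0 * np.pi, 101)
t = np.linspace(0.0, 1.0, 101)
c = np.ones_like(x)
u = np.sin(x[None, :] - t[:, None])                    # u(t, x) = sin(x - t)
total = physics_informed_loss(u, u[:, 0], c, t[1] - t[0], x[1] - x[0])
```

In training, a network's predicted velocity and wavefield would feed this combined objective, so gradients push the prediction toward both the data and the governing physics.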

Generic and flexible cross-section generation tool

Miriam Kreher, Massachusetts Institute of Technology

**Practicum Year:** 2020

**Practicum Supervisor:** Jack Galloway, Research Scientist, NEN-5, Los Alamos National Laboratory

The flexible cross-section generation tool is Python-based software that runs the open-source Monte Carlo code OpenMC to generate and optimize cross-sections for any reactor geometry. The optimization aims to choose energy group structures, as well as scattering representations, that maximize accuracy with respect to continuous-energy results while avoiding significant computational expense. It then outputs these cross-sections in an ISOXML format readable by the Idaho National Laboratory code suite MOOSE.
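The underlying condensation step is flux-weighted averaging over each energy group, sigma_g = ∫_g sigma(E) phi(E) dE / ∫_g phi(E) dE. A toy sketch with made-up data (the OpenMC tallies and ISOXML output are omitted; the 1/v cross section and flat spectrum are illustrative):

```python
import numpy as np

def trap(y, x):
    # plain trapezoidal integral (avoids version-specific numpy names)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def collapse(E, sigma, phi, edges):
    """Flux-weighted condensation: sigma_g = <sigma*phi>_g / <phi>_g."""
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (E >= lo) & (E <= hi)
        out.append(trap(sigma[m] * phi[m], E[m]) / trap(phi[m], E[m]))
    return np.array(out)

E = np.logspace(-3, 7, 2000)         # energy grid in eV
sigma = 1.0 / np.sqrt(E)             # toy 1/v cross section
phi = np.ones_like(E)                # flat weighting spectrum
edges = np.array([1e-3, 1.0, 1e7])   # a crude two-group structure
sig_g = collapse(E, sigma, phi, edges)
```

Choosing the group edges (and the scattering representation) so that the collapsed set reproduces continuous-energy results is exactly the optimization the tool performs.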

Detailed Kinetics Model for Condensed-Phase Reactive Materials

Logan Kunka, Texas A&M University

**Practicum Year:** 2020

**Practicum Supervisor:** Chong Chang, Staff Scientist, XCP-2, Los Alamos National Laboratory

During my practicum at LANL I developed models describing condensed-phase reactions, incorporating chemical reactions and phase changes into a single framework. Detailed kinetics resolved major species, reaction rates, and thermodynamic properties while capturing the relevant physics at appropriate scales. These models were used to study deflagration-to-detonation transition (DDT) in condensed-phase reactive materials.

Data-driven nonparametric optimization of an ocean turbulence model

Justin Finkel, University of Chicago

**Practicum Year:** 2019

**Practicum Supervisor:** Nathan Urban, Energy Security Fellow, CCS-2, Los Alamos National Laboratory

This summer, we attacked the difficult problem of turbulence parameterization with methods drawn from modern optimization and machine learning. Small-scale turbulence is key to understanding transport in the atmosphere and oceans, but impossible to sufficiently resolve in computer models at a global scale. Instead, turbulence is usually represented (or "parameterized") as enhanced diffusion, often according to ad-hoc functional forms. We took a popular scheme in boundary layer modeling, the K-profile parameterization, and made small perturbations to the diffusivity function using a Gaussian process, allowing for flexible exploration of function space. Using a basic optimization procedure, we achieved ~25% improvement from the default configuration, as measured by vertical heat flux compared to that in (higher-resolution) large eddy simulation.
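Perturbing a diffusivity profile with a Gaussian process can be sketched as below: sample a smooth zero-mean function from a squared-exponential GP and apply it multiplicatively to a baseline profile so the diffusivity stays positive. The profile shape and hyperparameters are illustrative, not those of the actual K-profile parameterization:

```python
import numpy as np

rng = np.random.default_rng(2)
z = np.linspace(0.0, 1.0, 50)               # normalized boundary-layer depth
K0 = 1e-2 * z * (1.0 - z) ** 2              # toy baseline diffusivity profile

# Sample a smooth zero-mean perturbation from a squared-exponential GP
ell, amp = 0.2, 0.3
cov = amp**2 * np.exp(-0.5 * (z[:, None] - z[None, :]) ** 2 / ell**2)
L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(z)))
delta = L @ rng.standard_normal(len(z))

K_perturbed = K0 * np.exp(delta)            # multiplicative: K stays positive
```

An optimizer then scores each perturbed profile against a reference (here, heat fluxes from large eddy simulation) and searches function space for the best-performing diffusivity.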

Developing a General-Relativistic Smoothed Particle Hydrodynamics Code for Simulating Neutron Star Mergers

Steven Fromm, Michigan State University

**Practicum Year:** 2019

**Practicum Supervisor:** Oleg Korobkin, Scientist 2, CCS-7, Los Alamos National Laboratory

Recent gravitational wave observations of neutron star mergers (NSMs) by LIGO have increased the need for more detailed and complex simulations of these events. State-of-the-art simulations are necessary for interpreting the observational data and gaining insight into the underlying physics of NSMs, such as r-process nucleosynthesis and the nuclear equation of state. Numerically solving the Einstein field equations of general relativity (GR) in addition to the GR hydrodynamic equations is essential for accurate simulations of NSMs. In my practicum project I worked on the development of SPaRTA, a novel hybrid GR smoothed particle hydrodynamics (SPH) code that couples a grid-based evolution of the Einstein field equations with a mesh-free Lagrangian SPH solver.

Solving a hydrologic inverse problem with a quantum annealer

Sarah Greer, Massachusetts Institute of Technology

**Practicum Year:** 2019

**Practicum Supervisor:** Daniel O'Malley, Scientist, Earth and Environmental Sciences, Los Alamos National Laboratory

Quantum computing has advanced to the point that solving basic inverse problems in fields of interest can demonstrate proof-of-concept results. The problems solved should be simple enough to work with the current early state of quantum computing hardware, which is comparable to problems solved in the early stages of classical computing. We use LANL's D-Wave 2000Q quantum annealer to solve an indirect hydrologic inverse problem. In this problem, we have a synthetic survey where hydrologic head measurements are taken at regular intervals over an area of interest. We then use these measurements to invert for subsurface permeability over the survey area, where our model is constrained such that each permeability mesh node can take one of two known values. We invert for the model iteratively, where each model update takes the form of a quadratic unconstrained binary optimization problem which can be solved on the quantum annealer. Successfully solving problems of this type may help establish the importance of quantum computing for the future of hydrologic inverse analysis (LA-UR-19-2648).
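Each model update is a QUBO: minimize x^T Q x over binary vectors x. A tiny assumed example, solved here by exhaustive classical search (the practicum submitted such problems to the D-Wave annealer instead):

```python
import itertools
import numpy as np

def solve_qubo_brute(Q):
    """Minimize x^T Q x over binary vectors by exhaustive search."""
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy QUBO: diagonal terms reward x0 and x1; the off-diagonal
# coupling penalizes turning on x0 and x2 together.
Q = np.array([[-1.0, 0.0, 2.0],
              [0.0, -1.0, 0.0],
              [2.0, 0.0, -0.5]])
x, e = solve_qubo_brute(Q)   # -> x = [1, 1, 0], energy = -2.0
```

Brute force scales as 2^n, which is exactly why larger instances are handed to an annealer; in the hydrologic problem each bit encodes one of the two permeability values at a mesh node.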

Emulation and calibration of Mars Rover spectra for surface composition analysis

Claire-Alice Hebert, Stanford University

**Practicum Year:** 2019

**Practicum Supervisor:** Kary Myers, Scientist, CCS-6, Statistical Sciences, Los Alamos National Laboratory

The Mars Rover carries onboard an instrument designed to measure the composition of surface rocks. Accurately identifying the component elements in a sample is complicated by nonlinear physics, which renders brute-force methods impossible. Instead, we studied the use of a statistical method, called emulation and calibration, to perform this disaggregation as a proof-of-concept solution for this problem.

Dynamic stability of microgrids under fluctuating power production and consumption levels

Anya Katsevich, New York University

**Practicum Year:** 2019

**Practicum Supervisor:** Yury Maximov & Michael Chertkov, Postdoctoral Fellow & Technical Staff Member, CNLS/T-4, Los Alamos National Laboratory

Random fluctuations in power generation and consumption could impact the reliability of micro-grids, or small-scale localized electric grids. In this project, we investigated the reliability of low-inertia micro-grid dynamics under Gaussian fluctuations of power. In the weak noise regime, we estimated the expected time in which the random power grid dynamics leads to a violation of a security constraint. We also estimated the probability of the rare event in which a constraint is violated in an abnormally low amount of time.

Fully Local Quasidiffusion

Samuel Olivier, University of California, Berkeley

**Practicum Year:** 2019

**Practicum Supervisor:** James Warsa, CCS-2, Los Alamos National Laboratory

A new cell-local discretization for the Quasidiffusion equations was developed and implemented in the Capsaicin radiation transport code at Los Alamos. The method uses information from the high-order solution to form interior interface conditions that decouple the cells, avoiding the solution of a large system of equations.

Detecting CO2 hot spots in the Southern Ocean

Riley Brady, University of Colorado

**Practicum Year:** 2018

**Practicum Supervisor:** Mathew Maltrud, Scientist IV, Theoretical Division (Group T-3), Los Alamos National Laboratory

The global oceans take up roughly 30% of anthropogenic CO2 emissions, with the majority of that uptake assumed to occur in the Southern Ocean. However, this estimate is based on sparse observations that are generally biased toward the summer months. Recently, CO2-measuring autonomous floats have measured significantly stronger CO2 outgassing from the Southern Ocean to the atmosphere in various "hot spots" around the Antarctic ice edge. A 2017 study used a high-resolution climate model to show that these hot spots exist physically, and are forced by ocean topography. However, no study to date has investigated the biogeochemical implications of these hot spots. Our goal is to use the state-of-the-art DOE global climate model to resolve these physical hot spots and to investigate their biogeochemical properties, such as anomalous air-sea CO2 and O2 exchange with the atmosphere.

Emulation for ChemCam Data Analysis

Claire-Alice Hebert, Stanford University

**Practicum Year:** 2018

**Practicum Supervisor:** Kary Myers, Scientist, Statistical Sciences Group, Los Alamos National Laboratory

The Mars Rover carries onboard an instrument, ChemCam, designed to measure the composition of rocks using laser-induced breakdown spectroscopy (LIBS). The disaggregation of component elements given these spectra is complicated by so-called matrix effects, which influence the relative heights of emission lines. The time-intensive plasma physics code ATOMIC has been used to model these spectra, but using it for forward modeling is intractable given the large parameter space to explore. In particular, for disaggregation one must identify which elements are in the potentially very complex sample, as well as their proportions. Emulators have been proposed as a fast way to do this analysis, and my practicum project was aimed at exploring whether such methods could feasibly be used for this application.

Accelerating Time-Dependent Monte Carlo Algorithms with Explicit Delayed Neutron Precursors

Miriam Kreher, Massachusetts Institute of Technology

**Practicum Year:** 2018

**Practicum Supervisor:** Travis Trahan, Research Staff, XCP-3: Monte Carlo Methods, Codes, & Applications, Los Alamos National Laboratory

This work quantified the computational savings from variance reduction techniques related to delayed neutron precursors in nuclear systems. Manipulating the weight of delayed neutrons with respect to prompt neutrons allowed the Los Alamos Monte Carlo Application ToolKit code to accelerate during its transient calculations.

Modular emulation and calibration using Hamiltonian Monte Carlo sampling in Stan in the context of dynamic compression experiments

Kelly Moran, Duke University

**Practicum Year:** 2018

**Practicum Supervisor:** Earl Lawrence, Scientist, Computer, Computational, and Statistical Sciences, Los Alamos National Laboratory

I worked on an ongoing LDRD project over the summer. The goal of this LDRD project is to develop the capability to accelerate knowledge and discovery from experimental scientific facilities in the context of dynamic compression experiments. These dynamic compression experiments consist of a multi-dimensional input parameter space (some of which is estimated, some of which is set by the experimenter) leading to a multi-dimensional output space. Inputs the experimenter sets include such parameters as time delay of X-ray probe pulse and angle of X-rays relative to shock. Those that must be estimated include shock pressure, material strength, and crystal orientations. The measured outputs include velocimetry, diffraction, and imaging. The statistical component of the project focuses on improving experimental uncertainty via pre-built Gaussian process emulators that can be used quickly in later analyses. It is hoped that emulation can facilitate accurate experiment calibration, i.e., determining the distribution of physics parameters that best match the data. My work was on implementing a calibration routine using Hamiltonian Monte Carlo sampling in Stan in this context.

A kinetic model for electron heating in antiparallel magnetic reconnection

Blake Wetherton, University of Wisconsin-Madison

**Practicum Year:** 2018

**Practicum Supervisor:** William Daughton, Scientist, X-Theoretical Design (XTD), Los Alamos National Laboratory

This practicum project is a study of electron energization processes associated with magnetic reconnection through fully kinetic particle-in-cell simulations. We investigate a method of electron bulk heating wherein energy is exchanged through the parallel potential into bulk streaming energy in beams and is then converted thermally through an effective scattering process; this scattering process is based on the breakdown of the electron magnetic moment as an adiabatic invariant in the reconnection exhaust, which causes the distribution to be independent of the magnetic moment in that region. A simplified differential equation has been derived to explain this thermalization, though it depends on several parameters related to the efficiency of first and second order Fermi acceleration that do not yet have well-constrained values and dependencies. In this practicum, tools were designed to analyze this process and effectively compute the parameters relevant to the model. These tools were tested and verified on small VPIC simulations of antiparallel reconnection such that larger simulations can put appropriate bounds on the model and analyze dependencies on upstream parameters. One large simulation was started, and we intend to run more.

Snow on Sea Ice in the ACME climate model

Kelly Kochanski, University of Colorado

**Practicum Year:** 2017

**Practicum Supervisor:** Elizabeth Hunke, Deputy Group Leader, Theoretical Division (T-3), Los Alamos National Laboratory

The ACME climate model is the Department of Energy's next-generation Earth System Model. I developed the snow thermodynamics of MPAS-seaice, the sea ice component of ACME, which is based on the Los Alamos Sea Ice Model (CICE) and widely used in Earth System models and shipping forecasts.

Accelerating molecular simulations of lipid bilayers

Sean Marks, University of Pennsylvania

**Practicum Year:** 2017

**Practicum Supervisor:** Angel Garcia, Director, Center for Nonlinear Studies (CNLS), Los Alamos National Laboratory

Under the direction of Dr. Angel Garcia of the CNLS at Los Alamos National Laboratory (LANL), I studied a new molecular dynamics (MD) method for enhancing simulations of lipid bilayers. Such systems are of great interest in the physics of cell signaling, but possess exceptionally long time scales and are therefore very challenging to study properly. By applying the method of Replica Exchange with Solute Tempering (REST), we were able to converge our system's statistics roughly an order of magnitude faster than with conventional MD.

Speeding up the scientific process at experimental x-ray facilities through the use of Gaussian process emulators

Kelly Moran, Duke University

**Practicum Year:**2017

**Practicum Supervisor:**Earl Lawrence, Scientist, Computer, Computational, and Statistical Sciences, Los Alamos National Laboratory

I worked on an ongoing LDRD project over the summer. Its goal is to develop the capability to accelerate knowledge and discovery from experimental scientific facilities in the context of dynamic compression experiments. These experiments consist of a multi-dimensional input parameter space (some inputs estimated, some set by the experimenter) leading to a multi-dimensional output space. Inputs the experimenter sets include parameters such as the time delay of the X-ray probe pulse and the angle of the X-rays relative to the shock. Those that must be estimated include shock pressure, material strength, and crystal orientations. The measured outputs include velocimetry, diffraction, and imaging. The statistical component of the project focuses on reducing experimental uncertainty via pre-built Gaussian process emulators that can be used quickly in later analyses. The hope is that emulation can facilitate accurate experiment calibration, i.e., determining the distribution of physics parameters that best match the data. My work was on incorporating both distributional and Markov chain Monte Carlo (MCMC) uncertainty into the pre-built emulator and parallelizing the code. I also compared the performance of the Metropolis-Hastings algorithm and Hamiltonian Monte Carlo for parameter estimation in this context.
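As an illustration of the emulation idea, here is a minimal Gaussian process regression sketch in plain NumPy. This is not the LDRD code; the squared-exponential kernel, length scale, and toy simulator are all assumptions for demonstration. A cheap emulator is trained on a handful of "expensive" simulator runs and then predicts outputs, with uncertainty, at untried inputs.

```python
import numpy as np

def sq_exp_kernel(X1, X2, length=0.5, var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP emulator."""
    K = sq_exp_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = sq_exp_kernel(X_test, X_train)
    Kss = sq_exp_kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Train the emulator on a few runs of a toy "simulator",
# then predict at untried inputs with uncertainty.
simulator = lambda x: np.sin(3 * x)
X_train = np.linspace(0.0, 2.0, 8)
y_train = simulator(X_train)
mean, var = gp_predict(X_train, y_train, np.array([0.5, 1.5]))
```

Once built, such an emulator can be evaluated thousands of times inside an MCMC loop at negligible cost compared with rerunning the experiment or simulation.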

Extending Accelerated MD methods to soft-matter systems

Laura Watkins, University of Chicago

**Practicum Year:**2017

**Practicum Supervisor:**Arthur Voter, Dr., Theoretical, Los Alamos National Laboratory

Accelerated molecular dynamics (AMD) is a group of methods aimed at simulating systems on long timescales not attainable with regular MD. These methods are fairly well developed for hard material systems, but applying them to softer systems (such as proteins) is much more difficult and remains unsolved. I worked on extending AMD methods to such systems--specifically, I focused on how to define kinetic states for these flexible systems.

Stability of pick-up ion distributions in the outer heliosheath

Kathleen Weichman, University of California, San Diego

**Practicum Year:**2017

**Practicum Supervisor:**Gian Luca Delzanno, T-5 Applied Mathematics and Plasma Physics, Los Alamos National Laboratory

The IBEX ribbon, a bright streak of energetic neutral atom emission observed by the IBEX spacecraft, is believed to be caused by pick-up ions (PUIs) in the outer heliosheath experiencing the local interstellar magnetic field (LISM). These pick-up ions originate as fast solar wind neutrals and undergo charge exchange in the outer heliosheath, launching them into a helical orbit along the magnetic field lines. If the velocity of the solar wind neutral is perpendicular to the LISM, the orbit is circular rather than helical and, provided the PUI distribution is not destroyed by instabilities, another eventual charge exchange may send it back towards Earth as an energetic neutral atom, where it can be collected by IBEX. While this is the generally accepted explanation of the IBEX ribbon, the survival of PUI distributions in the outer heliosheath is called into question by a simple linear stability analysis. Traditional particle-in-cell (PIC) simulation methods have thus far been unable to capture PUI dynamics over the roughly two-year charge exchange time because of the necessity of resolving the short ion (roughly 40 s) or electron (sub-second) scales. The goal of my practicum project was to apply a new simulation tool, the Spectral Plasma Solver (SPS), to the PUI stability problem in the hope of making a definitive statement about the proposed origin of the IBEX ribbon. Because SPS is an implicit spectral Vlasov method, it has the advantages over PIC methods of being free from statistical noise and able to step over fast time scales. During my practicum, I successfully simulated realistic pick-up ion distributions while stepping over electron time scales by a factor of 200,000, a first for this problem.

Code Interfacing for Practical Implementation of the Coupled Wavepackets Algorithm for Nonadiabatic Dynamics

Morgan Hammer, University of Illinois at Urbana-Champaign

**Practicum Year:**2016

**Practicum Supervisor:**Sergei Tretiak, Technical Staff Member, Theoretical Division and CINT, Los Alamos National Laboratory

The goal of this project was to interface two in-house codes produced within the Tretiak group in order to allow the recently developed coupled wavepackets algorithm to be used to study molecular systems. Previously, the algorithm has only been used to study model systems.

Optimization of Parameterizations for Density Functional Tight Binding Theory using Machine Learning

Aditi Krishnapriyan, Stanford University

**Practicum Year:**2016

**Practicum Supervisor:**Marc Cawkwell, Staff Scientist, Los Alamos Theoretical Division, Los Alamos National Laboratory

In this project, a novel, fully automated optimization package utilizing some machine learning techniques was used to optimize density functional based tight-binding (DFTB) parameters described by semi-empirical simplified functional forms. Essentially, the goal was to get close to the accuracy of density functional theory while maintaining the speed of calculations of tight-binding theory. This parameterization scheme is transferable and greatly reduces errors in atomization energy, molecular geometry, and molecular dipole moment upon optimization. The error is also minimized for initial parameters with up to 10% perturbation, displaying flexibility in choice of initial parameter predictions. This optimization package was applied to LATTE, a tight-binding code developed at LANL.

Cold atmospheric plasma-based electrostatic disruption of bacteria and cancer cells

Kathleen Weichman, University of California, San Diego

**Practicum Year:**2016

**Practicum Supervisor:**Gian Luca Delzanno, Research Scientist, T-5 Applied Mathematics and Plasma Physics, Los Alamos National Laboratory

The search for novel bacterial disinfection and cancer treatment techniques has resulted in a new application for cold atmospheric plasma (CAP) devices at the intersection of plasma physics and medicine. CAP exposure has been successfully used to destroy bacteria and selectively kill cancer cells in vitro and in vivo, but the theoretical underpinning has neglected a full discussion of plasma physics effects related to the experimental parameter regime. My practicum project was to bring a discussion of plasma charging in collisional plasmas to the field of plasma medicine. Specifically, previously neglected plasma capacitance effects lower the threshold for electrostatic disruption of bacteria and render possible the selective disruption of cancer cells under direct plasma exposure.

Power System Estimation

Tommie Catanach, California Institute of Technology

**Practicum Year:**2014

**Practicum Supervisor:**Russell Bent, Staff Scientist, Energy and Infrastructure Analysis, Los Alamos National Laboratory

Developing methods for state estimation and system identification is essential for increasing the reliability of the power grid, which is becoming increasingly complex and subject to more disturbances. Typically this problem has been solved on steady-state time scales; however, dynamics are becoming more important to power systems, necessitating faster estimation. With the deployment of phasor measurement units (PMUs) throughout the system, such fast estimation is now possible. It calls for a layered learning architecture that integrates state estimation, change-point detection, and classification of disturbances. Thinking of these estimation algorithms and the controls as a layered system improves our ability to design optimal architectures that are both fast and flexible. State estimation can be achieved using Kalman-filtering and particle-filtering techniques, which assume a system topology and a dynamics model. These techniques are adapted to the differential-algebraic equations that describe the power system, and their robustness to noise estimates and to the number of PMUs is explored. Using the estimates from these filters, we can make forward predictions of the future system state, which can then be compared to the actual PMU data to identify large unexpected deviations. These change points then trigger a topology-change classifier to identify the new topology of the system after a failure such as a line loss.
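The filtering layer described above can be illustrated with a minimal linear Kalman filter predict/update cycle. This is a generic textbook sketch, not the grid-specific differential-algebraic formulation; the scalar model and all matrices below are hypothetical.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P: current state estimate and covariance; z: new measurement."""
    # Predict forward using the dynamics model F
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the measurement z
    y = z - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: track a constant scalar state from repeated measurements.
F = np.eye(1)
H = np.eye(1)
Q = np.eye(1) * 0.01
R = np.eye(1)
x, P = np.zeros(1), np.eye(1)
for _ in range(50):
    x, P = kalman_step(x, P, np.array([5.0]), F, H, Q, R)
```

In the grid setting, the innovation `y` is also what a change-point detector would monitor: a persistently large innovation signals that the assumed topology no longer matches the PMU data.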

Chance constrained optimal power flow

Miles Lubin, Massachusetts Institute of Technology

**Practicum Year:**2014

**Practicum Supervisor:**Russell Bent, Staff Scientist, Energy and Infrastructure Analysis Group, Los Alamos National Laboratory

During the practicum, I worked with researchers at LANL, fellow summer students, and a professor at Columbia on developing, implementing, and evaluating a model for integrating highly variable renewable energy from wind into a power-grid control problem called optimal power flow, which sets generation levels to match demand on a short term scale. Treating deviations from wind generation forecasts as a random variable, we introduced so-called chance constraints into the optimization problem using a model that remained practically tractable. In a realistic computational study, we found that the model had tangible operational benefits in terms of reducing costs and real-time corrective actions.
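For a single Gaussian chance constraint, the deterministic equivalent that keeps the problem tractable is straightforward: requiring P(dispatch + wind error &lt;= limit) &gt;= 1 - epsilon with N(0, sigma^2) error becomes dispatch &lt;= limit - z_(1-epsilon) * sigma. A toy sketch using only the standard library (the line limit, sigma, and tolerance are made-up numbers, not values from the study):

```python
from statistics import NormalDist

def max_safe_dispatch(line_limit, wind_sigma, epsilon):
    """Deterministic equivalent of the chance constraint
    P(dispatch + wind_error <= line_limit) >= 1 - epsilon
    when the wind forecast error is N(0, sigma^2):
    dispatch <= line_limit - z_{1-eps} * sigma."""
    z = NormalDist().inv_cdf(1.0 - epsilon)  # standard normal quantile
    return line_limit - z * wind_sigma

# Hypothetical numbers: 100 MW line limit, 10 MW forecast-error
# standard deviation, 5% allowed violation probability.
margin = max_safe_dispatch(100.0, 10.0, 0.05)
```

Because each chance constraint reduces to a linear constraint with a quantile-scaled reserve term, the overall optimal power flow problem stays in a class that off-the-shelf solvers handle efficiently.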

Task-based dictionary learning using neural networks

Britni Crocker, Massachusetts Institute of Technology

**Practicum Year:**2013

**Practicum Supervisor:**Garrett Kenyon, Physics, Los Alamos National Laboratory

We used a neural-network-based model for sparse coding with Hebbian connections to learn a task-based dictionary for both image reconstruction and image categorization. Previous efforts in this area have built such dictionaries with greedy algorithms or by creating many one-vs-all dictionaries for each category. Our approach was to train all layers of the neural network simultaneously, with one dictionary to separate all categories; this way, our algorithm is easily implementable in hardware and scales with the number of categories.

Linear-Multi-Frequency-Grey Preconditioning for Radiative Transfer Sn Calculations

Andrew Till, Texas A&M University

**Practicum Year:**2013

**Practicum Supervisor:**Jim Warsa, Computational Physics and Methods Group (CCS-2), Los Alamos National Laboratory

I worked on a neutral-particle physics code at the lab, implementing an acceleration scheme to reduce the number of iterations required for convergence, and compared the efficiency of two possible formulations of the method.
For those with a nuclear engineering background: we were working in Capsaicin, investigating linear multifrequency gray (LMFG) preconditioning of the radiation transport equation applied to thermal photons. We investigated using either the scalar flux or the absorption rate density as the primary unknown. The advantage of the former is that scattering can be accounted for cheaply; the advantage of the latter is that the vector sizes are smaller, which ought to lead to faster computation. We found that the difference in vector size had a negligible effect, but the ability to handle scattering without inner iterations had a strong effect on both iteration count and time to solution.

Electronic descriptors for the prediction of photovoltaic properties of polymers

Jarrod McClean, Harvard University

**Practicum Year:**2012

**Practicum Supervisor:**Sergei Tretiak, Staff Scientist, Theoretical Division Group T-1/CINT, Los Alamos National Laboratory

The project involved taking a set of molecules, whose properties are known experimentally, and attempting to predict the open circuit voltage which results from a bulk-heterojunction photovoltaic built from a polymer of that molecule and PCBM. We wished to build a set of electronic descriptors which could be used to predict the performance of certain materials before they are manufactured. These electronic descriptors were derived from ab initio quantum chemistry calculations.

Using Chemical and Structural Features to Predict Transcription Factor Binding Sites

Mark Maienschein-Cline, University of Chicago

**Practicum Year:**2011

**Practicum Supervisor:**Bill Hlavacek, Center for Nonlinear Studies, Los Alamos National Laboratory

My project aimed to use known transcription factor binding sites in DNA to predict other sites. For many transcription factors, a small number (ranging from a handful to several dozen) of binding sites are known from direct experimental evidence. Many methods exist that use the DNA letter sequences of these binding sites to construct a position weight matrix (PWM), which is then used to predict binding sites.
However, transcription factors and DNA are molecules, so their interaction is governed by the local shape and electrostatics of the DNA, not by the DNA letter sequence alone. Our goal was to summarize these interactions by computing structural and chemical features of DNA and DNA-transcription factor complexes, and to use these features to train a support vector machine (SVM) to classify (predict) other potential binding sites. We obtained a significant improvement over the usual PWM methods, attributable both to the SVM algorithm and to the specific chemical and structural features we calculated.
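For context, the baseline PWM approach the SVM was compared against can be sketched in a few lines: count bases at each position of the known sites, convert to log-odds against a background, and score candidates by summing per-position weights. This is a generic textbook construction with made-up example sites, not the project's actual data or code.

```python
import numpy as np

BASES = "ACGT"

def build_pwm(sites, pseudocount=0.5):
    """Log-odds position weight matrix from aligned binding sites,
    scored against a uniform background."""
    L = len(sites[0])
    counts = np.full((4, L), pseudocount)   # pseudocounts avoid log(0)
    for s in sites:
        for j, b in enumerate(s):
            counts[BASES.index(b), j] += 1
    freqs = counts / counts.sum(axis=0)
    return np.log2(freqs / 0.25)            # log-odds vs uniform background

def score(pwm, seq):
    """Sum of per-position log-odds scores for a candidate site."""
    return sum(pwm[BASES.index(b), j] for j, b in enumerate(seq))

# Hypothetical aligned binding sites (not real experimental data)
sites = ["TATAAT", "TATGAT", "TACAAT", "TATAAT"]
pwm = build_pwm(sites)
```

A PWM scores each position independently; the SVM approach in the project instead operates on chemical and structural feature vectors, which is what lets it capture interactions the letter-frequency model misses.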

Dynamics of the Quantum Phase Transition in the Mixed Field Ising Model

Norman Yao, Harvard University

**Practicum Year:**2011

**Practicum Supervisor:**Wojciech Zurek, Laboratory Fellow, Theory Division (T-4), Los Alamos National Laboratory

The transverse field Ising model (TFIM) is one of the paragons of a quantum phase transition; when the coupling and field strength are equivalent, it exhibits a transition between a ferromagnetic and paramagnetic state. Amazingly, this complex model of elementary spins can actually be solved exactly by mapping the problem onto that of non-interacting fermions via the Jordan-Wigner transformation. However, once a longitudinal field is turned on, the model is generally no longer exactly solvable - except at the TFIM critical point. The solution at this critical point was developed by Zamolodchikov and involves a mapping to an Ising field theory. Recent experiments by Coldea et al. have claimed to demonstrate a remarkable prediction of the field theory, namely that the lowest energy eigenstates are governed by the E8 Lie algebra. In my project, rather than examining static energies, we are examining the dynamics of the mixed field transition, in the hope that the emergent E8 symmetry will leave artifacts in a quench experiment.

Ice Sheet Model Integration

Tobin Isaac, University of Texas

**Practicum Year:**2010

**Practicum Supervisor:**William Lipscomb, Computational Fluid Dynamics (T-3), Los Alamos National Laboratory

For the first time ever, climate models are coming online which include dynamic ice sheets that interact with the ocean and atmosphere. At the same time, a plethora of newer, more sophisticated models of ice sheet dynamics are being designed by researchers around the world. The project was to create a common interface for these models with the Community Ice Sheet Model (CISM), which is the ice component of the Community Earth Systems Model (CESM). Such an interface allows modelers to take advantage of realistic forcing from CISM, and also allows CISM to seamlessly integrate advances as they occur.

cl.egans: A high-performance spiking neural network simulation package

Cyrus Omar, Carnegie Mellon University

**Practicum Year:**2010

**Practicum Supervisor:**Garrett Kenyon, Staff Scientist, Physics, Los Alamos National Laboratory

cl.egans is an OpenCL-accelerated, Python-based neurobiological circuit simulation package which I developed over the summer, concurrently with the development of a new programming language for OpenCL called cl.oquence. This language incorporated a novel static, structural type system with automatic type inference from within a parent dynamic language. This setup was leveraged to produce an extensible type system which merged the concepts of LISP-style macros and metaobject protocols.
Using these features, cl.egans operated as a tree-based simulation construction language, and included features such as automatic replication for simulations which require multiple realizations of a single network, as well as several analysis tools.

Velvetrope: an algorithm for rapidly finding local alignments between a sequence of interest (SOI) and multiple test sequences

Scott Clark, Cornell University

**Practicum Year:**2009

**Practicum Supervisor:**Nick Hengartner, Group Leader, Discrete Simulation Sciences (CCS-5), Los Alamos National Laboratory

We developed an algorithm that rapidly finds local alignments within genetic sequences. It uses a novel bit-shift algorithm that finds areas of highly probable local alignment regardless of positioning within a sequence. It can be used to find subsequences of interest within a larger sequence compared against others, or to discover new highly conserved binding regions. One of its main advances is speed: it is orders of magnitude faster than the current standard multiple sequence alignment algorithms.
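The flavor of bit-parallel matching can be conveyed with a toy sketch. This illustrates the general bit-shift idea, not the published Velvetrope implementation: each base is encoded as a bitmask, so counting identities between the SOI and a shifted test sequence reduces to one AND and one popcount per base, instead of a character-by-character loop.

```python
def base_masks(seq):
    """One integer bitmask per base: bit i is set where seq[i] is that base."""
    masks = {b: 0 for b in "ACGT"}
    for i, b in enumerate(seq):
        masks[b] |= 1 << i
    return masks

def matches_at_shift(soi, test_seq, shift):
    """Count positions where soi agrees with test_seq offset by `shift`,
    using bitwise AND plus popcount."""
    m_soi, m_test = base_masks(soi), base_masks(test_seq)
    return sum(bin(m_soi[b] & (m_test[b] >> shift)).count("1")
               for b in "ACGT")

def best_shift(soi, test_seq):
    """Offset of test_seq against soi with the most identities."""
    return max(range(len(test_seq)),
               key=lambda s: matches_at_shift(soi, test_seq, s))

# Hypothetical sequences: test_seq contains soi's prefix starting at offset 2
soi, test_seq = "ACGTACGT", "TTACGTAC"
shift = best_shift(soi, test_seq)
```

Because every shift is evaluated with a handful of word-level bit operations, the cost per sequence pair stays low even as sequences grow, which is the source of the speedup over classical alignment.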

Comparative Monte Carlo Efficiency by Monte Carlo Analysis

Brenda Rubenstein, Columbia University

**Practicum Year:**2009

**Practicum Supervisor:**Dr. James Gubernatis, Theoretical, T-4, Los Alamos National Laboratory

The acceptance ratio has long been a trusted rule of thumb for characterizing the performance of Monte Carlo algorithms. But is this trust entirely merited? In this work, we illustrated that the second eigenvalue of a Markov Chain Monte Carlo algorithm's transition matrix is more indicative of the algorithm's underlying convergence than is an acceptance ratio. By monitoring the second eigenvalue of the Metropolis and Multiple-Site Heat Bath algorithms as applied to the one- and two-dimensional Ising models, and that of the Metropolis algorithm as applied to a series of coupled oscillators with infinite numbers of transition matrix elements, we found that the second eigenvalue is better able to capture convergence behavior that is temperature-independent. Furthermore, trends in the second eigenvalue suggested that the Metropolis algorithm converges faster than Multiple-Site Heat Bath algorithms and that the convergence of all algorithms slows as system sizes grow. The second eigenvalue was computed for small system sizes via standard matrix diagonalization methods as well as a deterministic modified power method. For system sizes whose subdominant eigenvalues could not be obtained deterministically without excessive computational expense, we employed a novel Monte Carlo version of the modified power method. This new approach becomes of paramount importance in the study of chained oscillators, as it represents the simplest algorithm currently available for calculating the second eigenvalues of systems with a continuous phase space. Our work outlined new approaches for characterizing the performance of Monte Carlo algorithms and determining the second eigenvalue of a very general class of matrices and kernels that can be applied throughout the physical sciences.
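The deterministic side of this idea can be sketched for a tiny chain: deflate the transition matrix by its known dominant mode (eigenvalue 1, with the stationary distribution as left eigenvector) and power-iterate the remainder to recover the second eigenvalue. This is a generic illustration, not the paper's modified power method; the two-state chain is a made-up example with a known answer, eigenvalues {1, 1 - a - b}.

```python
import numpy as np

def second_eigenvalue(P, iters=500, seed=0):
    """Subdominant eigenvalue magnitude of a row-stochastic matrix P,
    via power iteration on the deflated matrix P - 1 pi^T."""
    n = len(P)
    # Stationary distribution pi (dominant left eigenvector) by iteration
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        pi = pi @ P
    B = P - np.outer(np.ones(n), pi)   # remove the lambda_1 = 1 mode
    v = np.random.default_rng(seed).standard_normal(n)
    lam = 0.0
    for _ in range(iters):
        w = B @ v
        lam = np.linalg.norm(w) / np.linalg.norm(v)
        v = w / np.linalg.norm(w)
    return lam

# Two-state chain with transition rates a and b: eigenvalues {1, 1 - a - b}
a, b = 0.3, 0.1
P = np.array([[1 - a, a], [b, 1 - b]])
lam2 = second_eigenvalue(P)
```

The second eigenvalue controls how fast the chain forgets its initial state (errors decay like lam2**t), which is why it is a sharper convergence diagnostic than the acceptance ratio.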

Negative Flux Fixups for Discontinuous Finite Element SN Transport

Steven Hamilton, Emory University

**Practicum Year:**2008

**Practicum Supervisor:**James Warsa, Computer, Computational and Statistical Sciences, Los Alamos National Laboratory

My practicum project involved developing numerical algorithms to remedy the occurrence of negative solutions which arise in solving the radiation transport equation. The true solution to the radiation transport equation is always positive, and so artificial negative solutions are extremely undesirable as they can lead to instabilities in various solution strategies. By adding a non-linear "fixup" to an existing transport solver, the goal is to produce an output which satisfies known physical properties of the true solution.

Wavelet Transform techniques in Multigrid and Asynchronous Fast Adaptive Composite (AFAC) algorithms

Zlatan Aksamija, University of Illinois at Urbana-Champaign

**Practicum Year:**2007

**Practicum Supervisor:**Bobby Philip, Technical Staff Member, T7 Theory, Simulation, and Computation Directorate, Los Alamos National Laboratory

This project focused on using the wavelet transform techniques to decouple coarse and fine components of a solution as part of coarsen and refine operators in multigrid and composite grid solvers. The Wavelet transform has advantages in terms of flexibility, computational efficiency, and power to resolve different scales of a solution which allow it to be used effectively in multigrid-based algorithms. We were able to show that the perfect reconstruction properties of the wavelet transform make it possible to accomplish asynchronous algorithms with excellent convergence properties. This is especially useful for large-scale parallel solvers since various scales of a problem are effectively decoupled and can be iterated on independently and in parallel.
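The perfect-reconstruction property that enables this decoupling is easy to demonstrate with the simplest wavelet, a one-level Haar transform (a minimal sketch, not the solver's actual transform): the coarse and fine components are separated, can be iterated on independently, and recombine exactly.

```python
import numpy as np

def haar_forward(x):
    """One level of the orthonormal Haar transform: pairwise
    coarse averages and fine details."""
    x = np.asarray(x, dtype=float)
    coarse = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return coarse, detail

def haar_inverse(coarse, detail):
    """Exact reconstruction from the two decoupled scales."""
    x = np.empty(2 * len(coarse))
    x[0::2] = (coarse + detail) / np.sqrt(2)
    x[1::2] = (coarse - detail) / np.sqrt(2)
    return x

# The coarse half plays the role of the multigrid coarsen operator;
# the detail half carries the fine-scale correction.
signal = np.arange(8.0)
c, d = haar_forward(signal)
recovered = haar_inverse(c, d)
```

Because the transform is orthonormal, no information crosses between the two scales, which is the property that lets the coarse and fine iterations proceed asynchronously without degrading convergence.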

Comparison of a Rule-Based and a Traditional Pathway Model of a Signal Transduction System

Jordan Atlas, Cornell University

**Practicum Year:**2007

**Practicum Supervisor:**James Faeder, Technical Staff Member, Theoretical Biology and Biophysics Group (T-10), Los Alamos National Laboratory

For the summer practicum I investigated parameter estimation for rule-based models of biological systems with James Faeder at Los Alamos National Laboratory. A rule-based model is one where a set of generalized reactions (i.e. rules) specify the features of proteins that are required for or affected by a particular protein-protein interaction. Parameter estimation studies in rule-based models are important because it is unclear to what degree the predictions of rule-based models can be constrained by experimental data. Therefore, a better understanding of these models and their parameter sensitivity could lead to better predictions in models of complex biological networks.
Dr. Faeder's group has developed the BioNetGen software for generating sets of chemical species and reactions from sets of rules. The overall goal of this project was to examine the extent to which parameter estimates for rule-based models can be refined based on qualitative observations. In particular, we would like to determine what kinds of information have the largest effect on reducing the size of the feasible parameter space, by which we mean the range of parameters over which the model predictions remain consistent with the data, and the magnitude of the uncertainty in the model predictions.

Adaptive Mesh Refinement for Modeling Magneto-Hydrodynamic Plasmas

Mark Berrill, Colorado State University

**Practicum Year:**2007

**Practicum Supervisor:**Bobby Philip, Technical Staff Member, T-7, Mathematical Modelling and Analysis, Los Alamos National Laboratory

We worked on modifying a magneto-hydrodynamic code called pixie3d to include adaptive mesh refinement. Pixie3d is a plasma code that is used to model several phenomena, including magnetic reconnection in tokamaks. Because of the difference in length scales between the feature size of the current sheets (which must be resolved) and the size of the plasma, it is impossible to use a single fixed grid to cover the entire domain in 3D. The project involved merging the code pixie3d with a software package called SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) to allow for different grid resolutions over different parts of the domain. Additionally, these resolutions can be changed and adapted as the problem evolves.

Multilevel upscaling for multiphase porous flow.

Ethan Coon, Columbia University

**Practicum Year:**2006

**Practicum Supervisor:**David Moulton, Staff Researcher, T-7 Mathematical Modeling and Analysis, Los Alamos National Laboratory

Many geophysical applications, including porous flow, require the resolution of fine scale features and parameters on coarse scale models. Simply averaging out the fine scale often loses important information about small scale features such as interfaces that greatly change the global dynamics. Therefore, we have worked to derive and apply upscaling methods that more accurately represent the effects of fine scale data on coarse scale simulations.

Modeling genetic regulation as a highly canalized boolean network

Jeffrey Drocco, Princeton University

**Practicum Year:**2006

**Practicum Supervisor:**Cynthia Reichhardt, T-12, Los Alamos National Laboratory

This project seeks to understand in a very basic way how organisms balance stability of phenotype with genetic variation by modeling genes as binary switches which interact via boolean functions. Theoretical studies suggest that any network which can plausibly model this phenomenon must be of the highly canalized type, but few further details are known.

Parameter Estimation in a Kinetic Model of the marRAB Operon in Escherichia coli

David Markowitz, Princeton University

**Practicum Year:**2006

**Practicum Supervisor:**Michael Wall, Team Leader, Computer and Computational Sciences & Bioscience, Los Alamos National Laboratory

The objective of this project was to build a kinetic model of an activatable operon, marRAB, in the E. coli bacterium. We explored the relationship between free parameters in this model and their effects on transcriptional output. By matching simulated expression profiles to experimental data, we were able to constrain free parameters and make experimentally testable predictions for previously unknown equilibrium constants in this system.

Accurate and Robust Monte Carlo-Diffusion Interfaces

Gregory Davidson, University of Michigan

**Practicum Year:**2005

**Practicum Supervisor:**Jeff Densmore, Staff Scientist, CCS4, Los Alamos National Laboratory

Monte Carlo is a technique used for solving the radiative transfer equations computationally. The diffusion equation is an approximation to the radiative transfer equation that is accurate in certain (diffusive) regimes. Discrete Diffusion Monte Carlo is a computational technique whereby a discrete diffusion equation is solved using a particle-based Monte Carlo technique in those regimes where the diffusion approximation is accurate, and the radiative transfer equation using traditional Monte Carlo is used elsewhere.
This project was concerned with accurately interfacing the Monte Carlo and the discrete diffusion domains. First, we investigated an emissivity-preserving interface. Emissivity must be preserved to ensure that radiation penetrates into diffusive regions accurately. We derived an emissivity-preserving scheme that correctly allowed radiation to penetrate into diffusive regions. Secondly, we investigated asymptotically-correct angular distributions for diffusion particles leaking out of diffusive regions as well as Monte Carlo particles that are not allowed to penetrate into diffusive regions. Previous methods always used an isotropic angular distribution, which is generally not correct.

A Hybrid Monte Carlo-Deterministic Transport Method for Efficient Global Transport Solutions

Allan Wollaber, University of Michigan

**Practicum Year:**2005

**Practicum Supervisor:**Todd Urbatsch, Dr., Computer and Computational Sciences (CCS-4), Los Alamos National Laboratory

We introduce a new hybrid transport method for solving global neutral particle problems. In the method, one generates an estimate of the global solution using an inexpensive deterministic method and calculates the multiplicative correction to this solution using known Monte Carlo techniques. We demonstrate the method on 1D time dependent and steady state neutron transport problems, and show that it is very competitive for problems in which there are large gradients in the flux (for example, wavefronts and deep penetration problems).

Geometric Monodromy & Variational Integrators

Nawaf Bou-Rabee, California Institute of Technology

**Practicum Year:**2004

**Practicum Supervisor:**Darryl D. Holm, Lab fellow, Center for Nonlinear Studies T-7, Los Alamos National Laboratory

This summer involved extending ideas from recent progress in geometric monodromy and variational integration theory to answer fundamental questions on the global behavior of dynamical systems. Geometric monodromy is a powerful new way to look at the global phase space of a dynamical system (see I. Stewart's "Quantizing the classical cat", Nature 430: 731-732, [2004]). Darryl guided me through this research area this summer. Variational integration is a numerical technique that (to machine roundoff) discretely preserves symmetries and the symplectic structure of a dynamical system. We were primarily concerned with understanding the variational structure of some new integration methods that have excellent properties (see P. Krysl's "On Endowing an Explicit Time Integrator for Rotational Dynamics of Rigid Bodies with Conservation Properties" submitted to I. J. for Numerical Methods in Eng'g. [2004]).

Robustness in Genetic Circuits: Clustering of Functional Responses

Mary Dunlop, California Institute of Technology

**Practicum Year:**2004

**Practicum Supervisor:**Michael Wall, Technical Staff Member, Computer and Computational Sciences, Los Alamos National Laboratory

We all know about DNA - the double helix that encodes genetic information - but how is that information processed, how is it used in the cell? The information encoded in a strand of DNA is copied and then translated into a protein that does something useful for the cell. For example, the protein may be an enzyme that breaks down sugars. Gene expression - whether proteins are made from the DNA or not - can be turned on and off in response to external and internal stimuli.
Feedback and feed forward loops are used to regulate the gene expression process. These control elements ensure that genes can be expressed quickly and accurately in response to stimuli.
There are certain characteristic patterns that occur over and over in genetic regulatory networks throughout different parts of the cell. Why are these network motifs so common? What is it about their structure that favors them over other network configurations? If we know the structure of a network, can we determine its function?

Singular Solutions to a Partial Differential Equation for Computer Imaging

Samuel Stechmann, New York University

**Practicum Year:**2004

**Practicum Supervisor:**Darryl Holm, Laboratory Fellow, T-7, Los Alamos National Laboratory

In computer imaging, a partial differential equation (PDE) called "EPDiff" arises in problems of deforming one image into another. The equation has been studied in Euclidean space, and some researchers have suggested that more complicated spaces could also be applicable for computer imaging problems. As a first step to understanding EPDiff on non-Euclidean spaces, we studied it on two simple non-Euclidean spaces: the sphere and hyperbolic space. The solutions we focused on were singular solutions which have a peak, giving them a jump in their first derivative and making them difficult to handle numerically.

Developing an Efficient Algorithm for Parallel MCNPX Kcode Calculations

Nathan Carstens, Massachusetts Institute of Technology

**Practicum Year:**2003

**Practicum Supervisor:**Gregg McKinney, Technical Staff Member, MCNPX (D10), Los Alamos National Laboratory

My research at Los Alamos National Laboratory focused on improving the efficiency of MCNPX parallel kcode calculations while exactly tracking the sequential code. MCNP is a large radiation transport code with about 3,000 users, probably making it the most widely used nuclear science code. While MCNP performs well in parallel source calculations, parallel kcode calculations were strongly limited by significant communication requirements during the calculation.
My new algorithm eliminated the vast majority of communication during kcode calculations, allowing more efficient utilization of large parallel machines. Preliminary test results show an order-of-magnitude speedup on a 60-node cluster when comparing the new and old code. The new code will be incorporated into MCNPX as the default kcode algorithm in December 2003.

Development of an object-oriented, parallel, fully-implicit, finite-volume code for modeling multi-phase subsurface flows

Richard Mills, College of William and Mary

**Practicum Year:**2003

**Practicum Supervisor:**Peter Lichtner, EES-6, Los Alamos National Laboratory

The capability to model multi-phase, reactive subsurface flows in high resolution is important to many environmental missions of national interest. Effective models are necessary for such tasks as environmental remediation of contaminated sites or preventing contamination of important aquifers.
I have worked with Peter Lichtner of Los Alamos National Lab to develop a parallel subsurface flow code, PFLOW, to interface with his existing parallel reactive transport code, PTRAN. Coupled together, these codes will be used to study subsurface reactive flow and transport problems at very high resolutions using parallel computers such as the 1024 processor QSC machine at LANL.

Numerical Modeling of Binary Solidification

Nathaniel Morgan, Georgia Institute of Technology

**Practicum Year:**2003

**Practicum Supervisor:**Brian VanderHeyden, Dr., Theoretical, Los Alamos National Laboratory

My practicum research at Los Alamos National Laboratory focused on computational modeling of binary alloy solidification using a multi-field approach combined with finite-volume discretization methods. In binary alloy solidification some unique flow patterns exist whose physical cause is still unknown. The objective of my research was to expand the capabilities of a new multi-physics code for the purpose of better understanding the fluid dynamics associated with binary alloy solidification.

Extension of the "Data Dependent Hypothesis Classes" framework to Regression Problems.

Michael Wu, University of California, Berkeley

**Practicum Year:**2003

**Practicum Supervisor:**Don R. Hush, Dr., Group CCS-3, Los Alamos National Laboratory

This is a theoretical and mathematical practicum which involves proving fundamental theorems for uniform convergence of the empirical risk to the expected risk over data-dependent function classes. In traditional VC theory, structural risk minimization uses function classes that are independent of the data. Without specifying the hierarchy of nested hypothesis classes, a learning algorithm could spend considerable resources searching within hypothesis classes that do not contain a good approximation of the target function. Using data-dependent function classes is a general method for incorporating our bias and prior knowledge obtained from the training data. The goal of this practicum is to prove a uniform law of large numbers over data-dependent hypothesis classes for regression problems. This establishes the existence of a consistent learning algorithm over data-dependent hypothesis classes for regression, which can significantly reduce computational load and possibly give a much better confidence bound in the small-sample limit.

SPH Code Validation and Addition of Particle Splitting

Marcelo Alvarez, University of Texas

**Practicum Year:**2002

**Practicum Supervisor:**Michael Warren, Staff Member, T-6, Los Alamos National Laboratory

The smoothed particle hydrodynamics (SPH) method is a particle-based gridless Lagrangian method for simulating astrophysical flows. It is very versatile because it naturally allows for adaptive spatial resolution and is free from the complications imposed by solving the hydrodynamic equations on a grid. Recently, Mike Warren and Chris Fryer have begun an exciting collaboration in which they are applying the SPH method to the simulation of core-collapse supernovae, a very computationally demanding problem. This collaboration has already led to the first fully three-dimensional simulation of such a supernova, giving new insight into the puzzle of how these stars explode and how they lead to the remnants we observe today. This was only made possible by the development of an efficient, parallel SPH code and access to some of the world's fastest computers.
My practicum work consisted of getting to know this SPH code, understanding the algorithm behind it, testing it on problems with known solutions, and trying to improve it by adding new twists to the existing algorithm. In particular, I became involved in adaptive particle splitting, a technique which is similar in spirit to adaptive mesh refinement (AMR). In adaptive particle splitting, SPH particles are split in regions where more resolution is desired, allowing a significant increase in the dynamic range or resolution of the calculation, while only modestly increasing the computing time. In future work, I hope to apply the particle splitting method to problems ranging from supernova explosions to the formation of large-scale structure in the universe.
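A minimal sketch of the splitting step (hypothetical code, not the practicum's implementation; the ring placement and smoothing-length rule are illustrative choices): a flagged parent particle is replaced by lighter children arranged symmetrically, so that mass, momentum, and the center of mass are conserved while local resolution increases.

```python
# Hypothetical adaptive particle splitting in 2D: replace one SPH particle
# with n_children lighter ones on a symmetric ring around the parent.

import numpy as np

def split_particle(pos, vel, mass, h, n_children=4):
    """Return positions, velocities, masses, smoothing lengths of children."""
    angles = 2 * np.pi * np.arange(n_children) / n_children
    offsets = 0.5 * h * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    child_pos = pos + offsets                  # children ring the parent
    child_vel = np.tile(vel, (n_children, 1))  # inherit parent velocity
    child_mass = np.full(n_children, mass / n_children)   # conserve mass
    child_h = h / n_children**0.5              # shrink smoothing length (2D)
    return child_pos, child_vel, child_mass, child_h

pos = np.array([1.0, 2.0]); vel = np.array([0.1, -0.2])
cp, cv, cm, ch = split_particle(pos, vel, mass=4.0, h=0.2)
```

Because the ring is symmetric and each child carries an equal share of the mass, total mass, total momentum, and the center of mass all match the parent's to round-off.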

Method for modeling receptor-ligand interaction without a specified aggregation length.

Annette Evangelisti, University of New Mexico

**Practicum Year:**2002

**Practicum Supervisor:**William S. Hlavacek, Technical Staff Member, Theoretical Biology and Biophysics, T-10, Los Alamos National Laboratory

In this project we developed a method for modeling receptor-ligand interaction that does not restrict the number of interactions or the length of aggregation. Here, this method is applied to a bi-valent receptor and bi-valent ligand but is easily extended to the multi-valent receptor and ligand case. The method utilizes several previously published algorithms to show that the problem is tractable.

Investigation of Excited States of fac-Rhenium Tris Carbonyl Complexes Through DFT and TDDFT calculations

Nouvelle Gebhart, University of New Mexico

**Practicum Year:**2002

**Practicum Supervisor:**Jeff Hay, Staff Member/Laboratory Fellow, Theoretical Chemistry Group, T-12, Los Alamos National Laboratory

This project is a collaboration between physical experimentalists and computational investigators studying six-coordinate rhenium tris carbonyl complexes. These complexes are being investigated for their potential use in LED devices. The lowest-lying excited state of these complexes has been shown to exist in four different configurations: MLCT (metal-to-ligand charge transfer), LLCT (ligand-to-ligand charge transfer), sigma-to-pi* charge transfer, and redox-separated states. The lowest-lying excited state is important because it influences the non-radiative relaxation of the molecule to the ground state, which in turn affects the viability of the molecule for use in an LED. Currently we are investigating how changing two of the ligation sites to the metal will influence this excited state.

Construction of Adaptive Mesh Transport Discretizations that Meet the Thick Diffusion Limit

Heath Hanshaw, University of Michigan

**Practicum Year:**2002

**Practicum Supervisor:**Jim Morel, Transport Methods Group (CCS-4), Computer and Computational Sciences Division, Los Alamos National Laboratory

Radiation transport calculations are generally large in scale, and, when coupled to hydrodynamics calculations, may constitute the overwhelming majority of computational time. Currently, the most effective transport discretization scheme is a discontinuous finite element method (DFEM) developed over the past ten years that meets the thick diffusion limit and can be accelerated with a diffusion preconditioner. However, this scheme does not couple well with hydrodynamics meshes, and in particular, has not been successfully adapted to work on a Cartesian adaptive mesh (and still meet the thick diffusion limit). The goal of this project is to develop a transport discretization scheme that is "simpler" than the DFEM scheme so that it can work on an adaptive mesh, but that still meets the thick diffusion limit and can be effectively accelerated with a diffusion preconditioner.

Quasi-chemical Approximation Applied to an Oil/Water/Surfactant System.

Joyce Noah-Vanhoucke, Stanford University

**Practicum Year:**2002

**Practicum Supervisor:**Lawrence R. Pratt, Technical Staff Member, T-12: Theoretical Chemistry and Molecular Physics, Los Alamos National Laboratory

Using the quasi-chemical approximation, we investigated a system of surfactant chains in a solution of oil and water units. The system was modeled as a 2-dimensional Ising system on a lattice. The goal of the project was to come up with a simple theory to obtain thermodynamic information about the system, and to generate a phase diagram of the system.
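As a point of reference, the underlying lattice model is easy to simulate directly, and the quasi-chemical approximation can be checked against such Monte Carlo estimates. A minimal sketch (illustrative only, not the practicum's calculation; spin +1 stands for an oil unit, −1 for water, with no explicit surfactant species):

```python
# Metropolis Monte Carlo for a 2D Ising lattice with periodic boundaries.
import math
import random

def metropolis(L=16, beta=0.6, sweeps=200, seed=1):
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2 * s[i][j] * nb                 # energy cost of flipping s[i][j]
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i][j] = -s[i][j]
    return s

lattice = metropolis()
# Fraction of unlike (oil-water) nearest-neighbor pairs: the pair statistic
# a quasi-chemical treatment estimates analytically.
L = len(lattice)
unlike = sum(lattice[i][j] != lattice[(i + 1) % L][j]
             for i in range(L) for j in range(L))
unlike += sum(lattice[i][j] != lattice[i][(j + 1) % L]
              for i in range(L) for j in range(L))
frac_unlike = unlike / (2 * L * L)
```

At this low temperature the lattice phase-separates, so the unlike-pair fraction falls well below the random-mixing value of 0.5; comparing such pair fractions against the analytic theory is one way to validate the approximation.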

Coarse Grained Models of Deformation and Phase Transitions

Ryan Elliott, University of Michigan

**Practicum Year:**2001

**Practicum Supervisor:**Avadh Saxena, Staff Member, T-11 Condensed Matter and Statistical Physics, Los Alamos National Laboratory

Parallel Reactive Transport Modeling Using the PETSc Library

Glenn Hammond, University of Illinois at Urbana-Champaign

**Practicum Year:**2000

**Practicum Supervisor:**Peter Lichtner, Staff Scientist, Earth and Environmental Science Division (EES-6), Los Alamos National Laboratory

During my practicum, I developed a fully implicit, reactive transport model using parallel data structures and functions/subroutines from the PETSc (Portable, Extensible Toolkit for Scientific Computation) library developed at Argonne National Laboratory (http://www.mcs.anl.gov/petsc). The purpose of this research was two-fold: (1) to become familiar with PETSc data structures and functionality and (2) to experiment with reactive transport in a parallel-computation environment. I will use the experience gained during the practicum to parallelize the existing reactive transport code, FLOTRAN, developed by Peter Lichtner at Los Alamos National Laboratory.

Modeling of HIV Quasispecies Dynamics and Treatment Strategies

Lee Worden, Princeton University

**Practicum Year:**2000

**Practicum Supervisor:**Alan Perelson, Group Leader, Theoretical Biology and Biophysics, Los Alamos National Laboratory

We used computer models in conjunction with HIV sequence and drug-resistance data to estimate HIV quasispecies structure and to predict the effects of treatment strategies on HIV population dynamics and evolution.

Multigrid on an Irregular Domain

Jon Wilkening, University of California, Berkeley

**Practicum Year:**1999

**Practicum Supervisor:**Pieter Swart, Los Alamos National Laboratory

We developed a multigrid approach to solving elliptic boundary value problems using a uniform grid; the region of interest is multiply-connected, and the boundary does not line up with grid points.
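The core multigrid pattern is easy to sketch on a regular 1D problem (illustrative code only; the practicum's irregular, multiply-connected domain adds the hard part of handling grid points cut by the boundary): smooth the error, restrict the residual to a coarser grid, solve there, interpolate the correction back, and smooth again.

```python
# Two-grid V-cycle for -u'' = f on [0,1] with homogeneous Dirichlet BCs.
import numpy as np

def jacobi(u, f, h, nsweeps, w=2/3):
    """Weighted-Jacobi smoothing sweeps on the interior points."""
    for _ in range(nsweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h, 3)                                     # pre-smooth
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2  # residual
    rc = r[::2].copy()                                         # restrict
    n = len(rc)
    A = (np.diag(2 * np.ones(n - 2)) - np.diag(np.ones(n - 3), 1)
         - np.diag(np.ones(n - 3), -1)) / (2 * h)**2           # coarse operator
    ec = np.zeros(n)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])                    # coarse solve
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolong
    return jacobi(u + e, f, h, 3)                              # post-smooth

n = 65; h = 1 / (n - 1); x = np.linspace(0, 1, n)
f = np.pi**2 * np.sin(np.pi * x)        # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
```

A handful of cycles drives the error down to the discretization level; the design point is that smoothing kills high-frequency error while the coarse grid handles the smooth remainder.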

Geometrically Conforming Weight Functions in Moving Least Squares Particle Hydrodynamics (MLSPH)

John Dolbow, Northwestern University

**Practicum Year:**1998

**Practicum Supervisor:**Dr. Gary Diltz, Hydrodynamics Methods, Los Alamos National Laboratory

Moving Least Squares Particle Hydrodynamics (MLSPH) is a new meshless Lagrangian method, adapted from SPH in the XHM group at LANL. One of the central problems with particle codes that model problems in continuum mechanics arises from the geometric representation of the particles: it leads to numerical fracture, in which the simulation breaks apart without the presence of a physical fracture model. The representation also prevents these methods from efficiently modeling thin plates and shells. My project involved modeling the particles as arbitrary shapes and then conforming these shapes to the material deformation. I incorporated the new technique into the existing MLSPH particle code in XHM and demonstrated the improvements with several example problems.

AMRH

Matthew Farthing, University of North Carolina

**Practicum Year:**1998

**Practicum Supervisor:**Dr. Gary Diltz, Los Alamos National Laboratory

AMRH is an object oriented adaptive mesh refinement class library. It is designed to work both in serial and parallel environments. Currently, it provides a data parallel model, but work is being done to include task parallelism.

Integrable vs. Nonintegrable Geodesic Soliton Behavior

Oliver Fringer, Stanford University

**Practicum Year:**1998

**Practicum Supervisor:**Dr. Darryl Holm, Los Alamos National Laboratory

We computed pseudospectral solutions to the family of PDEs that admit delta functions as solutions; convolved with the desired Green's function shape, these become solitons. Integrable or not, the equations support elastic collisions and sorting by height among the resulting "pulson" shapes. These pulsons exist on an invariant manifold whose composition is determined by an arbitrary initial condition.

Development of a Code for the P-1 Equations of Radiation Hydrodynamics

Jeffrey Hittinger, University of Michigan

**Practicum Year:**1997

**Practicum Supervisor:**Dr. Robert Lowrie, Scientific Computing Group, Los Alamos National Laboratory

Currently, many simulations in radiation hydrodynamics use the simple radiation-diffusion model. A less simplified, hyperbolic model for radiation hydrodynamics is the P-1 system of equations. It is of interest to develop a simulation code based on the latter model, but numerically this is challenging, since the P-1 system can have very disparate wave speeds and very stiff source terms. The project was to develop the necessary algorithms for the P-1 system and to implement them for realistic 2D and 3D simulations.
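For reference, a common grey (frequency-integrated) form of the P-1 system is sketched below; the notation is assumed here and is not necessarily the exact closure used in the practicum:

```latex
\frac{\partial E}{\partial t} + \nabla \cdot \mathbf{F}
  = c\,\sigma_a \left( a T^4 - E \right),
\qquad
\frac{1}{c}\frac{\partial \mathbf{F}}{\partial t} + \frac{c}{3}\nabla E
  = -\,\sigma_t\,\mathbf{F},
```

with $E$ the radiation energy density and $\mathbf{F}$ the radiation flux. The radiation waves travel at $c/\sqrt{3}$, far faster than the material sound speed, and in optically thick regions the opacities $\sigma$ make the right-hand sides stiff — the two numerical difficulties the abstract names.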

Magnetohydrodynamic Modelling Using the POOMA Framework

Mayya Tokman, California Institute of Technology

**Practicum Year:**1997

**Practicum Supervisor:**Dr. John Reynders, Advanced Computing Laboratory, Los Alamos National Laboratory

I developed a magnetohydrodynamic model and implemented a code based on it. The project involved extensive use of the POOMA (Parallel Object-Oriented Methods and Applications) Framework.

Long-time Simulations of Large-Aspect-Ratio Reaction-Diffusion Systems

Scott Zoldi, Duke University

**Practicum Year:**1997

**Practicum Supervisor:**Dr. John Pearson, Computational Methods, Los Alamos National Laboratory

This study addressed the computationally difficult problem of simulating the dynamics of reacting chemicals in parameter regimes where, over long times, the dynamics asymptote to characteristic states and patterns. I developed efficient sparse solvers based on GMRES to gain stability in the numerical algorithm and to achieve long integration times compared to explicit or time-splitting methods.
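The payoff of an implicit method is that the time step is no longer capped by the diffusive stability limit; each step instead requires a sparse linear solve, which is where a Krylov method like GMRES comes in. A hypothetical sketch in SciPy (not the practicum code; a single backward-Euler step of a 1D diffusion problem with invented parameters):

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import gmres

# One backward-Euler step of u_t = D u_xx, homogeneous Dirichlet BCs.
n, dt, D = 100, 0.01, 1.0
h = 1.0 / (n + 1)
Lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
A = identity(n) - dt * D * Lap        # (I - dt*D*Lap) u_new = u_old

x = np.linspace(h, 1 - h, n)
u_old = np.sin(np.pi * x)
u_new, info = gmres(A, u_old, atol=1e-8)   # info == 0 on convergence
```

Here `dt` is roughly 200 times the explicit stability limit `h**2 / 2`, illustrating why implicit stepping enables the long integration times the study needed; a production solver would also add a preconditioner.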

Dynamic Fracture Investigations in 2D Brittle Amorphous Systems via Massively Parallel Computation

Michael Falk, University of California, Santa Barbara

**Practicum Year:**1996

**Practicum Supervisor:**Dr. Peter Lomdahl, Theoretical Division, Los Alamos National Laboratory

I carried out preliminary work to simulate brittle fracture in amorphous materials, including calculation of accurate stress values.

Visualization of Large Scale Models Using the SCIRun Environment

Steven Parker, University of Utah

**Practicum Year:**1996

**Practicum Supervisor:**Dr. Chuck Hansen, Advanced Computing Laboratory, Los Alamos National Laboratory

SCIRun, a computational modeling, simulation and visualization environment which I am developing at the University of Utah, was applied to new problems of interest to LANL. I used SCIRun to visualize the vector fields in a large-scale simulation of global ocean currents.

Effects of Constraints on Transition Rates in a Model System

James Phillips, University of Illinois at Urbana-Champaign

**Practicum Year:**1996

**Practicum Supervisor:**Dr. Niels Gronbech-Jensen, Theoretical Division, Los Alamos National Laboratory

To study the possible effects of highly constrained (torsion-angle) dynamics on simulations of proteins, a one dimensional periodic lattice of unit masses linked by harmonic and bistable springs was modeled. In this system, the number of degrees of freedom could be varied while maintaining a constant rigidity. It was found that the transition rate varied monotonically with the number of degrees of freedom in the system. The degree of coupling to the Langevin dynamics heat bath had little effect on the transition rate, but the addition of local degrees of freedom to a constrained system was effective in restoring transition rates.

Object Oriented Software for PDEs using Overlapping Grids and Serial and Parallel Array Classes for Scientific Computation in C++

Scott Stanley, University of California, San Diego

**Practicum Year:**1996

**Practicum Supervisor:**Dr. David Brown, Scientific Computing, Information and Communications, Los Alamos National Laboratory

This project concentrated on the development of C++ class libraries that can be used to develop programs in C++ for solving partial differential equations on structured grids in complicated domains. The two main portions of this work are the two separate libraries, A++/P++ and Overture. A++/P++ is an array class library for C++, while Overture is a set of class libraries for the development of overlapping-grid PDE solvers.

The Dynamics of Collapse of a Cavitation Bubble near a Boundary

Gordon Hogenson, University of Washington

**Practicum Year:**1995

**Practicum Supervisor:**Dr. Gary Doolan, Complex Systems Group, Los Alamos National Laboratory

The goal of my practicum work was to simulate, using the lattice Boltzmann method, the dynamics of collapse of a nonequilibrium vapor bubble near a solid surface. Such a collapse produces a high-pressure jet which impinges upon the surface with high velocity and is the cause of costly 'cavitation damage' to submarine propeller blades. We used a variant of the lattice Boltzmann method which reproduces a fluid with a non-ideal-gas equation of state, and so is capable of reproducing the liquid-vapor phase transition. This study constitutes the first simulation of a collapsing bubble which treats the liquid and gas phases implicitly, as opposed to conventional fluid dynamics, in which the liquid-vapor boundary is treated explicitly via a front-tracking algorithm.

Gyrokinetic Plasma Simulations Using Object Oriented Programming

Edward Chao, Princeton University

**Practicum Year:**1994

**Practicum Supervisor:**Dr. John Reynders, Advanced Computing Laboratory, Los Alamos National Laboratory

Gyrokinetic theory establishes a basis for analyzing plasma behavior when one is interested in the effects of high-wave-number perturbations on the plasma but not in high-frequency phenomena. Computer simulations have successfully utilized the theory. However, the complexity of today's plasma simulation codes and the frequency of computer hardware improvements make the maintenance of these codes difficult. This is the motivation for implementing the object-oriented programming paradigm in current gyrokinetic plasma simulations.

Grand Challenge Molecular Dynamics

Timothy Germann, Harvard University

**Practicum Year:**1994

**Practicum Supervisor:**Dr. Richard LeSar, Theoretical Division/Center for Materials Science, Los Alamos National Laboratory

Researchers in T-11 (Statistical Physics and Condensed Matter Theory) have developed a massively parallel molecular dynamics code capable of simulating up to 600 million atoms (Computers in Physics, Jul/Aug 1993, cover and pp. 382-3). This can be used to model materials, e.g. fracture dynamics.

Topological Characterization of the Strange Attractor of Low-Dimensional Chaotic Systems.

Pete Wyckoff, Massachusetts Institute of Technology

**Practicum Year:**1994

**Practicum Supervisor:**Dr. Nick Tufillaro, Center for Nonlinear Studies, Los Alamos National Laboratory

The project consists of searching for various invariants in the system and relating them to parameter changes.

Composable Finite Difference Algorithms for Vector Operators

Mark DiBattista, Columbia University

**Practicum Year:**1993

**Practicum Supervisor:**Dr. Mac Hyman, Center for Nonlinear Studies, Los Alamos National Laboratory

Finite difference approximants to the vector operators grad, curl, and div generally lose desirable 'mimetic' properties when composed to create higher-order operators. Borrowing some ideas from the more sophisticated finite-volume theory, a consistent set of algorithms is devised that temporarily maps values to grid elements and faces and, when composed, preserves those previously lost properties.
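A small self-contained check (hypothetical code, not the practicum's library) of the kind of property at stake: if a scalar lives on grid nodes, its "curl" on cell faces, and the divergence at cell centers, the composed operator div∘curl vanishes to round-off for any input, mimicking the continuum identity.

```python
# 2D staggered-grid check that the discrete divergence of a discrete curl
# is zero: u = d(psi)/dy on x-faces, v = -d(psi)/dx on y-faces.
import numpy as np

rng = np.random.default_rng(0)
nx, ny, h = 8, 8, 1.0
psi = rng.standard_normal((nx + 1, ny + 1))   # scalar at grid nodes

u = (psi[:, 1:] - psi[:, :-1]) / h            # x-face values, shape (nx+1, ny)
v = -(psi[1:, :] - psi[:-1, :]) / h           # y-face values, shape (nx, ny+1)

# Cell-centered divergence assembled from face fluxes.
div = (u[1:, :] - u[:-1, :]) / h + (v[:, 1:] - v[:, :-1]) / h   # (nx, ny)
```

The four node differences entering each cell cancel pairwise by construction, which is exactly the structure that naive composition of standard centered stencils on a single grid fails to preserve.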

The Numerical Tokamak Grand Challenge

William Humphrey, University of Illinois at Urbana-Champaign

**Practicum Year:**1993

**Practicum Supervisor:**Dr. John Reynders, Advanced Computing Laboratory, Los Alamos National Laboratory

I designed and implemented an object-oriented particle-in-cell class library which ran on a variety of distributed platforms.

Long-Time Models for Ocean Circulation Oceans

David Ropp, University of Arizona

**Practicum Year:**1993

**Practicum Supervisor:**Dr. Mac Hyman, Center for Nonlinear Studies, Los Alamos National Laboratory

Oceans exhibit behavior on a wide range of both spatial and temporal scales. New models of ocean circulation have focused on capturing the long-time dynamics of the system, while also resolving the spatial scales containing much of the system's energy. The goal is to have a model that accurately gives the general circulation patterns and that could be incorporated into global climate models or used to start up more detailed models.

Spectral Shallow Water Equations Modelling: Comparisons between Serial and Parallel Computing

Eric Williford, Florida State University

**Practicum Year:**1993

**Practicum Supervisor:**Dr. Robert Malone, , Advanced Computing Laboratory, Los Alamos National Laboratory

A spectral shallow water model was tested on various machines, including LANL's CM-5. The serial version of the code was adapted to the parallel environment.