Los Alamos National Laboratory


Snow on Sea Ice in the ACME climate model
Kelly Kochanski, University of Colorado
Practicum Year: 2017
Practicum Supervisor: Elizabeth Hunke, Deputy Group Leader, Theoretical Division (T-3), Los Alamos National Laboratory
The ACME climate model is the Department of Energy's next-generation Earth System Model. I developed the snow thermodynamics of MPAS-seaice, the sea ice component of ACME, which is based on the Los Alamos Sea Ice Model (CICE) and widely used in Earth System models and shipping forecasts.
Accelerating molecular simulations of lipid bilayers
Sean Marks, University of Pennsylvania
Practicum Year: 2017
Practicum Supervisor: Angel Garcia, Director, Center for Nonlinear Studies (CNLS), Los Alamos National Laboratory
Under the direction of Dr. Angel Garcia of the CNLS at Los Alamos National Laboratory (LANL), I studied a new molecular dynamics (MD) method for enhancing simulations of lipid bilayers. Such systems are of great interest in the physics of cell signaling, but possess exceptionally long time scales and are therefore very challenging to study properly. By applying the method of Replica Exchange with Solute Tempering (REST), we were able to converge our system’s statistics roughly an order of magnitude faster than with conventional MD.
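The exchange step that drives methods of this family can be sketched with the standard replica-exchange Metropolis criterion; in REST the energies entering the criterion are the solute-scaled terms, so the generic parallel-tempering form below is a minimal sketch of the idea, not the group's implementation:

```python
import math
import random

def swap_accepted(beta_i, beta_j, E_i, E_j, rng=None):
    """Metropolis criterion for exchanging configurations between two
    replicas at inverse temperatures beta_i and beta_j with potential
    energies E_i and E_j. Returns (accepted, acceptance_probability)."""
    rng = rng or random.Random()
    delta = (beta_i - beta_j) * (E_i - E_j)
    p = min(1.0, math.exp(delta))
    return rng.random() < p, p

# A swap that hands the colder replica (beta_i) the lower-energy
# configuration is always accepted (p = 1):
accepted, p = swap_accepted(beta_i=1.0, beta_j=0.5, E_i=-5.0, E_j=-10.0)
```

Repeating this test between neighboring replicas lets high-temperature replicas carry the system over barriers that trap a conventional simulation.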
Speeding up the scientific process at experimental x-ray facilities through the use of Gaussian process emulators
Kelly Moran, Duke University
Practicum Year: 2017
Practicum Supervisor: Earl Lawrence, Scientist, Computer, Computational, and Statistical Sciences, Los Alamos National Laboratory
I worked on an ongoing LDRD project over the summer. The goal of this project is to develop the capability to accelerate knowledge and discovery from experimental scientific facilities, in the context of dynamic compression experiments. These experiments map a multi-dimensional input parameter space (some inputs estimated, some set by the experimenter) to a multi-dimensional output space. Inputs the experimenter sets include parameters such as the time delay of the X-ray probe pulse and the angle of the X-rays relative to the shock. Those that must be estimated include shock pressure, material strength, and crystal orientations. The measured outputs include velocimetry, diffraction, and imaging. The statistical component of the project focuses on reducing experimental uncertainty via pre-built Gaussian process emulators that can be used quickly in later analyses. The hope is that emulation can facilitate accurate experiment calibration, i.e., determining the distribution of physics parameters that best matches the data. My work involved incorporating both distributional and Markov chain Monte Carlo (MCMC) uncertainty into the prebuilt emulator and parallelizing the code. I also compared the performance of the Metropolis-Hastings algorithm and Hamiltonian Monte Carlo for parameter estimation in this context.
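As a toy illustration of the calibration step, the sketch below runs random-walk Metropolis-Hastings over a single physics parameter, with a simple invented function standing in for the prebuilt GP emulator's mean prediction; the stand-in function, noise level, and data are all hypothetical, chosen only to show the mechanics:

```python
import math
import random

def metropolis_calibrate(emulator, y_obs, sigma, theta0,
                         n_steps=5000, step=0.1, seed=0):
    """Random-walk Metropolis over a scalar parameter theta, using an
    emulator in place of the expensive forward model."""
    rng = random.Random(seed)

    def log_post(theta):              # flat prior, Gaussian likelihood
        r = y_obs - emulator(theta)
        return -0.5 * (r / sigma) ** 2

    theta, lp = theta0, log_post(theta0)
    samples = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

# Hypothetical stand-in for the GP emulator's mean prediction:
samples = metropolis_calibrate(lambda t: t * t, y_obs=4.0, sigma=0.2,
                               theta0=1.0)
```

Because each posterior evaluation calls the cheap emulator rather than a full physics simulation, chains like this can be run long enough (or in parallel) to map the parameter distribution consistent with the data.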
Extending Accelerated MD methods to soft-matter systems
Laura Watkins, University of Chicago
Practicum Year: 2017
Practicum Supervisor: Arthur Voter, Theoretical Division, Los Alamos National Laboratory
Accelerated molecular dynamics (AMD) encompasses a group of methods aimed at simulating systems on long timescales not attainable with regular MD. These methods are fairly well developed for hard-material systems, but applying them to softer systems (such as proteins) is much more difficult and remains an open problem. I worked on extending AMD methods to such systems; specifically, I focused on how to define kinetic states for these flexible systems.
Stability of pick-up ion distributions in the outer heliosheath
Kathleen Weichman, University of Texas
Practicum Year: 2017
Practicum Supervisor: Gian Luca Delzanno, T-5 Applied Mathematics and Plasma Physics, Los Alamos National Laboratory
The IBEX ribbon, a bright streak of energetic neutral atom emission observed by the IBEX spacecraft, is believed to be caused by pick-up ions (PUIs) in the outer heliosheath experiencing the local interstellar magnetic field (LISM). These pick-up ions originate as fast solar wind neutrals and undergo charge exchange in the outer heliosheath, launching them into a helical orbit along the magnetic field lines. If the direction of the solar wind neutral is perpendicular to the LISM, the orbit is circular rather than helical and, provided the PUI distribution is not destroyed by instabilities, another eventual charge exchange may send the particle back towards Earth as an energetic neutral atom, where it can be collected by IBEX. While this is the generally accepted explanation of the IBEX ribbon, the survival of PUI distributions in the outer heliosheath is called into question by a simple linear stability analysis. Traditional particle-in-cell (PIC) simulation methods have thus far been unable to capture PUI dynamics over the two-year charge exchange time due to the necessity of resolving the short ion (~40 s) and electron (sub-second) scales. The goal of my practicum project was to apply a new simulation tool, the Spectral Plasma Solver (SPS), to the PUI stability problem in the hope of making a definitive statement about the proposed origin of the IBEX ribbon. Because SPS is an implicit spectral Vlasov method, it has the advantages over PIC methods of being free from statistical noise and able to step over fast time scales. During my practicum, I successfully simulated realistic pick-up ion distributions while stepping over the electron time scale by a factor of 200,000, a first for this problem.
Code Interfacing for Practical Implementation of the Coupled Wavepackets Algorithm for Nonadiabatic Dynamics
Morgan Hammer, University of Illinois at Urbana-Champaign
Practicum Year: 2016
Practicum Supervisor: Sergei Tretiak, Technical Staff Member, Theoretical Division and CINT, Los Alamos National Laboratory
The goal of this project was to interface two in-house codes produced within the Tretiak group in order to allow the recently developed coupled wavepackets algorithm to be used to study molecular systems. Previously, the algorithm has only been used to study model systems.
Optimization of Parameterizations for Density Functional Tight Binding Theory using Machine Learning
Aditi Krishnapriyan, Stanford University
Practicum Year: 2016
Practicum Supervisor: Marc Cawkwell, Staff Scientist, Los Alamos Theoretical Division, Los Alamos National Laboratory
In this project, a novel, fully automated optimization package utilizing machine learning techniques was used to optimize density-functional-based tight-binding (DFTB) parameters described by simplified semi-empirical functional forms. The goal was to approach the accuracy of density functional theory while maintaining the speed of tight-binding calculations. This parameterization scheme is transferable and, upon optimization, greatly reduces errors in atomization energy, molecular geometry, and molecular dipole moment. The error is also minimized for initial parameters perturbed by up to 10%, demonstrating flexibility in the choice of initial parameter guesses. The optimization package was applied to LATTE, a tight-binding code developed at LANL.
Cold atmospheric plasma-based electrostatic disruption of bacteria and cancer cells
Kathleen Weichman, University of Texas
Practicum Year: 2016
Practicum Supervisor: Gian Luca Delzanno, Research Scientist, T-5 Applied Mathematics and Plasma Physics, Los Alamos National Laboratory
The search for novel bacterial disinfection and cancer treatment techniques has resulted in a new application for cold atmospheric plasma (CAP) devices at the intersection of plasma physics and medicine. CAP exposure has been successfully used to destroy bacteria and selectively kill cancer cells in vitro and in vivo, but the theoretical underpinning has neglected a full discussion of plasma physics effects related to the experimental parameter regime. My practicum project was to bring a discussion of plasma charging in collisional plasmas to the field of plasma medicine. Specifically, previously neglected plasma capacitance effects lower the threshold for electrostatic disruption of bacteria and render possible the selective disruption of cancer cells under direct plasma exposure.
Power System Estimation
Thomas Catanach, California Institute of Technology
Practicum Year: 2014
Practicum Supervisor: Russell Bent, Staff Scientist, Energy and Infrastructure Analysis, Los Alamos National Laboratory
Developing methods for state estimation and system identification is essential for increasing the reliability of the power grid, which is becoming increasingly complex and subject to more disturbances. Typically this problem has been solved on steady-state time scales; however, dynamics are becoming more important to power systems, necessitating faster estimation. With the deployment of phasor measurement units (PMUs) throughout the system, this fast estimation is now possible. Achieving it calls for a layered learning architecture that integrates state estimation, change-point detection, and classification of disturbances. Thinking of these estimation algorithms and the controls as a layered system improves our ability to design optimal architectures that are both fast and flexible. State estimation can be achieved using Kalman-filtering and particle-filtering techniques, which assume a system topology and dynamics model. We adapted these techniques to the differential-algebraic equations that describe the power system and explored their robustness to noise estimates and to the number of PMUs. Using the estimates from these filters, we can make forward predictions of the future system state, which can then be compared to the actual PMU data to identify large unexpected deviations. These change points then trigger a topology-change classifier to identify the new topology of the system after a failure such as a line loss.
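The filtering and change-point layers can be illustrated with a minimal scalar Kalman filter; the model matrices here are hypothetical scalars for illustration, not the power system's differential-algebraic equations:

```python
def kalman_step(x, P, z, A=1.0, Q=1e-4, H=1.0, R=1e-2):
    """One predict/update cycle of a scalar Kalman filter.
    x, P -- prior state estimate and its variance
    z    -- new measurement (e.g. a PMU sample)."""
    x_pred = A * x                        # predict
    P_pred = A * P * A + Q
    S = H * P_pred * H + R                # innovation variance
    K = P_pred * H / S                    # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

def is_change_point(x_pred, P_pred, z, H=1.0, R=1e-2, n_sigma=4.0):
    """Flag a measurement whose innovation is implausibly large under
    the current model -- the trigger for the topology classifier."""
    S = H * P_pred * H + R
    return abs(z - H * x_pred) > n_sigma * S ** 0.5
```

The same structure carries over to the vector case: the filter tracks the state under an assumed topology, and a sudden string of large innovations signals that the topology assumption has broken, e.g. after a line loss.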
Chance constrained optimal power flow
Miles Lubin, Massachusetts Institute of Technology
Practicum Year: 2014
Practicum Supervisor: Russell Bent, Staff Scientist, Energy and Infrastructure Analysis Group, Los Alamos National Laboratory
During the practicum, I worked with researchers at LANL, fellow summer students, and a professor at Columbia on developing, implementing, and evaluating a model for integrating highly variable renewable energy from wind into a power-grid control problem called optimal power flow, which sets generation levels to match demand on a short term scale. Treating deviations from wind generation forecasts as a random variable, we introduced so-called chance constraints into the optimization problem using a model that remained practically tractable. In a realistic computational study, we found that the model had tangible operational benefits in terms of reducing costs and real-time corrective actions.
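For a single line limit with Gaussian forecast error, a chance constraint has a standard deterministic equivalent, which is the kind of reformulation that keeps such models tractable. A minimal sketch (the numbers are illustrative, not from the study):

```python
from statistics import NormalDist

def tightened_limit(limit, sigma, epsilon):
    """Deterministic equivalent of the chance constraint
        P(flow + w <= limit) >= 1 - epsilon,   w ~ N(0, sigma^2):
    the scheduled flow must satisfy
        flow <= limit - sigma * z_{1-epsilon}."""
    z = NormalDist().inv_cdf(1.0 - epsilon)
    return limit - sigma * z

# With a 100 MW line, 10 MW of wind-forecast standard deviation, and a
# 5% allowed violation probability, the usable limit shrinks to ~83.6 MW.
usable = tightened_limit(limit=100.0, sigma=10.0, epsilon=0.05)
```

Replacing each probabilistic constraint by a tightened linear one is what lets the optimization remain a problem of the same class as ordinary optimal power flow.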
Task-based dictionary learning using neural networks
Britni Crocker, Massachusetts Institute of Technology
Practicum Year: 2013
Practicum Supervisor: Garrett Kenyon, Physics, Los Alamos National Laboratory
We used a neural-network-based model for sparse coding with Hebbian connections to learn a task-based dictionary for both image reconstruction and image categorization. Previous efforts in this area have built such dictionaries with greedy algorithms or by creating many one-vs-all dictionaries for each category. Our approach was to train all layers of the neural network simultaneously, with one dictionary to separate all categories; this way, our algorithm is easily implementable in hardware and scales well with the number of categories.
Linear-Multi-Frequency-Grey Preconditioning for Radiative Transfer Sn Calculations
Andrew Till, Texas A&M University
Practicum Year: 2013
Practicum Supervisor: Jim Warsa, Computational Physics and Methods Group (CCS-2), Los Alamos National Laboratory
I worked on a neutral-particle physics code at the lab, implementing an acceleration scheme to reduce the number of iterations required for convergence, and compared two possible formulations of the method for efficiency. For those with a nuclear engineering background: we were working in Capsaicin, investigating linear multifrequency grey (LMFG) preconditioning of the radiation transport equation applied to thermal photons. We investigated using either the scalar flux or the absorption rate density as the primary unknown. The advantage of the former is that scattering can be accounted for cheaply; the advantage of the latter is that the vectors are smaller, which ought to lead to faster computation. We found that the difference in vector size had a negligible effect, but the ability to handle scattering without inner iterations had a strong effect on iteration count and time to solution.
Electronic descriptors for the prediction of photovoltaic properties of polymers
Jarrod McClean, Harvard University
Practicum Year: 2012
Practicum Supervisor: Sergei Tretiak, Staff Scientist, Theoretical Division Group T-1/CINT, Los Alamos National Laboratory
The project involved taking a set of molecules, whose properties are known experimentally, and attempting to predict the open circuit voltage which results from a bulk-heterojunction photovoltaic built from a polymer of that molecule and PCBM. We wished to build a set of electronic descriptors which could be used to predict the performance of certain materials before they are manufactured. These electronic descriptors were derived from ab initio quantum chemistry calculations.
Using Chemical and Structural Features to Predict Transcription Factor Binding Sites
Mark Maienschein-Cline, University of Chicago
Practicum Year: 2011
Practicum Supervisor: Bill Hlavacek, Center for Nonlinear Studies, Los Alamos National Laboratory
My project aimed to use known transcription factor binding sites in DNA to predict other sites. For many transcription factors, a small number (ranging from a handful to several dozen) of binding sites are known from direct experimental evidence. Many methods exist that use the DNA letter sequences of these binding sites to construct a position weight matrix (PWM), which is then used to predict binding sites. However, transcription factors and DNA are molecules, so their interaction is governed by the local shape and electrostatics of the DNA, not by the DNA letter sequence. Our goal was to summarize these interactions by computing structural and chemical features of DNA and DNA-transcription factor complexes, and to use these features to train a support vector machine (SVM) to classify (predict) other potential binding sites. We obtained a significant improvement over the usual PWM methods, which we can attribute both to the SVM algorithm used and to the specific chemical and structural features we calculated.
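For contrast with the SVM approach, the baseline PWM method is easy to sketch: count base frequencies at each position of the known sites and score candidates by summed log-odds against a background model (the pseudocount and uniform background below are illustrative choices):

```python
import math

def build_pwm(sites, pseudocount=1.0):
    """Position weight matrix (log-odds versus a uniform background)
    built from aligned known binding sites of equal length."""
    alphabet = "ACGT"
    length = len(sites[0])
    pwm = []
    for i in range(length):
        counts = {b: pseudocount for b in alphabet}
        for site in sites:
            counts[site[i]] += 1.0
        total = sum(counts.values())
        pwm.append({b: math.log2(counts[b] / total / 0.25)
                    for b in alphabet})
    return pwm

def pwm_score(pwm, seq):
    """Summed log-odds score of a candidate site."""
    return sum(column[base] for column, base in zip(pwm, seq))
```

A scan slides this score along the genome and thresholds it; the limitation the project addresses is that the score sees only letters, not the shape or electrostatics those letters produce.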
Dynamics of the Quantum Phase Transition in the Mixed Field Ising Model
Norman Yao, Harvard University
Practicum Year: 2011
Practicum Supervisor: Wojciech Zurek, Laboratory Fellow, Theory Division (T-4), Los Alamos National Laboratory
The transverse field Ising model (TFIM) is one of the paragons of a quantum phase transition; when the coupling and field strength are equivalent, it exhibits a transition between a ferromagnetic and paramagnetic state. Amazingly, this complex model of elementary spins can actually be solved exactly by mapping the problem onto that of non-interacting fermions via the Jordan-Wigner transformation. However, once a longitudinal field is turned on, the model is generally no longer exactly solvable - except at the TFIM critical point. The solution at this critical point was developed by Zamolodchikov and involves a mapping to an Ising field theory. Recent experiments by Coldea et al. have claimed to demonstrate a remarkable prediction of the field theory, namely that the lowest energy eigenstates are governed by the E8 Lie algebra. In my project, rather than examining static energies, we are examining the dynamics of the mixed field transition, in the hope that the emergent E8 symmetry will leave artifacts in a quench experiment.
Ice Sheet Model Integration
Tobin Isaac, University of Texas
Practicum Year: 2010
Practicum Supervisor: William Lipscomb, Computational Fluid Dynamics (T-3), Los Alamos National Laboratory
For the first time ever, climate models are coming online which include dynamic ice sheets that interact with the ocean and atmosphere. At the same time, a plethora of newer, more sophisticated models of ice sheet dynamics are being designed by researchers around the world. The project was to create a common interface for these models with the Community Ice Sheet Model (CISM), which is the ice component of the Community Earth Systems Model (CESM). Such an interface allows modelers to take advantage of realistic forcing from CISM, and also allows CISM to seamlessly integrate advances as they occur.
cl.egans: A high-performance spiking neural network simulation package
Cyrus Omar, Carnegie Mellon University
Practicum Year: 2010
Practicum Supervisor: Garrett Kenyon, Staff Scientist, Physics, Los Alamos National Laboratory
cl.egans is an OpenCL-accelerated, Python-based neurobiological circuit simulation package which I developed over the summer, concurrently with the development of a new programming language for OpenCL called cl.oquence. This language incorporated a novel static, structural type system with automatic type inference, hosted within a parent dynamic language. This setup was leveraged to produce an extensible type system that merged the concepts of LISP-style macros and metaobject protocols. Using these features, cl.egans operated as a tree-based simulation-construction language, and included features such as automatic replication for simulations that require multiple realizations of a single network, as well as several analysis tools.
Velvetrope: an algorithm for rapidly finding local alignments between a sequence of interest (SOI) and multiple test sequences.
Scott Clark, Cornell University
Practicum Year: 2009
Practicum Supervisor: Nick Hengartner, Group Leader, Discrete Simulation Sciences (CCS-5), Los Alamos National Laboratory
We developed an algorithm that rapidly finds local alignments within genetic sequences. It uses a novel bit-shift algorithm that allows it to find areas of highly probable local alignment regardless of positioning within a sequence. It can be used to find subsequences of interest within a larger sequence compared against others, or to discover new highly conserved binding regions. One of its main advances is speed: it is orders of magnitude faster than the current standard multiple sequence alignment algorithms.
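The core comparison can be sketched as follows; Velvetrope performs it with packed bitwise operations for speed, whereas this readable version scans each relative shift explicitly (window size and identity threshold are illustrative):

```python
def local_alignment_hits(soi, test, window=7, min_identity=1.0):
    """Slide `test` across the sequence of interest (SOI) at every
    relative offset; report (offset, start) windows whose fraction of
    identical characters meets the threshold."""
    n, m = len(soi), len(test)
    hits = []
    for offset in range(-(m - 1), n):
        # agreement profile of the overlapping region at this shift
        lo, hi = max(0, offset), min(n, offset + m)
        agree = [soi[i] == test[i - offset] for i in range(lo, hi)]
        for start in range(len(agree) - window + 1):
            if sum(agree[start:start + window]) >= min_identity * window:
                hits.append((offset, start))
    return hits

# A conserved 7-mer embedded in unrelated flanking sequence:
hits = local_alignment_hits("AAAAGATTACACCCC", "GATTACA")
```

Because each shift's agreement profile is independent of where the conserved region sits, the method finds it at any position; encoding the profile as machine words and using XOR/popcount makes the same scan orders of magnitude faster.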
Comparative Monte Carlo Efficiency by Monte Carlo Analysis
Brenda Rubenstein, Columbia University
Practicum Year: 2009
Practicum Supervisor: James Gubernatis, Theoretical Division (T-4), Los Alamos National Laboratory
The acceptance ratio has long been a trusted rule of thumb for characterizing the performance of Monte Carlo algorithms. But is this trust entirely merited? In this work, we illustrated that the second eigenvalue of a Markov Chain Monte Carlo algorithm's transition matrix is more indicative of the algorithm's underlying convergence than is an acceptance ratio. By monitoring the second eigenvalue of the Metropolis and Multiple-Site Heat Bath algorithms as applied to the one- and two-dimensional Ising models, and that of the Metropolis algorithm as applied to a series of coupled oscillators with infinite numbers of transition matrix elements, we found that the second eigenvalue is better able to capture convergence behavior that is temperature-independent. Furthermore, trends in the second eigenvalue suggested that the Metropolis algorithm converges faster than Multiple-Site Heat Bath algorithms and that the convergence of all algorithms slows as system sizes grow. The second eigenvalue was computed for small system sizes via standard matrix diagonalization methods as well as a deterministic modified power method. For system sizes whose subdominant eigenvalues could not be obtained deterministically without excessive computational expense, we employed a novel Monte Carlo version of the modified power method. This new approach becomes of paramount importance in the study of chained oscillators, as it represents the simplest algorithm currently available for calculating the second eigenvalues of systems with a continuous phase space. Our work outlined new approaches for characterizing the performance of Monte Carlo algorithms and determining the second eigenvalue of a very general class of matrices and kernels that can be applied throughout the physical sciences.
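The deterministic version of the idea can be sketched for a toy case: build the exact single-spin-flip Metropolis transition matrix for a tiny periodic Ising chain and power-iterate on the subspace orthogonal to the dominant eigenvector. The system size, temperature, and iteration count below are illustrative, and this plain power iteration is a stand-in for the modified power methods used in the work:

```python
import itertools
import math
import random

def metropolis_matrix(n=3, beta=0.5):
    """Exact single-spin-flip Metropolis transition matrix for a
    periodic 1D Ising chain of n spins (2**n states)."""
    states = list(itertools.product((-1, 1), repeat=n))
    index = {s: k for k, s in enumerate(states)}

    def energy(s):
        return -sum(s[i] * s[(i + 1) % n] for i in range(n))

    P = [[0.0] * len(states) for _ in states]
    for s in states:
        k = index[s]
        for i in range(n):                 # propose flipping spin i w.p. 1/n
            t = list(s)
            t[i] = -t[i]
            t = tuple(t)
            accept = min(1.0, math.exp(-beta * (energy(t) - energy(s))))
            P[k][index[t]] = accept / n
        P[k][k] = 1.0 - sum(P[k])          # rejection mass on the diagonal
    return P

def second_eigenvalue(P, iters=2000, seed=1):
    """Estimate |lambda_2| by power iteration on left vectors kept
    orthogonal to the all-ones right eigenvector of lambda_1 = 1."""
    rng = random.Random(seed)
    x = [rng.random() for _ in P]
    mean = sum(x) / len(x)
    x = [v - mean for v in x]              # project out the lambda_1 mode
    scale = max(abs(v) for v in x)
    x = [v / scale for v in x]
    log_growth = 0.0
    for _ in range(iters):
        y = [sum(x[i] * P[i][j] for i in range(len(P)))
             for j in range(len(P))]
        norm = max(abs(v) for v in y)
        log_growth += math.log(norm)       # x entered with max-abs norm 1
        x = [v / norm for v in y]
    return math.exp(log_growth / iters)
```

The closer the returned value is to 1, the slower the chain converges, regardless of what the acceptance ratio happens to be.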
Negative Flux Fixups for Discontinuous Finite Element SN Transport
Steven Hamilton, Emory University
Practicum Year: 2008
Practicum Supervisor: James Warsa, Computer, Computational and Statistical Sciences, Los Alamos National Laboratory
My practicum project involved developing numerical algorithms to remedy the occurrence of negative solutions which arise in solving the radiation transport equation. The true solution to the radiation transport equation is always positive, and so artificial negative solutions are extremely undesirable as they can lead to instabilities in various solution strategies. By adding a non-linear "fixup" to an existing transport solver, the goal is to produce an output which satisfies known physical properties of the true solution.
Wavelet Transform techniques in Multigrid and Asynchronous Fast Adaptive Composite (AFAC) algorithms
Zlatan Aksamija, University of Illinois at Urbana-Champaign
Practicum Year: 2007
Practicum Supervisor: Bobby Philip, Technical Staff Member, T7 Theory, Simulation, and Computation Directorate, Los Alamos National Laboratory
This project focused on using the wavelet transform techniques to decouple coarse and fine components of a solution as part of coarsen and refine operators in multigrid and composite grid solvers. The Wavelet transform has advantages in terms of flexibility, computational efficiency, and power to resolve different scales of a solution which allow it to be used effectively in multigrid-based algorithms. We were able to show that the perfect reconstruction properties of the wavelet transform make it possible to accomplish asynchronous algorithms with excellent convergence properties. This is especially useful for large-scale parallel solvers since various scales of a problem are effectively decoupled and can be iterated on independently and in parallel.
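The perfect-reconstruction property the method relies on is easy to demonstrate with the simplest wavelet, the Haar transform; this is a minimal sketch, and the project's actual filters may differ:

```python
def haar_forward(x):
    """One level of the (unnormalized) Haar transform: pairwise
    averages (coarse scale) and pairwise half-differences (fine
    detail). Assumes len(x) is even."""
    coarse = [(a + b) / 2.0 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2.0 for a, b in zip(x[0::2], x[1::2])]
    return coarse, detail

def haar_inverse(coarse, detail):
    """Exactly invert haar_forward: coarse +/- detail recovers the
    original even/odd samples."""
    out = []
    for c, d in zip(coarse, detail):
        out.extend((c + d, c - d))
    return out
```

Because the coarse and detail channels together carry exactly the original information, a solver can iterate on them independently (e.g. on different processors) and still reassemble a consistent solution.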
Comparison of a Rule-Based and a Traditional Pathway Model of a Signal Transduction System
Jordan Atlas, Cornell University
Practicum Year: 2007
Practicum Supervisor: James Faeder, Technical Staff Member, Theoretical Biology and Biophysics Group (T-10), Los Alamos National Laboratory
For the summer practicum I investigated parameter estimation for rule-based models of biological systems with James Faeder at Los Alamos National Laboratory. A rule-based model is one where a set of generalized reactions (i.e. rules) specify the features of proteins that are required for or affected by a particular protein-protein interaction. Parameter estimation studies in rule-based models are important because it is unclear to what degree the predictions of rule-based models can be constrained by experimental data. Therefore, a better understanding of these models and their parameter sensitivity could lead to better predictions in models of complex biological networks. Dr. Faeder's group has developed the BioNetGen software for generating sets of chemical species and reactions from sets of rules. The overall goal of this project was to examine the extent to which parameter estimates for rule-based models can be refined based on qualitative observations. In particular, we would like to determine what kinds of information have the largest effect on reducing the size of the feasible parameter space, by which we mean the range of parameters over which the model predictions remain consistent with the data, and the magnitude of the uncertainty in the model predictions.
Adaptive Mesh Refinement for Modeling Magneto-Hydrodynamic Plasmas
Mark Berrill, Colorado State University
Practicum Year: 2007
Practicum Supervisor: Bobby Philip, Technical Staff Member, T-7, Mathematical Modelling and Analysis, Los Alamos National Laboratory
We worked on modifying a magneto-hydrodynamic code called pixie3d to include adaptive mesh refinement. Pixie3d is a plasma code that is used to model several phenomena, including magnetic reconnection in tokamaks. Because of the difference in length scales between the feature size of the current sheets (which must be resolved) and the size of the plasma, it is impossible to use a single fixed grid to cover the entire domain in 3D. The project involved merging the code pixie3d with a software package called SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) to allow for different grid resolutions over different parts of the domain. Additionally, these resolutions can be changed and adapted as the problem evolves.
Multilevel upscaling for multiphase porous flow.
Ethan Coon, Columbia University
Practicum Year: 2006
Practicum Supervisor: David Moulton, Staff Researcher, T-7 Mathematical Modeling and Analysis, Los Alamos National Laboratory
Many geophysical applications, including porous flow, require the resolution of fine scale features and parameters on coarse scale models. Simply averaging out the fine scale often loses important information about small scale features such as interfaces that greatly change the global dynamics. Therefore, we have worked to derive and apply upscaling methods that more accurately represent the effects of fine scale data on coarse scale simulations.
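A one-dimensional example shows why naive averaging fails: flow through layers in series is governed by the harmonic mean of the permeabilities, which a thin low-permeability interface dominates, while the arithmetic mean barely notices it (the values below are illustrative):

```python
def arithmetic_mean(values):
    """Naive upscaling: simple average of the fine-scale values."""
    return sum(values) / len(values)

def harmonic_mean(values):
    """Effective permeability of equal-thickness layers in series."""
    return len(values) / sum(1.0 / v for v in values)

# Two permeable layers separated by a thin, nearly impermeable one:
layers = [100.0, 0.01, 100.0]
k_arith = arithmetic_mean(layers)   # ignores the barrier
k_harm = harmonic_mean(layers)      # the barrier controls the flow
```

Multilevel upscaling methods aim to build coarse-scale operators that, like the harmonic mean here, honor the fine-scale features that actually control the global dynamics.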
Modeling genetic regulation as a highly canalized boolean network
Jeffrey Drocco, Princeton University
Practicum Year: 2006
Practicum Supervisor: Cynthia Reichhardt, T-12, Los Alamos National Laboratory
This project seeks to understand, in a very basic way, how organisms balance stability of phenotype with genetic variation by modeling genes as binary switches that interact via boolean functions. Theoretical studies suggest that any network that can plausibly model this phenomenon is of the highly canalized type, but few further details are known.
Parameter Estimation in a Kinetic Model of the marRAB Operon in Escherichia coli
David Markowitz, Princeton University
Practicum Year: 2006
Practicum Supervisor: Michael Wall, Team Leader, Computer and Computational Sciences & Bioscience, Los Alamos National Laboratory
The objective of this project was to build a kinetic model of an activatable operon, marRAB, in the E. coli bacterium. We explored the relationship between free parameters in this model and their effects on transcriptional output. By matching simulated expression profiles to experimental data, we were able to constrain free parameters and make experimentally testable predictions for previously unknown equilibrium constants in this system.
Accurate and Robust Monte Carlo-Diffusion Interfaces
Gregory Davidson, University of Michigan
Practicum Year: 2005
Practicum Supervisor: Jeff Densmore, Staff Scientist, CCS4, Los Alamos National Laboratory
Monte Carlo is a technique used for solving the radiative transfer equations computationally. The diffusion equation is an approximation to the radiative transfer equation that is accurate in certain (diffusive) regimes. Discrete Diffusion Monte Carlo is a computational technique whereby a discrete diffusion equation is solved using a particle-based Monte Carlo technique in those regimes where the diffusion approximation is accurate, and the radiative transfer equation using traditional Monte Carlo is used elsewhere. This project was concerned with accurately interfacing the Monte Carlo and the discrete diffusion domains. First, we investigated an emissivity-preserving interface. Emissivity must be preserved to ensure that radiation penetrates into diffusive regions accurately. We derived an emissivity-preserving scheme that correctly allowed radiation to penetrate into diffusive regions. Secondly, we investigated asymptotically-correct angular distributions for diffusion particles leaking out of diffusive regions as well as Monte Carlo particles that are not allowed to penetrate into diffusive regions. Previous methods always used an isotropic angular distribution, which is generally not correct.
A Hybrid Monte Carlo-Deterministic Transport Method for Efficient Global Transport Solutions
Allan Wollaber, University of Michigan
Practicum Year: 2005
Practicum Supervisor: Todd Urbatsch, Computer and Computational Sciences (CCS-4), Los Alamos National Laboratory
We introduce a new hybrid transport method for solving global neutral particle problems. In the method, one generates an estimate of the global solution using an inexpensive deterministic method and calculates the multiplicative correction to this solution using known Monte Carlo techniques. We demonstrate the method on 1D time dependent and steady state neutron transport problems, and show that it is very competitive for problems in which there are large gradients in the flux (for example, wavefronts and deep penetration problems).
Geometric Monodromy & Variational Integrators
Nawaf Bou-Rabee, California Institute of Technology
Practicum Year: 2004
Practicum Supervisor: Darryl D. Holm, Lab fellow, Center for Nonlinear Studies T-7, Los Alamos National Laboratory
This summer involved extending ideas from recent progress in geometric monodromy and variational integration theory to answer fundamental questions on the global behavior of dynamical systems. Geometric monodromy is a powerful new way to look at the global phase space of a dynamical system (see I. Stewart's "Quantizing the classical cat", Nature 430: 731-732, [2004]). Darryl guided me through this research area this summer. Variational integration is a numerical technique that (to machine roundoff) discretely preserves symmetries and the symplectic structure of a dynamical system. We were primarily concerned with understanding the variational structure of some new integration methods that have excellent properties (see P. Krysl's "On Endowing an Explicit Time Integrator for Rotational Dynamics of Rigid Bodies with Conservation Properties" submitted to I. J. for Numerical Methods in Eng'g. [2004]).
Robustness in Genetic Circuits: Clustering of Functional Responses
Mary Dunlop, California Institute of Technology
Practicum Year: 2004
Practicum Supervisor: Michael Wall, Technical Staff Member, Computer and Computational Sciences, Los Alamos National Laboratory
We all know about DNA - the double helix that encodes genetic information - but how is that information processed, how is it used in the cell? The information encoded in a strand of DNA is copied and then translated into a protein that does something useful for the cell. For example, the protein may be an enzyme that breaks down sugars. Gene expression - whether proteins are made from the DNA or not - can be turned on and off in response to external and internal stimuli. Feedback and feed-forward loops are used to regulate the gene expression process. These control elements ensure that genes can be expressed quickly and accurately in response to stimuli. There are certain characteristic patterns that occur over and over in genetic regulatory networks throughout different parts of the cell. Why are these network motifs so common? What is it about their structure that favors them over other network configurations? If we know the structure of a network, can we determine its function?
Singular Solutions to a Partial Differential Equation for Computer Imaging
Samuel Stechmann, New York University
Practicum Year: 2004
Practicum Supervisor: Darryl Holm, Laboratory Fellow, T-7, Los Alamos National Laboratory
In computer imaging, a partial differential equation (PDE) called "EPDiff" arises in problems of deforming one image into another. The equation has been studied in Euclidean space, and some researchers have suggested that more complicated spaces could also be applicable for computer imaging problems. As a first step to understanding EPDiff on non-Euclidean spaces, we studied it on two simple non-Euclidean spaces: the sphere and hyperbolic space. The solutions we focused on were singular solutions which have a peak, giving them a jump in their first derivative and making them difficult to handle numerically.
Developing an Efficient Algorithm for Parallel MCNPX Kcode Calculations
Nathan Carstens, Massachusetts Institute of Technology
Practicum Year: 2003
Practicum Supervisor: Gregg McKinney, Technical Staff Member, MCNPX (D10), Los Alamos National Laboratory
My research at Los Alamos National Laboratory focused on improving the efficiency of MCNPX parallel kcode calculations while exactly tracking the sequential code. MCNP is a large radiation transport code with about 3,000 users, probably making it the most widely used nuclear science code. While MCNP performs well in parallel source calculations, parallel kcode calculations were strongly limited by significant communication requirements during the calculation. My new algorithm eliminated the vast majority of communication during kcode calculations, allowing more efficient utilization of large parallel machines. Preliminary test results show an order-of-magnitude speedup on a 60-node cluster when comparing the new and old code. The new code will be incorporated into MCNPX as the default kcode algorithm in December 2003.
Development of an object-oriented, parallel, fully-implicit, finite-volume code for modeling multi-phase subsurface flows
Richard Mills, College of William and Mary
Practicum Year: 2003
Practicum Supervisor: Peter Lichtner, EES-6, Los Alamos National Laboratory
The capability to model multi-phase, reactive subsurface flows in high resolution is important to many environmental missions of national interest. Effective models are necessary for such tasks as environmental remediation of contaminated sites or preventing contamination of important aquifers. I have worked with Peter Lichtner of Los Alamos National Lab to develop a parallel subsurface flow code, PFLOW, to interface with his existing parallel reactive transport code, PTRAN. Coupled together, these codes will be used to study subsurface reactive flow and transport problems at very high resolutions using parallel computers such as the 1024 processor QSC machine at LANL.
Numerical Modeling of Binary Solidification
Nathaniel Morgan, Georgia Institute of Technology
Practicum Year: 2003
Practicum Supervisor: Dr. Brian VanderHeyden, Theoretical Division, Los Alamos National Laboratory
My practicum research at Los Alamos National Laboratory focused on computational modeling of binary alloy solidification using a multi-field approach combined with finite-volume discretization methods. In binary alloy solidification, some unique flow patterns arise whose physical cause is still unknown. The objective of my research was to expand the capabilities of a new multi-physics code for the purpose of better understanding the fluid dynamics associated with binary alloy solidification.
Extension of the "Data Dependent Hypothesis Classes" framework to Regression Problems.
Michael Wu, University of California, Berkeley
Practicum Year: 2003
Practicum Supervisor: Don R. Hush, Dr., Group CCS-3, Los Alamos National Laboratory
This was a theoretical and mathematical practicum, involving proofs of fundamental theorems on uniform convergence of the empirical risk to the expected risk over data-dependent function classes. In traditional VC theory, structural risk minimization uses function classes that are independent of the data. Without specifying the hierarchy of nested hypothesis classes, a learning algorithm could spend considerable resources searching within hypothesis classes that do not contain a good approximation of the target function. Using data-dependent function classes is a general method for incorporating our bias and prior knowledge obtained from the training data. The goal of this practicum was to prove a uniform law of large numbers over data-dependent hypothesis classes for regression problems. This establishes the existence of a consistent learning algorithm over data-dependent hypothesis classes for regression, which can significantly reduce the computational load and possibly give a much better confidence bound in the small-sample limit.
SPH Code Validation and Addition of Particle Splitting
Marcelo Alvarez, University of Texas
Practicum Year: 2002
Practicum Supervisor: Michael Warren, Staff Member, T-6, Los Alamos National Laboratory
The smoothed particle hydrodynamics (SPH) method is a particle-based, gridless Lagrangian method for simulating astrophysical flows. It is very versatile because it naturally allows for adaptive spatial resolution and is free from the complications imposed by solving the hydrodynamic equations on a grid. Recently, Mike Warren and Chris Fryer have begun an exciting collaboration in which they are applying the SPH method to the simulation of core-collapse supernovae, a very computationally demanding problem. This collaboration has already led to the first fully three-dimensional simulation of such a supernova, giving new insight into the puzzle of how these stars explode and how they lead to the remnants we observe today. This was only possible with the development of an efficient, parallel SPH code and access to some of the world's fastest computers. My practicum work consisted of getting to know this SPH code, understanding the algorithm behind it, testing it on problems with known solutions, and trying to improve it by adding new twists to the existing algorithm. In particular, I became involved in adaptive particle splitting, a technique similar in spirit to adaptive mesh refinement (AMR). In adaptive particle splitting, SPH particles are split in regions where more resolution is desired, allowing a significant increase in the dynamic range of the calculation while only modestly increasing the computing time. In future work, I hope to apply the particle splitting method to problems ranging from supernova explosions to the formation of large-scale structure in the universe.
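A minimal 2D sketch of the particle-splitting idea described above (illustrative only; the refinement criteria and kernel handling in the actual production SPH code are more involved): a parent particle is replaced by daughters that share its mass, ring its position, and carry reduced smoothing lengths.

```python
import numpy as np

def split_particle(pos, mass, h, n_children=4, spacing=0.5):
    """Split one 2D SPH particle into n_children daughters.

    Hypothetical refinement rule: daughters share the parent's mass
    equally and sit on a circle of radius spacing*h around the parent.
    """
    angles = 2.0 * np.pi * np.arange(n_children) / n_children
    offsets = spacing * h * np.column_stack([np.cos(angles), np.sin(angles)])
    child_pos = pos + offsets                        # daughters ring the parent
    child_mass = np.full(n_children, mass / n_children)
    # In 2D, smoothing length scales like mass**(1/2) at fixed density.
    child_h = np.full(n_children, h / np.sqrt(n_children))
    return child_pos, child_mass, child_h

child_pos, child_mass, child_h = split_particle(np.zeros(2), mass=1.0, h=0.1)
```

The symmetric placement conserves both total mass and the parent's center of mass, two invariants any splitting rule must respect.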
Method for modeling receptor-ligand interaction without a specified aggregation length.
Annette Evangelisti, University of New Mexico
Practicum Year: 2002
Practicum Supervisor: William S. Hlavacek, Technical Staff Member, Theoretical Biology and Biophysics, T-10, Los Alamos National Laboratory
In this project we developed a method for modeling receptor-ligand interaction that does not restrict the number of interactions or the length of aggregation. Here, this method is applied to a bi-valent receptor and bi-valent ligand but is easily extended to the multi-valent receptor and ligand case. The method utilizes several previously published algorithms to show that the problem is tractable.
Investigation of Excited States of fac-Rhenium Tris Carbonyl Complexes Through DFT and TDDFT calculations
Nouvelle Gebhart, University of New Mexico
Practicum Year: 2002
Practicum Supervisor: Jeff Hay, Staff Member/Laboratory Fellow, Theoretical Chemistry Group, T-12, Los Alamos National Laboratory
This project is a collaboration between physical experimentalists and computational investigators studying six-coordinate Rhenium tris carbonyl complexes. These complexes are being investigated for their potential use in LED devices. The lowest-lying excited state of these complexes has been shown to exist in four different configurations: MLCT (metal-to-ligand charge transfer), LLCT (ligand-to-ligand charge transfer), sigma-to-pi* charge transfer, and redox-separated states. The lowest-lying excited state is important because it influences the non-radiative relaxation of the molecule to the ground state, and hence the viability of the molecule for use in an LED. Currently we are investigating how changing two of the ligation sites to the metal influences this excited state.
Construction of Adaptive Mesh Transport Discretizations that Meet the Thick Diffusion Limit
Heath Hanshaw, University of Michigan
Practicum Year: 2002
Practicum Supervisor: Jim Morel, Transport Methods Group (CCS-4), Computer and Computational Sciences Division, Los Alamos National Laboratory
Radiation transport calculations are generally large in scale, and, when coupled to hydrodynamics calculations, may constitute the overwhelming majority of computational time. Currently, the most effective transport discretization scheme is a discontinuous finite element method (DFEM) developed over the past ten years that meets the thick diffusion limit and can be accelerated with a diffusion preconditioner. However, this scheme does not couple well with hydrodynamics meshes, and in particular, has not been successfully adapted to work on a Cartesian adaptive mesh (and still meet the thick diffusion limit). The goal of this project is to develop a transport discretization scheme that is "simpler" than the DFEM scheme so that it can work on an adaptive mesh, but that still meets the thick diffusion limit and can be effectively accelerated with a diffusion preconditioner.
Quasi-chemical Approximation Applied to an Oil/Water/Surfactant System.
Joyce Noah-Vanhoucke, Stanford University
Practicum Year: 2002
Practicum Supervisor: Lawrence R. Pratt, Technical Staff Member, T-12: Theoretical Chemistry and Molecular Physics, Los Alamos National Laboratory
Using the quasi-chemical approximation, we investigated a system of surfactant chains in a solution of oil and water units. The system was modeled as a 2-dimensional Ising system on a lattice. The goal of the project was to come up with a simple theory to obtain thermodynamic information about the system, and to generate a phase diagram of the system.
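For context, a minimal Metropolis sweep over a 2D Ising lattice of the kind described (spin +1 standing in for an oil unit and -1 for water; the surfactant chains and the quasi-chemical closure itself are not part of this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis(spins, J=1.0, kT=2.0, sweeps=50):
    """Single-spin-flip Metropolis updates on a periodic 2D Ising lattice."""
    n = spins.shape[0]
    for _ in range(sweeps * n * n):
        i, j = rng.integers(n, size=2)
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * J * spins[i, j] * nb              # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            spins[i, j] *= -1
    return spins

spins = rng.choice([-1, 1], size=(16, 16))
spins = metropolis(spins)
```

A quasi-chemical treatment replaces such sampling with an analytic closure for nearest-neighbor pair probabilities, which is what makes a simple theory and phase diagram tractable.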
Coarse Grained Models of Deformation and Phase Transitions
Ryan Elliott, University of Michigan
Practicum Year: 2001
Practicum Supervisor: Avadh Saxena, Staff Member, T-11 Condensed Matter and Statistical Physics, Los Alamos National Laboratory
Parallel Reactive Transport Modeling Using the PETSc Library
Glenn Hammond, University of Illinois at Urbana-Champaign
Practicum Year: 2000
Practicum Supervisor: Peter Lichtner, Staff Scientist, Earth and Environmental Science Division (EES-6), Los Alamos National Laboratory
During my practicum, I developed a fully implicit, reactive transport model using parallel data structures and functions/subroutines from the PETSc (Portable, Extensible Toolkit for Scientific Computation) library developed at Argonne National Laboratory (http://www.mcs.anl.gov/petsc). The purpose of this research was two-fold: (1) to become familiar with PETSc data structures and functionality and (2) to experiment with reactive transport in a parallel-computation environment. I will use the experience gained during the practicum to parallelize the existing reactive transport code, FLOTRAN, developed by Peter Lichtner at Los Alamos National Laboratory.
Modeling of HIV Quasispecies Dynamics and Treatment Strategies
Lee Worden, Princeton University
Practicum Year: 2000
Practicum Supervisor: Alan Perelson, Group Leader, Theoretical Biology and Biophysics, Los Alamos National Laboratory
We used computer models in conjunction with HIV sequence and drug-resistance data to estimate HIV quasispecies structure and to predict the effects of treatment strategies on HIV population dynamics and evolution.
Multigrid on an Irregular Domain
Jon Wilkening, University of California, Berkeley
Practicum Year: 1999
Practicum Supervisor: Pieter Swart, Los Alamos National Laboratory
We developed a multigrid approach to solving elliptic boundary value problems using a uniform grid; the region of interest is multiply-connected, and the boundary does not line up with grid points.
Geometrically Conforming Weight Functions in Moving Least Squares Particle Hydrodynamics (MLSPH)
John Dolbow, Northwestern University
Practicum Year: 1998
Practicum Supervisor: Dr. Gary Diltz, Hydrodynamics Methods, Los Alamos National Laboratory
Moving Least Squares Particle Hydrodynamics (MLSPH) is a new meshless Lagrangian method, adapted from SPH, developed in the XHM group at LANL. One of the central problems with particle codes that model continuum mechanics arises from their geometric representation of the material: it leads to numerical fracture, in which the simulation breaks apart without the presence of a physical fracture model. The representation also prevents the methods from efficiently modeling thin plates and shells. My project involved modeling the particles as arbitrary shapes, and then conforming these shapes to the material deformation. I incorporated the new technique into the existing MLSPH particle code in XHM, and then demonstrated the improvements on several example problems.
AMRH
Matthew Farthing, University of North Carolina
Practicum Year: 1998
Practicum Supervisor: Dr. Gary Diltz, Los Alamos National Laboratory
AMRH is an object-oriented adaptive mesh refinement class library. It is designed to work in both serial and parallel environments. Currently it provides a data-parallel model, but work is being done to include task parallelism.
Integrable vs. Nonintegrable Geodesic Soliton Behavior
Oliver Fringer, Stanford University
Practicum Year: 1998
Practicum Supervisor: Dr. Darryl Holm, Los Alamos National Laboratory
We computed pseudospectral solutions to the family of PDEs that admit delta functions as solutions; convolved with the desired Green's-function shape, these become solitons. Integrable or not, the equations support elastic collisions and sorting by height among the resulting "pulson" shapes. These pulsons exist on an invariant manifold whose composition is set by an arbitrary initial condition.
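In the EPDiff literature these singular solutions take the form of a superposition of momentum-carrying peaks, with the PDE reducing to canonical Hamiltonian dynamics for the peak positions and momenta. In one spatial dimension (a standard form, shown here for orientation rather than as the practicum's exact equations):

```latex
u(x,t) = \sum_{i=1}^{N} p_i(t)\, G\!\left(x - q_i(t)\right), \qquad
H = \frac{1}{2} \sum_{i,j=1}^{N} p_i\, p_j\, G(q_i - q_j), \qquad
\dot{q}_i = \frac{\partial H}{\partial p_i}, \quad
\dot{p}_i = -\frac{\partial H}{\partial q_i}.
```

Here G is the Green's function mentioned above; its peak is what gives the solutions a jump in the first derivative and makes them numerically delicate.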
Development of a Code for the P-1 Equations of Radiation Hydrodynamics
Jeffrey Hittinger, University of Michigan
Practicum Year: 1997
Practicum Supervisor: Dr. Robert Lowrie, Scientific Computing Group, Los Alamos National Laboratory
Currently, many simulations in radiation hydrodynamics use the simple radiation-diffusion model. A less simplified, hyperbolic model for radiation hydrodynamics is the P-1 system of equations. It is of interest to develop a simulation code based on the latter model, but numerically this is challenging, since the P-1 system can have very disparate wave speeds and very stiff source terms. The project was to develop the necessary algorithms for the P-1 system and to implement them for realistic 2D and 3D simulations.
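For orientation, a common gray (frequency-integrated) form of the P-1 system couples the radiation energy density E and radiative flux F to the material temperature T (this generic form is an illustration; the practicum code's precise closure and couplings may differ):

```latex
\partial_t E + \nabla \cdot \mathbf{F} = c\,\sigma_a \left( a_R T^4 - E \right),
\qquad
\partial_t \mathbf{F} + \frac{c^2}{3}\,\nabla E = -\,c\,\sigma_t\,\mathbf{F}.
```

The hyperbolic part carries radiation waves at speed c/sqrt(3), typically far faster than the fluid's sound speed, and the right-hand sides become stiff at large opacities; these are the disparate wave speeds and stiff source terms noted above.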
Magnetohydrodynamic Modelling using the POOMA Framework
Mayya Tokman, California Institute of Technology
Practicum Year: 1997
Practicum Supervisor: Dr. John Reynders, Advanced Computing Laboratory, Los Alamos National Laboratory
I have been developing a magnetohydrodynamic model and implementing code based on it. The project involved extensive use of the POOMA (Parallel Object-Oriented Methods and Applications) Framework.
Long-time Simulations of Large-Aspect-Ratio Reaction-Diffusion Systems
Scott Zoldi, Duke University
Practicum Year: 1997
Practicum Supervisor: Dr. John Pearson, Computational Methods, Los Alamos National Laboratory
This study addressed the computationally difficult problem of simulating the dynamics of reacting chemicals in parameter regimes where, over long times, the dynamics asymptote to characteristic states and patterns. I developed efficient sparse solvers based on GMRES to gain numerical stability and to reach integration times much longer than explicit or time-splitting methods allow.
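As a sketch of the implicit-solver idea (not the practicum code itself), a single backward-Euler step for a 1D diffusion problem can be solved with a sparse GMRES iteration; parameters and grid here are illustrative:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

# Backward-Euler step for u_t = D u_xx on (0,1) with zero Dirichlet BCs.
n, D, dt = 100, 1.0, 1e-3
dx = 1.0 / (n + 1)
L = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / dx**2   # discrete Laplacian
A = sp.eye(n) - dt * D * L            # implicit system: A u_new = u_old

x = np.linspace(dx, 1 - dx, n)        # interior grid points
u_old = np.sin(np.pi * x)
u_new, info = gmres(A, u_old, atol=1e-12)   # info == 0 on convergence
```

Implicit steps like this remain stable at time steps far beyond the explicit diffusion limit, which is what enables the long integrations described above; in a reaction-diffusion code, A would also carry the (linearized) reaction terms.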
Dynamic Fracture Investigations in 2D Brittle Amorphous Systems via Massively Parallel Computation
Michael Falk, University of California, Santa Barbara
Practicum Year: 1996
Practicum Supervisor: Dr. Peter Lomdahl, Theoretical Division, Los Alamos National Laboratory
I carried out preliminary work to simulate brittle fracture in amorphous materials, including the calculation of accurate stress values.
Visualization of Large Scale Models Using the SCIRun Environment
Steven Parker, University of Utah
Practicum Year: 1996
Practicum Supervisor: Dr. Chuck Hansen, Advanced Computing Laboratory, Los Alamos National Laboratory
SCIRun, a computational modeling, simulation, and visualization environment that I am developing at the University of Utah, was applied to new problems of interest to LANL. I used SCIRun to visualize the vector fields in a large-scale simulation of global ocean currents.
Effects of Constraints on Transition Rates in a Model System
James Phillips, University of Illinois at Urbana-Champaign
Practicum Year: 1996
Practicum Supervisor: Dr. Niels Gronbech-Jensen, Theoretical Division, Los Alamos National Laboratory
To study the possible effects of highly constrained (torsion-angle) dynamics on simulations of proteins, a one dimensional periodic lattice of unit masses linked by harmonic and bistable springs was modeled. In this system, the number of degrees of freedom could be varied while maintaining a constant rigidity. It was found that the transition rate varied monotonically with the number of degrees of freedom in the system. The degree of coupling to the Langevin dynamics heat bath had little effect on the transition rate, but the addition of local degrees of freedom to a constrained system was effective in restoring transition rates.
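The bistable-spring transitions at the heart of this model can be illustrated with overdamped Langevin dynamics of a single coordinate in a double-well potential (a minimal stand-in for one bistable spring; the potential, parameters, and seeds below are illustrative):

```python
import numpy as np

def count_transitions(kT, steps=200_000, dt=1e-3, seed=0):
    """Count well-to-well hops for overdamped Langevin dynamics in
    the double well V(x) = (x**2 - 1)**2 (barrier height 1)."""
    rng = np.random.default_rng(seed)
    x, side, crossings = -1.0, -1, 0
    for _ in range(steps):
        force = -4.0 * x * (x * x - 1.0)            # -dV/dx
        x += force * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
        if x * side < -0.8:                          # reached the opposite well
            side, crossings = -side, crossings + 1
    return crossings

hot, cold = count_transitions(kT=0.6), count_transitions(kT=0.25)
```

Consistent with Kramers' picture, the hotter system hops between wells far more often; in the lattice model, the analogous rate is what was tracked as the number of degrees of freedom and the heat-bath coupling were varied.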
Object Oriented Software for PDEs using Overlapping Grids and Serial and Parallel Array Classes for Scientific Computation in C++
Scott Stanley, University of California, San Diego
Practicum Year: 1996
Practicum Supervisor: Dr. David Brown, Scientific Computing, Information and Communications, Los Alamos National Laboratory
This project has concentrated on the development of C++ class libraries that can be used to develop programs in C++ for solving partial differential equations on structured grids in complicated domains. The two main portions of this work are the two separate libraries, A++/P++ and Overture. A++/P++ is an array class library for C++, while Overture is a set of class libraries for the development of overlapping grid PDE solvers.
The Dynamics of Collapse of a Cavitation Bubble near a Boundary
Gordon Hogenson, University of Washington
Practicum Year: 1995
Practicum Supervisor: Dr. Gary Doolan, Complex Systems Group, Los Alamos National Laboratory
The goal of my practicum work was to simulate, using the Lattice Boltzmann method, the dynamics of collapse of a nonequilibrium vapor bubble near a solid surface. Such a collapse produces a high-pressure jet that impinges upon the surface with high velocity, and is the cause of costly 'cavitation damage' to submarine propeller blades. We used a variant of the Lattice Boltzmann method which reproduces a fluid with a non-ideal gas equation of state, and so is capable of reproducing the liquid-vapor phase transition. This study constitutes the first simulation of a collapsing bubble which treats the liquid and gas phases implicitly, as opposed to conventional fluid dynamics, in which the liquid-vapor boundary is treated explicitly via a front-tracking algorithm.
Gyrokinetic Plasma Simulations using Object-Oriented Programming
Edward Chao, Princeton University
Practicum Year: 1994
Practicum Supervisor: Dr. John Reynders, Advanced Computing Laboratory, Los Alamos National Laboratory
Gyrokinetic theory establishes a basis for analyzing plasma behavior when one is interested in the effects of high-wave-number perturbations on the plasma but not in high-frequency phenomena. Computer simulations have successfully utilized the theory. However, the complexity of today's plasma simulation codes and the frequency of computer hardware improvements make the maintenance of these codes difficult. This is the motivation for implementing the object-oriented programming paradigm in current gyrokinetic plasma simulations.
Grand Challenge Molecular Dynamics
Timothy Germann, Harvard University
Practicum Year: 1994
Practicum Supervisor: Dr. Richard LeSar, Theoretical Division/Center for Materials Science, Los Alamos National Laboratory
Researchers in T-11 (Statistical Physics and Condensed Matter Theory) have developed a massively parallel molecular dynamics code capable of simulating up to 600 million atoms (Computers in Physics, Jul/Aug 1993, cover and pp. 382-3). This can be used to model materials, e.g. fracture dynamics.
Topological Characterization of the Strange Attractor of Low-Dimensional Chaotic Systems.
Pete Wyckoff, Massachusetts Institute of Technology
Practicum Year: 1994
Practicum Supervisor: Dr. Nick Tufillaro, Center for Nonlinear Studies, Los Alamos National Laboratory
The project consisted of searching for various invariants in the system and relating them to parameter changes.
Composable Finite Difference Algorithms for Vector Operators
Mark DiBattista, Columbia University
Practicum Year: 1993
Practicum Supervisor: Dr. Mac Hyman, Center for Nonlinear Studies, Los Alamos National Laboratory
Finite difference approximations to the vector operators grad, curl, and div generally lose desirable 'mimetic' properties when composed to create higher-order operators. Borrowing ideas from the more sophisticated finite-volume theory, a consistent set of algorithms was devised that temporarily maps values to grid elements and faces and, when composed, preserves those previously lost properties.
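A small numerical check of the kind of mimetic property at stake: with a staggered-grid curl and divergence, div(curl psi) vanishes to machine precision for any node-based stream function psi. (This particular staggering is a standard textbook illustration, not necessarily the scheme devised in the practicum.)

```python
import numpy as np

rng = np.random.default_rng(1)
n, dx, dy = 16, 0.1, 0.1
psi = rng.standard_normal((n + 1, n + 1))    # stream function on grid nodes

# Staggered "curl" of psi: u lives on vertical faces, v on horizontal faces.
u = (psi[:, 1:] - psi[:, :-1]) / dy          # shape (n+1, n)
v = -(psi[1:, :] - psi[:-1, :]) / dx         # shape (n, n+1)

# Cell-centered divergence of the staggered (u, v) field.
div = (u[1:, :] - u[:-1, :]) / dx + (v[:, 1:] - v[:, :-1]) / dy   # shape (n, n)
```

The telescoping differences cancel exactly, so the composed operator reproduces the continuum identity div(curl) = 0 identically rather than only to truncation error; this is the behavior that naive compositions of one-sided differences lose.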
The Numerical Tokamak Grand Challenge
William Humphrey, University of Illinois at Urbana-Champaign
Practicum Year: 1993
Practicum Supervisor: Dr. John Reynders, Advanced Computing Laboratory, Los Alamos National Laboratory
I designed and implemented an object-oriented particle-in-cell class library that ran on a variety of distributed platforms.
Long-Time Models for Ocean Circulation
David Ropp, University of Arizona
Practicum Year: 1993
Practicum Supervisor: Dr. Mac Hyman, Center for Nonlinear Studies, Los Alamos National Laboratory
Oceans exhibit behavior on a wide range of both spatial and temporal scales. New models of ocean circulation have focused on capturing the long-time dynamics of the system while also resolving the spatial scales containing much of the system's energy. The goal is a model that accurately gives the general circulation patterns and that could be incorporated into global climate models or be used to start up more detailed models.
Spectral Shallow Water Equations Modelling: Comparisons between Serial and Parallel Computing
Eric Williford, Florida State University
Practicum Year: 1993
Practicum Supervisor: Dr. Robert Malone, Advanced Computing Laboratory, Los Alamos National Laboratory
A spectral shallow water model was tested on various machines, including LANL's CM-5. The serial version of the code was adapted to the parallel environment.