Lawrence Livermore National Laboratory


Adding a Wall Model to MIRANDA's Turbulence Library
Bryn Barker, University of North Carolina at Chapel Hill
Practicum Year: 2023
Practicum Supervisor: Britton Olson, Computational Physicist, Weapons and Complex Integration, Lawrence Livermore National Laboratory
The main difficulty in turbulence modeling is capturing the effects of the sub-grid-scale stresses on the overall flow. This is especially challenging for turbulent flow past a wall, as typical large eddy simulation (LES) models do not capture the wall stresses. For my project, I implemented a wall-stress model in an LLNL code to handle this type of flow.
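As a rough illustration of how a wall-stress closure supplies the missing stress, here is a minimal sketch of a generic equilibrium (log-law) wall model in Python; it is not necessarily the specific model implemented in MIRANDA, and the constants kappa and B are the usual assumed log-law values.

    import numpy as np

    def log_law_wall_stress(U, y, nu, rho, kappa=0.41, B=5.2, iters=50):
        """Invert the log law u/u_tau = (1/kappa)*ln(y*u_tau/nu) + B for the
        friction velocity u_tau, given the LES velocity U sampled at a wall
        distance y, then return the modeled wall shear stress tau_w."""
        u_tau = max(np.sqrt(nu * U / y), 1e-12)  # laminar-flow initial guess
        for _ in range(iters):                   # Newton iteration
            y_plus = y * u_tau / nu
            f = u_tau * (np.log(y_plus) / kappa + B) - U
            df = (np.log(y_plus) + 1.0) / kappa + B
            u_tau -= f / df
        return rho * u_tau**2

    # Example: velocity of 10 m/s sampled 1 mm off the wall in an air-like fluid.
    print(log_law_wall_stress(U=10.0, y=1e-3, nu=1.5e-5, rho=1.2))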
Accelerating Computations with Radial Basis Functions in MFEM with Tensor Decompositions
Lucy Brown, Stanford University
Practicum Year: 2023
Practicum Supervisor: Brody Bassett, Computational Physicist, Design Physics Division, Lawrence Livermore National Laboratory
In this project, we investigate the viability of adding tensor-decomposed radial basis functions in the MFEM code.
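The appeal of tensor decompositions here comes from the fact that common radial basis functions factor across coordinate directions; below is a minimal sketch illustrating only this separability property, not the MFEM implementation itself.

    import numpy as np

    # A Gaussian RBF centered at c separates exactly into a product of 1D factors,
    #   exp(-eps^2 * ||x - c||^2) = prod_d exp(-eps^2 * (x_d - c_d)^2),
    # which is the structure a tensor decomposition can exploit.
    rng = np.random.default_rng(0)
    x, c, eps = rng.normal(size=3), rng.normal(size=3), 0.7
    full = np.exp(-eps**2 * np.sum((x - c) ** 2))
    separated = np.prod(np.exp(-eps**2 * (x - c) ** 2))
    assert np.isclose(full, separated)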
Could recent cooling in the East Pacific be due to natural variability?
Zachary Espinosa, University of Washington
Practicum Year: 2023
Practicum Supervisor: Mark Zelinka, Staff Scientist, Program for Climate Model Diagnosis & Intercomparison, Lawrence Livermore National Laboratory
In recent decades, observed sea surface temperature (SST) trends have been characterized by cooling in the eastern tropical Pacific and Southern Ocean and warming in the western tropical Pacific. Global climate models fail to reproduce this pattern of warming in historical simulations. Recent studies have used large ensembles to show that the discrepancy between modeled and observed historical SST trends is very unlikely to be due to internal variability. Using a multi-model intercomparison and a surface energy balance decomposition, we show that the magnitude of the shortwave cloud feedback (SWCF) in the tropical southeast Pacific explains, to first order, 1) the intermodel spread in the magnitude of southeast Pacific multi-decadal SST variability and 2) the strength of Southern Ocean and tropical Pacific coupling. We estimate that even when accounting for model bias in the SWCF, the likelihood of internal variability being the primary driver of recent east Pacific cooling is larger than most models predict.
A physics-based model of permanent set in solid network polymeric materials
Joshua Fernandes, University of California, Berkeley
Practicum Year: 2023
Practicum Supervisor: Mike Puso, Engineer, Lawrence Livermore National Laboratory
For my project, I was investigating permanent set, or long-lasting deformation after prolonged loading, in rubber-like polymeric materials. In particular, I was interested in developing a physics-based model that could capture permanent set using microscopic details of network polymers. This model ended up becoming a viscoelastic model, in which we used a hyperelastic material model to capture large deformations. The theory of the model was general to any underlying hyperelastic material model, and we implemented it in particular for the neo-Hookean model as a test case. We formalized the spatiotemporal discretization of the model for use within finite element libraries. I added the resulting material model within the Shared Material Library (SML) maintained at LLNL and tested it in simple model problems on Livermore Computing (LC) resources.
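For reference, one common compressible neo-Hookean strain-energy density (written here only to fix notation; the exact form used in the practicum model may differ) is

\[
W(\mathbf{F}) = \frac{\mu}{2}\,(I_1 - 3) - \mu \ln J + \frac{\lambda}{2}\,(\ln J)^2,
\qquad I_1 = \operatorname{tr}(\mathbf{F}^{\mathsf T}\mathbf{F}), \quad J = \det\mathbf{F},
\]

where F is the deformation gradient and mu, lambda are material constants; the stress follows by differentiating W with respect to the deformation.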
Towards a RANS turbulence capability in Marbl for shock-driven Richtmyer-Meshkov instabilities
Victor Zendejas Lopez, California Institute of Technology
Practicum Year: 2023
Practicum Supervisor: Robert Rieben, Staff Scientist/Project Lead, WCI, Design Physics Division, WSC Program, Lawrence Livermore National Laboratory
The long-term goal of my project is to add an additional Reynolds-averaged Navier-Stokes (RANS) closure model to MARBL (a high-order finite element, arbitrary Lagrangian-Eulerian code). RANS calculations matter because engineers and scientists want to use tools such as MARBL for quick, iterative design of inertial confinement fusion (ICF) capsules that reduces the growth of hydrodynamic instabilities. These instabilities act as an energy sink, diverting energy from ignition and thereby decreasing the overall yield. Attempting to understand how these instabilities develop and grow by fully resolving all turbulence length and time scales is prohibitively expensive and hinders the efficient exploration of ICF capsule designs. However, in order to validate the RANS model, several high-fidelity 3D turbulent simulations of the Richtmyer-Meshkov instability need to be conducted. This was the primary focus of my practicum this past summer.
Developing Algorithms for Thermal-Mechanical Time Integration
Justin Porter, Rice University
Practicum Year: 2022
Practicum Supervisor: Michael Puso, Computational Physicist, Computational Engineering Division, Lawrence Livermore National Laboratory
I worked on implementing a time integration algorithm that exactly conserves energy for thermal-mechanical systems. The new algorithm can be used to more accurately simulate the long-term behavior of structures.
Data-driven approaches for interpretable NMR simulations: Probing local disorder in native oxide films on Ti
Kyle Bushick, University of Michigan
Practicum Year: 2021
Practicum Supervisor: Brandon Wood, Staff Scientist, Quantum Simulation Group - Materials Science, Lawrence Livermore National Laboratory
In this project, I generated a set of NMR spectra for nearly 40,000 atoms from over 400 amorphous TiO2 structures. This data was then used to explore a variety of machine learning architectures, with the most focus given to graph neural networks, for predicting the atomic-level NMR spectra based solely on the local atomic environment around each atom.
Automated Computational Steering Using Flexible In-Situ Triggers
Margaret Lawson, University of Illinois at Urbana-Champaign
Practicum Year: 2021
Practicum Supervisor: Matt Larsen, Scientific software developer, Computing, Lawrence Livermore National Laboratory
In this project I worked with Ascent, an in-situ analysis and visualization library for HPC simulations. I extended Ascent's query expressions system to support a broader range of in-situ triggers. Working with my advisors, we demonstrated how this added functionality can be used to produce significant savings, in terms of core hours and human resources, for scientists working with complex simulation codes.
Development and benchmarking of a massively parallel phase-field model for the study of diffusionless phase transitions in polycrystalline materials
Guy Moore, University of California, Berkeley
Practicum Year: 2021
Practicum Supervisor: Tae Wook Heo, Staff Scientist, Materials Science Division, Lawrence Livermore National Laboratory
During this practicum experience, and under the guidance of my practicum advisor, Dr. Tae Wook Heo, I implemented a massively parallel phase-field code in Python, with an additional C++ interface that is currently in development. This phase-field code can be used to study diffusionless structural phase transitions (PTs) in solids, based on the numerical and theoretical framework developed by Dr. Heo and collaborators. This framework couples structural order parameters to strain based on microelasticity theory. The methodology can be extended to a wide range of diffusionless, or martensitic, structural phase transitions. In this study, we tested the framework on the transformation kinetics and microstructure of the following structural phase transformations: face-centered-cubic (FCC) to body-centered-cubic (BCC), hexagonal-close-packed (HCP) to FCC, and FCC to monoclinic.
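Schematically, structural phase-field kinetics of this kind follow the time-dependent Ginzburg-Landau (Allen-Cahn) form (generic notation; the specific free-energy functional follows the framework of Heo and collaborators):

\[
\frac{\partial \eta_p}{\partial t} = -L\,\frac{\delta F}{\delta \eta_p},
\qquad
F = \int_V \Big[ f_{\mathrm{bulk}}(\{\eta_p\}) + \sum_p \frac{\kappa_p}{2}\,|\nabla \eta_p|^2 + e_{\mathrm{el}}(\{\eta_p\}, \boldsymbol{\varepsilon}) \Big]\, dV,
\]

where the eta_p are structural order parameters, L is a kinetic coefficient, and e_el is the microelastic strain energy that couples the order parameters to strain.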
Moment Methods for Thermal Radiative Transfer
Samuel Olivier, University of California, Berkeley
Practicum Year: 2021
Practicum Supervisor: Terry Haut, Staff Scientist, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
Moment methods are an efficient scheme for solving the radiation transport equation, a crucial component of modeling Inertial Confinement Fusion (ICF). In such methods, the high-dimensional radiation transport equation is coupled to a PDE formed from the angular moments of the transport equation. These moment equations are lower-dimensional and can be directly coupled to other multiphysics components. The primary research difficulty is the development of discretizations for the moment equations that are amenable to efficient implementation on high performance computers. In my practicum, I developed discretizations for the Variable Eddington Factor (VEF) method and the so-called Second Moment method (SMM).
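Written schematically in gray form, the first two angular moments of the transport equation are

\[
\frac{\partial E}{\partial t} + \nabla\!\cdot\!\mathbf{F} = c\,\sigma_a\,(a T^4 - E),
\qquad
\frac{1}{c^2}\frac{\partial \mathbf{F}}{\partial t} + \nabla\!\cdot\!\big(\mathbf{T}\,E\big) = -\frac{\sigma_t}{c}\,\mathbf{F},
\qquad
\mathbf{T} = \frac{\int \boldsymbol{\Omega}\,\boldsymbol{\Omega}\,\psi\, d\Omega}{\int \psi\, d\Omega},
\]

where E and F are the radiation energy density and flux, and the Eddington tensor T, computed from the high-dimensional transport solution, supplies the VEF closure; SMM instead closes the system with an additive correction source computed from the transport solution. (This is generic notation, not the exact discrete system studied in the practicum.)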
Variable Eddington Factor Method for Thermal Radiative Transfer
Samuel Olivier, University of California, Berkeley
Practicum Year: 2020
Practicum Supervisor: Terry Haut, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
I investigated solution techniques for the linear system generated in the discretization of the Variable Eddington Factor (VEF) equations. The solution of this linear system is required in the innermost loop of a thermal radiative transfer (TRT) solve and is thus crucial to performance. Due to the difficulty inherent to the VEF equations and their discretization, many traditional techniques failed to be effective. This investigation pinpointed the key issues preventing the solution techniques from being effective. In addition, a solution technique was found that is effective for first- and second-order finite element discretizations. This solver satisfies the near-term goals of the LLNL project I was a part of and has the potential to be an important algorithm for next-generation TRT codes at LLNL.
Modeling physical origins of dephasing in open qubit systems
Dipti Jasrasaria, University of California, Berkeley
Practicum Year: 2019
Practicum Supervisor: Vincenzo Lordi, Group Leader, Materials Science Division, Lawrence Livermore National Laboratory
One of the main challenges facing quantum computers is that, due to environmental noise, qubit states lose information before they can be reasonably manipulated and measured. Using the Lindblad formalism, I developed a model that explicitly describes the two-level system (TLS) defect states that couple to the qubit and lead to its eventual decoherence. Simulations can be used to understand the physical origins of noise that cause T1 and T2 decoherence processes in superconducting qubits. This work concludes that the distributions of TLS energies, dipoles, and decay rates and of qubit electric fields dictate decoherence timescales.
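For reference, the Lindblad master equation for the reduced density matrix rho has the generic form

\[
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho]
+ \sum_k \gamma_k \Big( L_k\,\rho\,L_k^{\dagger} - \tfrac{1}{2}\,\{ L_k^{\dagger} L_k,\ \rho \} \Big),
\]

where the Hamiltonian H and the jump operators L_k (with rates gamma_k) encode, in this work, the qubit, the TLS defects, and their couplings.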
Non-equilibrium properties of complex metal hydrides from molecular dynamics simulations
Jonas Kaufman, University of California, Santa Barbara
Practicum Year: 2019
Practicum Supervisor: Brandon Wood, Staff Scientist, Materials Science, Lawrence Livermore National Laboratory
The project focused mainly on studying the surface properties of complex metal hydrides using large-scale ab initio molecular dynamics (AIMD) simulations. Surface dynamics, among other nonequilibrium effects, play an important role in the behavior of nanoscale hydrides currently being investigated as solid-state hydrogen storage materials. The ultimate goal is to use AIMD results within a thermodynamic model to identify the surface entropy and enthalpy contributions for a wide range of hydride compounds. A secondary project examined the effects of disorder and volume expansion on hydrogen transport in a specific hydride system (Na-Al-H), as these conditions may approximate the material at grain boundaries or interfaces.
Collaborative Autonomy for Space Situational Awareness
Julia Ebert, Harvard University
Practicum Year: 2018
Practicum Supervisor: Michael Schneider, Research Scientist, Physics, Lawrence Livermore National Laboratory
Tracking satellites is an important component of space situational awareness (SSA). However, current ground-based tracking approaches rely on centralized detection and require hours to accurately estimate an orbit. A constellation of low-cost, autonomous cube satellites could provide a fast, robust, decentralized architecture for SSA. We propose distributed particle filters as a method to iteratively refine orbit estimates with low communication bandwidth. We demonstrate the feasibility of this approach by implementing our algorithm in simulation. This simulator can also be used to evaluate the parameter space for future satellite constellation design, as well as to test the system's robustness to failures.
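For concreteness, a minimal, generic bootstrap particle filter step looks like the sketch below (the orbit-propagation and measurement-likelihood callbacks are hypothetical placeholders, not the project's actual dynamics or sensor models):

    import numpy as np

    def particle_filter_step(particles, weights, measurement,
                             propagate_orbit, measurement_likelihood, rng):
        """One predict/update/resample cycle of a bootstrap particle filter.
        particles: (N, d) array of orbit-state hypotheses; weights: (N,) array."""
        # Predict: push each hypothesis through the (stochastic) dynamics model.
        particles = np.array([propagate_orbit(p, rng) for p in particles])
        # Update: reweight by how well each hypothesis explains the measurement.
        weights = weights * np.array(
            [measurement_likelihood(measurement, p) for p in particles])
        weights /= weights.sum()
        # Resample: concentrate particles on the high-probability orbits.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))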
Robust anomaly detection with GANs
Jenelle Feather, Massachusetts Institute of Technology
Practicum Year: 2018
Practicum Supervisor: Jayaraman J. Thiagarajan, Computer Scientist, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
Generative adversarial networks (GANs) are a class of models in which two neural networks compete against each other. The job of the first is to create realistic-looking data (i.e., images, time series, etc.) by capturing the distribution of a training set. The second neural network is trained to determine whether an input it receives is real (coming from the training set) or fake (coming from the generator network). Recent work has shown that by training these two networks simultaneously, one is able to create realistic-looking images. Anomaly detection is a common problem to which machine learning methods have been applied: given a collection of data, the goal is to tell whether a given sample is "anomalous" with respect to the true distribution (one classic case is spam messages in email, which fall outside the normal space of emails). During my practicum we attempted to use GANs to create better anomaly detection models. Previous work has suggested that the discriminator in a GAN could be used to detect anomalies, as it has been trained to distinguish real data from fake data. In practice, however, these methods do not work particularly well. We explored ways of improving this GAN-based anomaly detection method, specifically by considering the case where multiple generators produce anomalous samples in addition to the real samples, so that the discriminator boundary is tightened. Another approach is to make the discriminator itself imperfect, so that it better captures the features of anomalous samples and a metric can be more finely tuned between the normal and anomalous samples.
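For reference, the standard GAN training objective pits the generator G against the discriminator D:

\[
\min_G \max_D \;\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big];
\]

one common anomaly-scoring strategy then thresholds the trained discriminator's output D(x) (or a related statistic) on new samples.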
Using high-performance computing to build a better model of self-organized snow
Kelly Kochanski, University of Colorado
Practicum Year: 2018
Practicum Supervisor: Barry Rountree, Computer Scientist, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
This summer, I worked with the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory. I brought in a scientific model, ReSCAL, which I have used in my thesis research to model the growth of snow dunes. I worked with my mentor Barry Rountree to optimize this model for use on LLNL supercomputers. I also served as scientific mentor to five undergraduate and recent graduate interns, who worked on several projects related to ReSCAL.
Four Dimensional Continuum Kinetic Modeling of Magnetized Plasmas
Ian Ochs, Princeton University
Practicum Year: 2018
Practicum Supervisor: Dick Berger, Research scientist, WCI, Lawrence Livermore National Laboratory
At Livermore, I worked with Dr. Richard Berger to develop kinetic simulations of magnetized plasmas. I adapted the group's code, LOKI, to include a constant, user-specified background magnetic field. This allows the code to examine two-dimensional physics problems in magnetized plasmas, including the propagation of magnetized waves in regions of steep density gradients. These simulations should provide insight into the processes by which charge can be transported perpendicular to magnetic field lines in fusion plasmas.
A Rayleigh Quotient Method for Criticality Eigenvalue Problems in Neutron Transport
Mario Ortega, University of California, Berkeley
Practicum Year: 2018
Practicum Supervisor: Teresa S. Bailey, Deterministic Transport Project Lead, Advanced Simulation and Computing, Lawrence Livermore National Laboratory
We continued study and development of a new fixed-point method to determine the eigenvector and eigenvalue that describe the asymptotic-in-time behavior of the neutron flux in a nuclear system. The sign of the eigenvalue determines whether the neutron flux decreases, increases, or remains steady in time. Previous methods used to calculate these eigenvalues suffered from possible instabilities related to the insertion of "negative absorption," a description of the fact that the eigenvalue can be negative even when all physical parameters of the problem are positive. This negative source can cause iterative schemes to return non-positive neutron fluxes, a nonphysical result. The fixed-point method we developed prevents this from occurring.
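In generic notation, for a generalized eigenvalue problem A phi = lambda B phi, the Rayleigh quotient underlying updates of this kind is

\[
\lambda(\varphi) = \frac{\langle \varphi^{\ast},\, A\,\varphi \rangle}{\langle \varphi^{\ast},\, B\,\varphi \rangle},
\]

where phi* is a weighting (for example, adjoint) function; alternating a flux solve with this quotient drives lambda and phi toward the fundamental mode. The specific transport operators used in this work are not reproduced here.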
Higher-Order Advection-Based Remap of Magnetic Fields in an Arbitrary Lagrangian-Eulerian Code
Brian Cornille, University of Wisconsin-Madison
Practicum Year: 2017
Practicum Supervisor: Dan White, Engineer, Computational Engineering Division, Lawrence Livermore National Laboratory
We present methods formulated for the Eulerian advection stage of an arbitrary Lagrangian-Eulerian code, supporting the new addition of magnetohydrodynamic (MHD) effects. The various physical fields are advanced in time using a Lagrangian formulation of the system. When this Lagrangian motion produces substantial distortion of the mesh, it can become difficult or impossible to advance the simulation. This is overcome by relaxing the mesh while the physical fields are frozen. The code has already been successfully extended to include magnetic field diffusion during the Lagrangian motion stage. The magnetic field is discretized using an H(div)-compatible finite element basis. The advantage of this basis is that the divergence-free constraint on the magnetic field is maintained exactly during the Lagrangian motion. Our goal is to preserve this property during Eulerian advection as well. We will demonstrate this property and the importance of MHD effects in several numerical experiments. In pulsed-power experiments, magnetic fields may be imposed or spontaneously generated; when they are present, the evolution of the experiment may differ from a comparable configuration without magnetic fields.
DRL4ALE: DRL Prediction of Expert Mesh Relaxation in ALE
Noah Mandell, Princeton University
Practicum Year: 2017
Practicum Supervisor: Ming Jiang, Computer Scientist, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
Arbitrary Lagrangian-Eulerian (ALE) methods are used in many of the LLNL multi-physics codes. In this method, the mesh can follow the fluid (Lagrangian), or the mesh can relax so that the fluid flows through the mesh (Eulerian). Deciding where and how much to relax the mesh so that simulations are both stable and physically accurate is a significant challenge for users. The goal of the DRL4ALE project is to use deep reinforcement learning to solve and automate this mesh relaxation problem.
Algebraic Multigrid Preconditioners for High-order Finite Element solvers
Thomas Anderson, California Institute of Technology
Practicum Year: 2016
Practicum Supervisor: Tzanio Kolev, Computational Mathematician, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
The project focused on developing preconditioners that allow for the efficient solution of high-order finite element systems. Future DOE compute facilities will rely on enormous computational resources but limited, and perhaps slower, memory. High-order finite elements, for sufficiently regular problems, yield a more efficient solution on such architectures in that there are more flops per byte of memory accessed. The challenge is in designing efficient solvers, especially preconditioners, for these higher-order problems. We chose an approach centered on existing frameworks that LLNL uses: the algebraic solver techniques in Hypre and the finite element discretizations available in MFEM. Our work involved discretizations of both similar operators and identical operators on special auxiliary meshes, with the aim of achieving a sparse approximation.
Detection of CRISPR genetic engineering in pathogen genomes
Hannah De Jong, Stanford University
Practicum Year: 2016
Practicum Supervisor: Tom Slezak, Associate Program Leader, Bioinformatics, Global Security Program, Lawrence Livermore National Laboratory
Sequencing-based biosurveillance has the potential to identify pathogens in populated areas before they become widespread. However, future bioterrorism agents could include pathogens that have been engineered by technologies like CRISPR/Cas9, and would not necessarily be detected by existing surveillance systems. The goal of this project was to utilize genomic signatures left behind by CRISPR/Cas9 for detection of CRISPR/Cas9 engineering in pathogen genomes.
Energy conserving, linear scaling, real space first principles molecular dynamics.
Ian Dunn, Columbia University
Practicum Year: 2016
Practicum Supervisor: Jean-Luc Fattebert, Computer Scientist, Project Leader, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
In first principles molecular dynamics (FPMD) simulations, the electrons are modeled using quantum mechanics. This quantum mechanical model requires the solution of the Kohn-Sham equations at each time step in the simulation. To maintain energy conservation over many time steps, traditional FPMD methods require costly tightly-converged Kohn-Sham solutions. It has been demonstrated that one can obtain energy-conserving molecular dynamics without requiring tightly-converged Kohn-Sham solutions through the use of extended-Lagrangian Born-Oppenheimer molecular dynamics (XLBOMD) methods. For my practicum, I adapted and extended the theory of XLBOMD for use with the linear-scaling real space FPMD methods developed at LLNL. We are currently running simulations to quantify the speedup obtained by using XLBOMD.
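Schematically, XLBOMD propagates an auxiliary electronic density n alongside the nuclei with a harmonic restraint toward the self-consistent density rho, for example via a Verlet-type update (generic notation from the XLBOMD literature, not the LLNL implementation specifically):

\[
\ddot{n} = \omega^2\,\big(\rho[n] - n\big)
\quad\Longrightarrow\quad
n_{k+1} = 2\,n_k - n_{k-1} + \delta t^2\,\omega^2\,\big(\rho_k - n_k\big),
\]

so that only a loosely converged rho_k is needed at each step while the total energy remains well conserved over long trajectories.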
Reproducing Kernel Method Extensions to ASPH
Nicholas Frontiere, University of Chicago
Practicum Year: 2014
Practicum Supervisor: John Michael Owen, S&E MTS 4, AX, Lawrence Livermore National Laboratory
We investigated the application of reproducing kernel method extensions to ASPH (Adaptive Smoothed Particle Hydrodynamics) [1, 2] as a method of modeling compressible fluid dynamics. Reproducing kernel methods [3, 4, 5] address major shortcomings in traditional SPH methods, allowing exact interpolation of fields across the interpolation points, regardless of how disordered the points become, or the presence of boundaries. Our goal was to test how well these higher order extensions of the traditional SPH approach model complex compressible fluid flows, including shear layers and strong shock scenarios.
[1] Owen, Villumsen, Shapiro & Martel, ApJS, v 116, pp 155-209 (1998)
[2] Owen, Proceedings of the Fifth International SPHERIC Workshop, Manchester, U.K. (2010)
[3] Jun, Liu & Belytschko, Int. J. Numer. Meth. Engng., v 41, pp 137-166 (1998)
[4] Bonet & Kulasegaram, Int. J. Numer. Meth. Engng., v 47, pp 1189-1214 (2000)
[5] Bonet et al., Comput. Methods Appl. Mech. Engrg., v 193, pp 1245-1256 (2004)
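In outline, the reproducing kernel correction referenced above replaces the bare SPH kernel W with a corrected kernel chosen so that low-order polynomials are reproduced exactly (generic form; see [3-5] for details):

\[
u^h(\mathbf{x}) = \sum_i C(\mathbf{x};\, \mathbf{x}-\mathbf{x}_i)\, W(\mathbf{x}-\mathbf{x}_i)\, V_i\, u_i,
\]

where V_i are particle volumes and the correction function C, typically a low-order polynomial in x - x_i, is determined pointwise from discrete moment conditions, restoring exact interpolation even for disordered points and near boundaries.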
Cardioid - a high-resolution computational model of the deforming heart
Omar Hafez, University of California, Davis
Practicum Year: 2014
Practicum Supervisor: David Richards, Physicist, CMMD, Lawrence Livermore National Laboratory
Cardioid is a highly efficient and scalable code that models the electrophysiology and mechanics of the human heart for unprecedented exploration of the mechanisms of sudden cardiac arrest from arrhythmia.
Optimizing memory traffic in pF3D
Eileen Martin, Stanford University
Practicum Year: 2014
Practicum Supervisor: Steve Langer, Physicist, AX, Lawrence Livermore National Laboratory
Memory bandwidth limitations are a growing problem affecting pF3D, a large-scale multi-physics code simulating laser-plasma interactions. Implementing lossy hardware compression between DRAM and cache could improve bandwidth use in the future. pF3D has been shown to be resilient to errors introduced by lossy compression during simulation when comparing the resulting physically meaningful quantities. I explored strategies to optimize the code if this hardware change were to be implemented, and predicted approximately how fast hardware compression would need to be on a BG/Q-like architecture to benefit from such a change.
Analysis of energy transfer between spatial scales in a turbulent, coaxial jet
Daniel Rey, University of California, San Diego
Practicum Year: 2014
Practicum Supervisor: Gregory Burton, Computational Engineering Division, Lawrence Livermore National Laboratory
For my practicum project, I analyzed data from the largest-ever (more than 1 billion grid points) direct numerical simulation of high Schmidt number mixing (mixing of weakly diffusive scalars) in a turbulent coaxial jet. The simulation used the nonlinear large eddy simulation method, developed by my practicum supervisor, to model the smallest-scale features, which could not be resolved by the simulation. The goal was to understand how the jet structure transports conserved quantities like energy, momentum, and scalar concentration between the length scales involved in the simulation. In particular, empirical evidence suggests the existence of uniformly distributed domains of a characteristic length scale, which correspond to forward (large to small) and backward (small to large) transport of energy and scalar concentration. One of the main goals was to investigate the source of these domains, to enhance our understanding of the complex flow structures generated by turbulent mixing.
Laser-Induced Plate Buckling
Omar Hafez, University of California, Davis
Practicum Year: 2013
Practicum Supervisor: Dr. James S. Stolken, Staff Scientist, Computational Engineering Division, Lawrence Livermore National Laboratory
Experiments showed that the buckling direction of a clamped aluminum plate exposed to a laser beam on one side depended on the relative size of the laser beam to that of the plate. It was not clear that finite element simulations were correctly modeling the problem. A reduced order model was developed to reduce the relevant parameter set and better analyze the mechanisms behind this thermal buckling phenomenon.
High-density additively manufactured parts using integrated data mining and simulation
Aaron Sisto, Stanford University
Practicum Year: 2013
Practicum Supervisor: Chandrika Kamath, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
The emerging technology of additive manufacturing with metals has the potential to completely disrupt conventional design and fabrication processes across virtually every industry. The additive manufacturing process involves producing metal parts layer by layer, using a high-energy laser beam to fuse the metallic powder particles. The novelty of this process lies in the ability to computationally design parts with unique properties and shapes and then upload the design to machines where the part is produced autonomously. The unique degree of control and precision in this selective laser melting additive manufacturing process allows for mass production of new and existing metal parts, with quality approaching that of wrought alloys. Compared to traditional manufacturing techniques, additive manufacturing presents a revolutionary opportunity to drastically reduce manufacturing time, cost, and energy consumption. The primary obstacle to widespread adoption of additive manufacturing is the development of reliable process parameters to ensure robustness and reduce certification time of manufactured materials. However, predictive modeling has become an invaluable tool for refining the fabrication process and product quality. By integrating simulation and data mining techniques with in situ characterization and process control, optimal process parameters and material properties can be achieved, ensuring robustness and reliability of metal parts over a wide range of application areas. The primary focus of my practicum was to develop new methods of data mining to examine data taken directly from the additive manufacturing process and identify key process parameters influencing the density of metal parts, a crucial material property that has been especially difficult to control. This objective is particularly difficult because the correlation between process parameters and part density is often highly nonlinear, with factors such as the high dimensionality of the parameter space and noisy data acquisition introducing additional complexity. I used new data mining techniques to reliably identify the process parameters most important in modulating part density. From this analysis, and using predictive modeling to understand the microscopic details of the laser melting process, a new set of guidelines was developed to ensure high density (>99%) in additively manufactured metal parts using 316L stainless steel.
Rank Reduction Methods in Nuclear Structure Theory
Robert Parrish, Georgia Institute of Technology
Practicum Year: 2012
Practicum Supervisor: Nicolas Schunck, Research Staff, Nuclear Theory and Modeling Group, Lawrence Livermore National Laboratory
This practicum involved carrying tensor decomposition techniques over from electronic structure problems in chemical physics to nucleonic structure problems in nuclear physics. In particular, I extended the recently developed Tensor HyperContraction (THC) ansatz of chemical physics for application in nuclear density functional computations. This work produced two distinct and surprising results. In nuclear physics, the newly developed X-THC representation provides a lossless compression scheme for the potential integral tensor, allowing the tractable use of arbitrary local potentials for the first time. In chemical physics, understanding of this X-THC representation provides a formal physical justification for the previously phenomenological approximate THC variants in molecular basis sets. These developments have opened a number of very interesting research directions in both fields.
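For context, in the electronic-structure notation of the THC literature the two-electron integral tensor is factored as

\[
(pq|rs) \;\approx\; \sum_{P,Q} X_p^{P}\, X_q^{P}\, Z^{PQ}\, X_r^{Q}\, X_s^{Q},
\]

reducing a fourth-order tensor to products of second-order factors; the X-THC representation described above plays the analogous role for the potential integral tensor in nuclear density functional theory, where the factorization is lossless.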
Constant Pressure Quantum Monte Carlo for the Study of Hydrogen at High Pressures
Brenda Rubenstein, Columbia University
Practicum Year: 2012
Practicum Supervisor: Berni Alder, Consultant and Lawrence Livermore, Quantum Simulations Group, Lawrence Livermore National Laboratory
Over the course of this practicum, I finished developing a new fully quantum constant-pressure Quantum Monte Carlo algorithm useful for the study of high pressure quantum solids. Hydrogen at high pressures is predicted to form a number of exotic phases characterized by unit cells with complex translational and rotational ordering and related complex physics. Precisely what these phases are and where they are located on the hydrogen phase diagram, however, remains an open question. Experiments aiming to determine hydrogen's phase diagram are severely challenged by hydrogen's highly quantum character as well as by the practical difficulty of measuring anything at several gigapascals. The task of delineating the hydrogen phase diagram has therefore been left to theory. The first theoretical predictions of the hydrogen phase diagram at high pressures have been made using density functional theory. However, because density functional theory is inexact by definition and can yield wildly different answers depending upon the functional and corrections used, a more definitive technique is needed. During my practicum, I developed and tested the first, in principle, exact technique that allows one to determine the hydrogen phase diagram free of any previous information about hydrogen's structure.
Design of numerical linear algebra kernels for QBox (DFT)
Edgar Solomonik, University of California, Berkeley
Practicum Year: 2012
Practicum Supervisor: Todd Gamblin, Dr, CASC, Lawrence Livermore National Laboratory
I designed and integrated a parallel matrix multiplication algorithm into QBox, a density functional theory (DFT) application for computing the electronic structure of systems. My algorithm utilized topology-aware mapping coupled closely to the BlueGene/Q architecture. The design also makes run-time decisions to minimize communication, via data replication and algorithmic choices. In particular, we built on previous work by considering rectangular matrices, which are the matrices QBox operates on. This algorithm has been integrated into the application in the form of an external library, achieves petaflop performance on the Sequoia machine, and is currently being used for production science calculations. We are currently attempting to extend these methods to architect a parallel communication-avoiding symmetric eigenvalue/eigenvector solver, which is also necessary for DFT calculations. Towards the end of the fellowship, we came up with a new parallel algorithm for tall-skinny QR (a key decomposition for a symmetric eigensolver), but it has proven to be unstable. We are continuing work on this subject.
Molecular Dynamics concerning crystal growth in non-equilibrium flows including shear flow.
Mary Benage, Georgia Institute of Technology
Practicum Year: 2011
Practicum Supervisor: James Belak, Physicist, Condensed Matter and Materials Division, Lawrence Livermore National Laboratory
The project goal was to look at crystal growth in silicas. We planned to apply shear flow conditions and calculate fluid properties and how they change with increasing crystal volume fraction. Specifically, we wanted to see if and when the fluid changes from Newtonian to non-Newtonian. This is important for better understanding fluid properties in magma chambers.
Modeling Mixing in Rayleigh-Taylor Instabilities
Sanjeeb Bose, Stanford University
Practicum Year: 2011
Practicum Supervisor: Oleg Schilling, Physicist, AX, Lawrence Livermore National Laboratory
The Rayleigh-Taylor (RT) instability has been identified as one of the critical issues in the design of inertial confinement fusion (ICF) targets for use at the National Ignition Facility (NIF). Prediction of the multi-fluid mixing in the targets is necessary to help design optimal targets. Actual engineering design calculations are unable to afford the computational expense of direct numerical simulations, and as a result, models must be provided to describe the mixing at the unresolved scales. The practicum work developed a three dimensional code to evaluate the accuracy of models of mixing in the RT instability.
Plasma Transport Coefficients for Molecular Dynamics Simulations
Amanda Randles, Harvard University
Practicum Year: 2011
Practicum Supervisor: David Richards, ISCR, Lawrence Livermore National Laboratory
I implemented a method for calculating the thermal conductivity of high-density hydrogen plasmas in the ddcMD molecular dynamics package from Lawrence Livermore. We drew from quantum mechanical methods and applied them in an MD regime.
A Constant Pressure Quantum Algorithm for Exploring the Hydrogen Phase Diagram at High Pressures
Brenda Rubenstein, Columbia University
Practicum Year: 2011
Practicum Supervisor: Berni Alder, Consultant and Lawrence Livermore, Quantum Simulations Group, Lawrence Livermore National Laboratory
A Modified Treatment of Sources in Implicit Monte Carlo Radiation Transport
Travis Trahan, University of Michigan
Practicum Year: 2011
Practicum Supervisor: Nicholas A. Gentile, Physicist, AX Division, WCI, Lawrence Livermore National Laboratory
The Implicit Monte Carlo (IMC) method is a widely used method for simulating thermal radiation transport. Like all Monte Carlo methods, it relies on pseudo-random number generators to determine the outcome of various physical events (photon scattering, photon emission, etc.). Because of this, statistical noise is inherent to any Monte Carlo solution. In the case of IMC, the noise in a 0-dimensional, frequency-independent problem is entirely due to the random sampling of photon emission times as photons are emitted from the hot material. We have developed an analytic way of treating the photon emission time that does not rely on random sampling. As a result, the noise due to emission time sampling can be eliminated. For more difficult problems, noise exists due to the random sampling of other variables. However, the new treatment may reduce the total noise in the solution for certain problems.
A coupled Maxwell-Schrodinger code to simulate the optical properties of nanostructures
Ying Hu, Rice University
Practicum Year: 2010
Practicum Supervisor: Daniel White and Tiziana Bond, Computational Engineering Group Leader (Dan), Engineering Technologies Division, Lawrence Livermore National Laboratory
In this project, we prototyped a 1D code that partially couples an electromagnetic simulation with a quantum mechanics model. The main goal is to simulate nanostructures without using the dielectric function of the bulk material, as quantum effects are not characterized by bulk material properties. We obtained electromagnetic potentials from the Maxwell equations and fed them to the Schrodinger equation to calculate the wave function. We then computed the quantum current from the wave function and substituted this current back into Maxwell's equations as the source term. We iterated the entire process using the same time step and spatial grid points. We implemented the code for a 1D model in which an electron is confined in the presence of an incident field, and we computed the polarizability of the electron.
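The iteration described above can be sketched as a single coupled step in Python (the four callbacks are hypothetical placeholders standing in for the actual 1D update routines, which are not reproduced here):

    def coupled_step(fields, psi, dt,
                     quantum_current, maxwell_step, potentials, schrodinger_step):
        """One cycle of the Maxwell-Schrodinger coupling loop, on a shared
        time step dt and spatial grid."""
        j = quantum_current(psi)                 # current computed from the wave function
        fields = maxwell_step(fields, j, dt)     # advance Maxwell with that source term
        phi, a = potentials(fields)              # extract scalar and vector potentials
        psi = schrodinger_step(psi, phi, a, dt)  # advance the Schrodinger equation
        return fields, psi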
Hydrodynamic analysis of LIFE (Laser Inertial Fusion Engine)
Britton Olson, Stanford University
Practicum Year: 2010
Practicum Supervisor: Andrew Cook, Physicist, AX, Lawrence Livermore National Laboratory
I played a key role in the analysis of initial and developing designs for the LIFE project. LIFE will follow the NIF campaign at LLNL to exploit ICF technology and develop a power system around it. The development stage is very early, and the LIFE team, which comprises experts from many fields, is seeking to develop tools and intuition that will aid the transition from design to manufacturing.
Uncertainty Quantification Strategic Initiative (UQSI): A pipeline for model ensemble selection.
Hayes Stripling, Texas A&M University
Practicum Year: 2010
Practicum Supervisor: Gardar Johannesson, Applied Statistician, A/X, Lawrence Livermore National Laboratory
I was a part of a lab-wide initiative to develop a "UQ Pipeline" for massive computational models. The overall goal of this pipeline is to automatically guide model-input sampling to improve the accuracy of predictive models. The UQSI is using a large-scale climate model, the Community Atmosphere Model (CAM), as a test-bed for the pipeline project. The CAM model takes hundreds of inputs and produces hundreds of time and space dependent output parameters, such as surface temperature and energy flux in the atmosphere. As one might expect, many of the inputs to the CAM model are highly uncertain, such as cloud and atmosphere parameters. The pipeline is designed to use results from previous runs of the model as well as observed and measured experimental data (such as satellite readings) to optimally choose the next set of these uncertain inputs such that the predictive capability of the CAM model will be improved. I was involved in the portion of the project that attempts to fuse observed or measured data with previous model outputs.
Looking for cats in bosonic flows
Gregory Crosswhite, University of Washington
Practicum Year: 2009
Practicum Supervisor: Jonathan DuBois, Dr., H-Division, Quantum Simulation Group, Lawrence Livermore National Laboratory
This summer I investigated the properties of rotating gases of bosons using the variational path integral Monte Carlo approach. One important objective of this project was to examine the conditions under which a coherent superposition of macroscopic flow states --- a so-called "cat" state in reference to Schrodinger's cat --- could be induced in such a system.
Ab-initio simulation of heavy-metal azide detonations under shocks
Alejandro Rodriguez, Massachusetts Institute of Technology
Practicum Year: 2008
Practicum Supervisor: Evan J. Reed, E. O. Lawrence Fellow, Materials Research, Lawrence Livermore National Laboratory
The goal of our project is to elucidate the various chemical and/or thermal detonation mechanisms in the yet theoretically unexplored heavy metal azides. Heavy metal azides are primary explosives that are often used to initiate detonation reactions in important applications such as mining. They can also be found, for example, in air bags. Unfortunately, primary explosives are difficult to understand experimentally due to their volatility, and theoretical calculations have never been performed due to the short timescales under which the azides detonate. Using a multi-scale ab-initio technique (recently developed by Evan Reed and collaborators), we hope to access the dynamical aspects of these explosives under shock conditions. The simulations should shed light into the various mechanisms involved during detonation, and ultimately reveal why these azides are so sensitive and why they release so much energy.
Ring opening dynamics of the sliding DNA clamp PCNA
Joshua Adelman, University of California, Berkeley
Practicum Year: 2007
Practicum Supervisor: Daniel Barsky, Permanent staff scientist (Computational Biochemis, Biology and Biotechnology Division; CMLS, Lawrence Livermore National Laboratory
PCNA acts as a molecular "tool belt" that enables DNA polymerase processivity during DNA replication and repair, while allowing accessory proteins access to the DNA in a regulated manner during a myriad of post-replicative processes. The molecular architecture of PCNA, like all sliding clamps, consists of six domains that form a ring that can accommodate the threading of duplex DNA through its central channel. Although all sliding clamps are stable in a closed planar ring conformation, in order to load onto DNA in a site-specific manner, the ring must disrupt a stable subunit-subunit interface to pass DNA into the central channel. An ATP-fueled motor, replication factor C (RFC), opens the ring and places it at a double-stranded/single-stranded junction. The mechanistic details of this process are still poorly understood. We have investigated ring opening using equilibrium and non-equilibrium molecular dynamics simulations. Removal of one subunit relaxes the closure constraint on the ring and allows for fluctuations at the dimer interface. Equilibrium simulations demonstrate that the dimer can relax into conformations consistent with both right- and left-handed spirals. To further investigate the energetics of ring opening, we have performed biased metadynamics simulations of the complete trimer to fully map out the free energy landscape governing this process. By applying a history-dependent biasing potential that deters the protein from revisiting previously sampled conformations, we effectively sample the ring-opening coordinates and are able to extract the associated free energy. The results of these simulations will allow us to predict the conformation of the open ring and trace out the mechanism of ring opening.
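The history-dependent bias referred to above has the standard metadynamics form, written here generically in a collective variable s describing ring opening:

\[
V_{\mathrm{bias}}(s, t) = \sum_{t' < t} w\, \exp\!\left( -\frac{\big(s - s(t')\big)^2}{2\sigma^2} \right),
\]

where Gaussians of height w and width sigma are deposited at previously visited values of s; at convergence the accumulated bias approximates the negative of the free energy profile along s.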
Identifying Genome Sequencing Problems Using Active Learning
Jeremy Lewi, Georgia Institute of Technology
Practicum Year: 2007
Practicum Supervisor: Mingkun Li, Joint Genome Institute, Lawrence Livermore National Laboratory
The Joint Genome Institute (JGI) had developed a naïve Bayes classifier to identify problems in the Sanger sequencing production line. Sanger sequencing produces short, fluorescently labeled DNA segments. The DNA sequence is found by measuring the fluorescence as the DNA passes by a detector. Problems in the production line often result in distinct signatures in the intensity signals measured by the detector. As a result, the JGI would like to develop methods for identifying specific production problems from the intensity signals. The JGI currently trains a naïve Bayes classifier on features extracted from the fluorescence signals. Training classifiers requires a set of labeled data; that is, a set of signals labeled with the production problems that are evident in each trace. Generating labeled data is expensive and time consuming. Only a small fraction of the data falls into the various classes that we would like to build classifiers for. As a result, building a training set generally requires an expert to manually cull through a large database in order to find traces belonging to each class we want to identify.
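As a minimal illustration of the classification step (scikit-learn is used here as a stand-in; the JGI pipeline's actual features and code are not reproduced, and the data below is a random placeholder):

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    # X: one row of trace-derived features per read; y: expert-assigned problem class.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))
    y = rng.integers(0, 3, size=200)

    clf = GaussianNB().fit(X, y)
    print(clf.predict(X[:5]))        # predicted production-problem classes
    print(clf.predict_proba(X[:5]))  # class posteriors for the same traces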
Particle Interactions in DNA-laden Flows
Michael Bybee, University of Illinois at Urbana-Champaign
Practicum Year: 2005
Practicum Supervisor: David Trebotich, Computational Scientist, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
Microfluidic devices are becoming state of the art in many significant applications including pathogen detection, continuous monitoring, and drug delivery. Numerical algorithms which can simulate flows of complex fluids within these devices are needed for their development and optimization. A method is being developed at Lawrence Livermore National Laboratory by Trebotich et al. for simulations of DNA-laden flows in complex, microscale geometries such as packed bed reactors and pillar chips. In this method an incompressible Newtonian fluid is discretized with Cartesian grid embedded boundary methods, and the DNA is represented by a bead-rod polymer model. The fluid and polymer dynamics are coupled through a body force.
Clustering Proteins According to Structural Features
Brent Kirkpatrick, University of California, Berkeley
Practicum Year: 2005
Practicum Supervisor: Carol L. Ecale Zhou, Group Leader, Bio-defense Informatics Group, Energy, Environment, and Biology Division (EEBI), Lawrence Livermore National Laboratory
An important goal for structural biology is developing methods that automatically insert new protein structures into the manually created SCOP classification of proteins. Our approach incorporates both clustering and prediction. The clustering methods detect groups of related proteins along with the structural features that they share. Each cluster of structures has a maximal set of shared structural features, or fingerprint. Comparison with the structural fingerprint determines whether a new structure belongs to the cluster. When given a group of proteins and a target protein, the LGA algorithm [1] creates one structural alignment of each protein to the target protein. We use the Gaussian Mixture Model (GMM) to cluster the proteins according to the structural regions they share with the target. Taking each of the proteins in turn as the target yields an ensemble of clusters, i.e., multiple partitions of the same set of proteins. Discrepancies are resolved by grouping together proteins that clustered together across many of the partitions. The test data comprises structures from the PDB (Protein Data Bank), which has about 23,000 unique structures ranging in resolution from 0.54A (X-ray structures) to greater than 15A (electron microscopy). Despite this noise, our robust clustering methods detect relationships on the level of the SCOP superfamily with 88% accuracy and a low false positive rate. Future work involves predicting the family and superfamily to which a new structure belongs. Initial results indicate that the fingerprint derived from a SCOP family can predict membership with almost complete accuracy. [1] A. Zemla, "LGA - a method for finding 3D similarities in protein structures", Nucleic Acids Research, Vol. 31, No. 13, 2003, pp. 3370-3374.
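A minimal sketch of the consensus-clustering idea described above, using scikit-learn's Gaussian mixture model and a co-association matrix (feature construction from the LGA alignments is omitted, and the array shapes are assumptions):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def consensus_clusters(feature_sets, n_components=3, seed=0):
        """feature_sets: list of (n_proteins, n_features) arrays, one per choice of
        target protein. Each is clustered with a GMM; proteins are then grouped by
        how often they land in the same cluster across the resulting partitions."""
        n = feature_sets[0].shape[0]
        co = np.zeros((n, n))
        for k, X in enumerate(feature_sets):
            labels = GaussianMixture(n_components=n_components,
                                     random_state=seed + k).fit_predict(X)
            co += (labels[:, None] == labels[None, :])
        return co / len(feature_sets)  # fraction of partitions in which pairs agree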
Ab initio Monte Carlo simulations of the vapor-liquid coexistence of water
Matthew McGrath, University of Minnesota
Practicum Year: 2005
Practicum Supervisor: Christopher Mundy, Technical Staff, Chemistry and Chemical Engineering Division, Lawrence Livermore National Laboratory
Efficient and highly parallel energy routines in the CP2K code were combined with Monte Carlo algorithms to explore mechanical and thermal properties of various quantum-mechanical descriptions of water, in an attempt to find a model that agrees well with experimental data. This is the first step in a project that will examine atmospherically relevant nucleation phenomena (the formation of acid rain) using an ab initio approach. The use of ab initio methods is desirable because they can cope with bond breakage and bond formation, and classical force fields, while significantly less expensive, are not able to reproduce a wide range of properties at a variety of state points.
Locally-optimal methods to solve eigenvalue problems in electronic structure calculations
Kristopher Andersen, University of California, Davis
Practicum Year: 2004
Practicum Supervisor: John E. Pask, Postdoctoral Research Physicist, H Division: Metals and Alloys Group, Lawrence Livermore National Laboratory
I implemented a new locally optimal method, recently developed by Andrew Knyazev and Richard Lehoucq, to solve large, sparse generalized eigenvalue problems using conjugate gradient (or steepest descent) iteration. The subroutine I wrote is now being used in a novel electronic structure code, designed to study some of the largest physical systems possible from first principles.
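SciPy ships an implementation of the same locally optimal block preconditioned conjugate gradient (LOBPCG) idea, which gives a feel for the interface; the toy problem below is only illustrative and is unrelated to the electronic structure code:

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import lobpcg

    n = 500
    A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)).tocsr()  # 1D Laplacian
    B = diags([1.0], offsets=[0], shape=(n, n)).tocsr()                     # mass-like matrix

    rng = np.random.default_rng(0)
    X0 = rng.normal(size=(n, 4))  # block of initial guess vectors
    w, V = lobpcg(A, X0, B=B, largest=False, tol=1e-8, maxiter=500)
    print(w)  # the four smallest generalized eigenvalues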
A Piecewise Linear Finite Element Discretization of the Diffusion Equation
Teresa Bailey, Texas A&M University
Practicum Year: 2004
Practicum Supervisor: Michael Zika, Lawrence Livermore National Laboratory
The goal of this project was to develop and implement a piecewise linear (PWL) finite element discretization of the photon diffusion equation for the KULL software project at Lawrence Livermore National Laboratory. The piecewise linear basis functions have the potential to solve the discretized diffusion equation on arbitrary polyhedral meshes while generating a symmetric positive definite coefficient matrix to invert. Another goal of this project was to compare the PWL method to the existing method in KULL in terms of computational performance and accuracy.
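For reference, the underlying equation, written here in steady, one-group form (generic notation), is

\[
-\nabla\!\cdot\!\big(D\,\nabla\phi\big) + \sigma_a\,\phi = Q,
\]

with the PWL basis supplying the trial and test functions on each polyhedral cell; the symmetric positive definite matrix mentioned above arises from the corresponding weak form.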
Incorporating Electrokinetic Effects into the EBNavierStokes Embedded Boundary Incompressible Fluid Solver
Kevin Chu, Massachusetts Institute of Technology
Practicum Year: 2004
Practicum Supervisor: David Trebotich, Computational Scientist, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
Motivated by the recent interest in using electrokinetic effects within microfluidic devices [1], we have extended the EBNavierStokes embedded boundary incompressible fluid solver [2] to handle electrokinetic effects. With this added functionality, the code will become more useful for understanding and designing microfluidic devices that take advantage of these effects (e.g., pumping and mixing). References: (1) T. M. Squires and M. Z. Bazant, Induced-charge electro-osmosis, J. Fluid Mech., 509 (2004), pp. 217-252. (2) D. Trebotich, Working Notes for Higher-Order Projection in Embedded Boundary Framework, private communication.
Computation of Cellular Detonation
Brian Taylor, University of Illinois at Urbana-Champaign
Practicum Year: 2004
Practicum Supervisor: Bill Henshaw, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
My project was to simulate two-dimensional cellular detonation in a rectangular channel using a uniform mesh code and using Overture, an adaptive mesh code developed at Lawrence Livermore. From these results, I hoped to make a comparison between the computational requirements, accuracy, and numerical properties of the two approaches.
Further Work on a Parallel, Adaptive Implementation of the Immersed Boundary Method using SAMRAI and PETSc
Boyce Griffith, New York University
Practicum Year: 2003
Practicum Supervisor: Richard Hornung, Dr., Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
The Immersed Boundary (IB) method provides a mathematical and computational framework for addressing problems involving fluid-structure interaction and has proved to be especially useful in simulating biological fluid dynamics. Due to the localized nature of boundary layers and the stiffness of the discretized equations, "realistic" simulations using the IB method tend to require very high spatial resolution and very small timesteps. Consequently, high performance computing is an important component of simulation research employing the IB method. During my first practicum at Lawrence Livermore during the summer of 2002, I developed parallel, non-adaptive IB software using the SAMRAI framework. For my second practicum, I worked on extending this software to support structured adaptive mesh refinement (AMR) and designed an infrastructure for developing efficient implicit timestepping for the IB method. The AMR work relied on composite-grid elliptic solvers developed at LLNL, while the work on implicit timestepping involved re-implementing major sections of the IB software using PETSc. There are currently a few remaining issues which must be addressed before the AMR IB software is ready for use in non-trivial simulations. Once these are addressed, we intend to use this software with the fully 3D "Courant heart model" of Peskin and McQueen to model blood-tissue interaction in the beating mammalian heart. The facilities for performing implicit timestepping should provide a useful research platform for exploring implicit methods for solving the IB equations in time. References: S. Balay, K. Buschelman, W. D. Gropp, D. Kaushik, M. Knepley, L. C. McInnes, B. F. Smith, and H. Zhang (2001), PETSc homepage: http://www-unix.mcs.anl.gov/petsc. R. Hornung and S. Kohn (2002), Managing Application Complexity in the SAMRAI Object-Oriented Framework, Concurrency and Computation: Practice and Experience 14 347--368. C. S. Peskin (2002), The immersed boundary method, Acta Numerica 1--39. A. M. Roma, C. S. Peskin and M. J. Berger (1999), An adaptive version of the immersed boundary method, Journal of Computational Physics 153 509--534.
Molecular dynamics study of plasticity and bending stiffness in thin films
Matt Fago, California Institute of Technology
Practicum Year: 2002
Practicum Supervisor: Robert Rudd, Physicist, PAT/H-Division, Lawrence Livermore National Laboratory
Obtained dislocation structures and explored the thickness dependence for void growth in thin copper films. Extended the local MD code to allow loading thin single crystal films in bending at fixed temperature. The goal of this project was to look at the effect of temperature and thickness on the properties of very thin films. These results were then to be used to determine simplified constitutive models applicable at very small length scales for use in designing MEMS.
A Parallel, Adaptive Implementation of the Immersed Boundary Method using SAMRAI
Boyce Griffith, New York University
Practicum Year: 2002
Practicum Supervisor: Richard Hornung, Dr., Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
The Immersed Boundary (IB) method provides a mathematical and computational framework for addressing problems involving fluid-structure interaction and has proved to be especially useful in simulating biological fluid dynamics. Realistic simulations which use the IB method require both high spatial resolution and very small timesteps. Consequently, high performance computing is an important component of simulation research employing the IB method. The IB method specifies the interaction of a fluid, described as an Eulerian variable, and an elastic material, described as a Lagrangian variable. Consequently, the fluid is typically discretized on a Cartesian grid, while the elastic material is described by a network of Lagrangian points. A smoothed approximation to the Dirac delta function is used to connect these two quantities. Through the discrete delta function, quantities such as velocity may be interpolated from the Cartesian grid to the Lagrangian points, and quantities such as force or material density may be spread from the Lagrangian mesh to the Cartesian grid. At this time, single level (i.e. non-AMR) IB software is nearly complete. This software should allow for IB computations which make effective use of distributed-memory computational facilities. We intend to continue to extend the software to handle the full 3D "Courant heart model" of Peskin and McQueen. Ultimately, we hope to be able to use AMR with this model and to couple the mechanical model with a realistic electrical model. References: R. Hornung and S. Kohn (2002), Managing Application Complexity in the SAMRAI Object-Oriented Framework. Concurrency and Computation: Practice and Experience 14 347--368. C. S. Peskin (2002), The immersed boundary method, Acta Numerica 1--39. A. M. Roma, C. S. Peskin and M. J. Berger (1999), An adaptive version of the immersed boundary method, Journal of Computational Physics 153 509--534.
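Referring back to the interpolation and spreading steps above, they take the standard smoothed-delta-function form (written schematically):

\[
\mathbf{U}(\mathbf{X}_k) = \sum_{\mathbf{x}} \mathbf{u}(\mathbf{x})\, \delta_h(\mathbf{x} - \mathbf{X}_k)\, h^3,
\qquad
\mathbf{f}(\mathbf{x}) = \sum_{k} \mathbf{F}(\mathbf{X}_k)\, \delta_h(\mathbf{x} - \mathbf{X}_k)\, \Delta V_k,
\]

where x ranges over Cartesian grid points, X_k over Lagrangian points, h is the grid spacing, delta_h is the smoothed approximation to the Dirac delta function, and Delta V_k is a Lagrangian quadrature weight.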
Utilizing the SAMRAI framework to write adaptive mesh refinement, distributed memory, parallel PDE solvers
Elijah Newren, University of Utah
Practicum Year: 2002
Practicum Supervisor: Richard Hornung, , CASC, Applied Mathematics Group, Lawrence Livermore National Laboratory
My goal at LLNL was to learn more about parallel, distributed computing and adaptive mesh refinement, specifically using the SAMRAI framework. Computations on high performance machines will be heavily involved in my thesis, but this is not an area of expertise for my research group at the University of Utah. My project had three parts: (1) to learn SAMRAI and how it could assist with communication, grid setup, load balancing, and so forth; (2) to learn how to organize a large code like the one I want to tackle, with fluid dynamics, coupled fluid and deformable-structure interactions, chemical and cell transport, and chemical reactions occurring within the fluid and on cell surfaces; and (3) to begin writing such a code in C++ and Fortran.
Overture: Object-oriented tools for solving computational fluid dynamics and combustion problems in complex moving geometry.
Nathan Crane, University of Illinois at Urbana-Champaign
Practicum Year: 2001
Practicum Supervisor: Dan Quinlan, , Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
Particle-In-Cell (PIC) methods have a long history in plasma simulation.
Justin Koo, University of Michigan
Practicum Year: 2001
Practicum Supervisor: A. Bruce Langdon, , X-Division, L-038, Lawrence Livermore National Laboratory
One code using PIC methods is ZOHAR, written by Bruce Langdon and Barbara Lasinski at LLNL in the late 1970s. This code, many revisions later, is a proven testbed for studying laser-plasma interaction (LPI). My practicum work was to implement collisions, specifically ion-electron (i-e) collisions, into this code so that collisional LPI problems could be modelled. After doing background research, I decided to first implement an older, established collision model developed by Takizuka and Abe, which approximates the collision integral by pairing charged particles and colliding them at each timestep. Although computationally expensive, it provides a useful reference model against which to compare newer and faster collision models. The next step in my research was to gather information on newer models, two of which were a grid-based collision model by Jones et al. at LANL and a Direct Simulation Monte Carlo (DSMC) approach. Unfortunately, by this point the practicum was over and it was time to head back to thesis work.
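As a rough illustration of the pairing-and-scattering step in the Takizuka-Abe model, the sketch below rotates each paired relative velocity through a small random angle. The scattering-angle variance is left as a free parameter here, whereas the real model computes it from the Coulomb logarithm, densities, temperatures, and the timestep, and the reduced-mass weighting of the post-collision update is simplified to an equal split.

    import numpy as np

    def collide_pairs(v_ions, v_elec, var_theta, rng=None):
        # One simplified Takizuka-Abe style collision step between ion and electron
        # velocity arrays (each of shape (n, 3)); equal particle weights assumed.
        rng = rng or np.random.default_rng()
        n = min(len(v_ions), len(v_elec))
        partners = rng.permutation(len(v_elec))[:n]
        for i in range(n):
            j = partners[i]
            u = v_ions[i] - v_elec[j]                    # relative velocity
            speed = np.linalg.norm(u)
            u_perp = np.hypot(u[0], u[1])
            if speed == 0.0:
                continue
            d = rng.normal(0.0, np.sqrt(var_theta))      # d = tan(theta/2), Gaussian
            sin_t, one_m_cos = 2*d/(1 + d*d), 2*d*d/(1 + d*d)
            phi = rng.uniform(0.0, 2*np.pi)
            if u_perp > 0.0:
                du = np.array([
                    (u[0]*u[2]*sin_t*np.cos(phi) - u[1]*speed*sin_t*np.sin(phi))/u_perp - u[0]*one_m_cos,
                    (u[1]*u[2]*sin_t*np.cos(phi) + u[0]*speed*sin_t*np.sin(phi))/u_perp - u[1]*one_m_cos,
                    -u_perp*sin_t*np.cos(phi) - u[2]*one_m_cos])
            else:                                        # u along z: rotate directly
                du = np.array([speed*sin_t*np.cos(phi), speed*sin_t*np.sin(phi), -speed*one_m_cos])
            v_ions[i] = v_ions[i] + 0.5*du               # momentum-conserving update
            v_elec[j] = v_elec[j] - 0.5*du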
Modeling Chemical Kinetics in Turbulent Combustion Simulations Using Overture
Diem-Phuong Nguyen, University of Utah
Practicum Year: 2001
Practicum Supervisor: Bill Henshaw, , CASC, Lawrence Livermore National Laboratory
The research objective of reaction modeling is to computationally link large chemical kinetic mechanisms to turbulent combustion computations. The link must incorporate small-scale chemistry into the large-scale components of the turbulent flow and systematically reduce the degrees of freedom of the system. The bridge from microscopic details to the macroscopic domain is achieved through the introduction of a subgrid-scale (sgs) reaction model. Thus, my summer research at LLNL involved using the Overture framework to incorporate an sgs reaction model into the OverBlown Navier-Stokes solver to simulate a turbulent open pool fire.
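In schematic LES notation (a generic form, not the specific closure developed here), the filtered transport equation for a species mass fraction that such an sgs reaction model must close is

    \frac{\partial \bar{\rho}\tilde{Y}_k}{\partial t}
    + \nabla\cdot(\bar{\rho}\,\tilde{\mathbf{u}}\,\tilde{Y}_k)
    = \nabla\cdot\big(\bar{\rho} D_k \nabla \tilde{Y}_k\big)
    - \nabla\cdot\boldsymbol{\tau}_{Y_k}
    + \overline{\dot{\omega}}_k,

where \boldsymbol{\tau}_{Y_k} is the subgrid scalar flux and the filtered chemical source term \overline{\dot{\omega}}_k is the piece that carries the small-scale chemistry and must be supplied by the sgs model.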
Numerical Simulation of the Multi-band Hubbard Model
Robert Sedgewick, University of California, Santa Barbara
Practicum Year: 2001
Practicum Supervisor: Andrew McMahan, Staff Scientist, H Division, Lawrence Livermore National Laboratory
The long-term goal for the group that I worked with is the simulation of the multi-band Hubbard model. Substantial work has gone into studying the single-band Hubbard model, but studying the multi-band version is difficult due to the poor scaling of the standard single-band algorithms as the lattice size is increased. The project that I worked on was to explore a novel algorithm for the simulation of the single-band Hubbard model, with the hope that this algorithm will scale better and can be used for the multi-band model. This algorithm can also be useful for other computationally intensive Hubbard model variants.
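For reference, the single-band Hubbard Hamiltonian whose multi-band generalization is the long-term target is

    H = -t \sum_{\langle i,j \rangle, \sigma} \big( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \big)
        + U \sum_{i} n_{i\uparrow} n_{i\downarrow};

the multi-band version adds orbital indices and inter-orbital interaction terms, which is what makes its cost grow so quickly with lattice size.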
Overture: Object-oriented tools for solving computational fluid dynamics and combustion problems in complex moving geometry.
Nathan Crane, University of Illinois at Urbana-Champaign
Practicum Year: 2000
Practicum Supervisor: Dan Quinlan, , , Lawrence Livermore National Laboratory
Positronium wavefunctions in bulk solid
Michael Feldmann, California Institute of Technology
Practicum Year: 2000
Practicum Supervisor: Giulia Galli and Randy Hood, Quantum Monte Carlo Development, H, Lawrence Livermore National Laboratory
We examined the conditions imposed on a properly symmetrized wavefunction when a positron is added. By building in explicit particle-particle correlation, we allow for the formation of positronium. Positronium formation and annihilation is a very effective probe for analyzing the defects that exist in solids. These wavefunctions will be examined using QMC (Quantum Monte Carlo) techniques.
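A generic form of such a trial wavefunction (an illustrative ansatz, not necessarily the one used in this work) augments the usual fermionic determinants with an explicit electron-positron correlation factor:

    \Psi(\mathbf{r}_1,\dots,\mathbf{r}_N,\mathbf{r}_p)
    = D^{\uparrow} D^{\downarrow}\, \phi_p(\mathbf{r}_p)\,
      \exp\!\Big( \sum_{i<j} u_{ee}(r_{ij}) + \sum_{i} u_{ep}(|\mathbf{r}_i - \mathbf{r}_p|) \Big),

where the electron-positron Jastrow term u_{ep} builds in the short-range attraction that allows positronium to form, while the expression remains properly antisymmetric in the electron coordinates.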
A Lagrangian Based Higher Order Godunov Scheme for the Euler Equations.
Charles Hindman, University of Colorado
Practicum Year: 2000
Practicum Supervisor: Rick Pember, , , Lawrence Livermore National Laboratory
This project involved creating a fluid dynamics code to investigate a new integration scheme. The code was written from scratch in C++ using Overture, an adaptive mesh development and visualization framework being developed at CASC.
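A minimal illustration of the higher-order Godunov idea, stripped down to linear advection on a fixed grid rather than the Lagrangian Euler scheme actually built with Overture: reconstruct the solution piecewise-linearly with a limiter, upwind the face values, and difference the fluxes.

    import numpy as np

    def muscl_step(q, a, dx, dt):
        # One second-order Godunov (MUSCL) update for q_t + a q_x = 0 with a > 0,
        # using minmod-limited slopes and periodic boundaries.
        dq_l = q - np.roll(q, 1)
        dq_r = np.roll(q, -1) - q
        slope = np.where(dq_l*dq_r > 0.0,
                         np.sign(dq_l)*np.minimum(np.abs(dq_l), np.abs(dq_r)), 0.0)
        q_face = q + 0.5*(1.0 - a*dt/dx)*slope     # value at the right face of each cell
        flux = a*q_face                            # upwind flux at face i + 1/2
        return q - dt/dx*(flux - np.roll(flux, 1))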
Enhancing the embedded boundary capabilities of SAMRAI.
Jason Hunt, University of Michigan
Practicum Year: 2000
Practicum Supervisor: Rick Pember, Ph.D., , Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
Embedded boundary capabilities for rectangular coordinates in two and three space dimensions were achieved, and this capability was coupled with adaptive mesh refinement techniques.
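A small sketch of the bookkeeping behind an embedded boundary method (an assumed illustration, not the SAMRAI code): given a signed-distance description of the boundary sampled at cell corners, each Cartesian cell is classified as covered by the body, cut by the boundary, or a regular fluid cell, and only the cut cells need special flux and volume-fraction treatment.

    import numpy as np

    def classify_cells(phi):
        # phi: (nx+1, ny+1) signed distance at cell corners, negative inside the body.
        # Returns an (nx, ny) integer array: 0 = covered, 1 = cut, 2 = regular fluid.
        corners = np.stack([phi[:-1, :-1], phi[1:, :-1], phi[:-1, 1:], phi[1:, 1:]])
        covered = (corners <= 0.0).all(axis=0)
        regular = (corners > 0.0).all(axis=0)
        return np.where(covered, 0, np.where(regular, 2, 1))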
The Validity of Paraxial Approximations in the Simulation of Laser-Plasma Interactions
Edward Hyde, California Institute of Technology
Practicum Year: 2000
Practicum Supervisor: Milo Dorr and Xabier Garaizar, , , Lawrence Livermore National Laboratory
High-intensity lasers such as those used in inertial confinement fusion produce high-density plasmas, which interact with the propagating light. Solving the Helmholtz equation to compute the laser scattering remains too difficult on large computational domains. Hence, one often employs a paraxial approximation to increase efficiency. This work sought to establish the domain of validity for various paraxial approximations.
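In rough terms (standard notation, not the specific formulation studied), writing the field as E = A(\mathbf{x}_\perp, z)\,e^{ikz} in the Helmholtz equation

    \nabla^2 E + k_0^2\, n^2(\mathbf{x})\, E = 0

and dropping the \partial_z^2 A term gives the paraxial envelope equation

    2ik\,\frac{\partial A}{\partial z} + \nabla_\perp^2 A + \big(k_0^2 n^2 - k^2\big) A = 0,

which is justified only while the envelope varies slowly along the propagation direction compared with a wavelength; quantifying where that assumption breaks down for realistic plasma density profiles was the aim of this work.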
Coupling Reaction Models in Overture for CFD simulations involving complex geometry.
Diem-Phuong Nguyen, University of Utah
Practicum Year: 2000
Practicum Supervisor: William Henshaw, , CASC, Lawrence Livermore National Laboratory
My summer research at LLNL involved using the Overture framework to produce computational fluid dynamics (CFD) simulations for complex geometries. An Overture code was written to solve a system of reacting-species PDEs on any composite grid using both explicit and implicit time stepping techniques. The code was coupled to a subgrid-scale reaction model, which provided the reaction source term and accounted for the complex chemical kinetics. This allowed me to bridge microscopic details to the macroscopic domain.
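One common way to organize such a solver, shown here as a hedged sketch rather than the actual Overture code (which also supported implicit time stepping and general composite grids), is to operator-split each step into explicit transport of the species followed by a pointwise chemistry update supplied by the reaction model.

    import numpy as np

    def reacting_step(Y, u, D, dx, dt, source):
        # Y: (n_species, n_cells) mass fractions; u: advection speed (> 0 assumed);
        # D: diffusivity; source: callable returning the reaction source term from
        # the subgrid-scale model. Periodic boundaries, first-order splitting.
        adv = -u*(Y - np.roll(Y, 1, axis=1))/dx
        diff = D*(np.roll(Y, -1, axis=1) - 2.0*Y + np.roll(Y, 1, axis=1))/dx**2
        Y_star = Y + dt*(adv + diff)           # transport stage
        return Y_star + dt*source(Y_star)      # chemistry stage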
2D and 3D models of a rabbit sinoatrial node cell using Overture and CVODE.
Christopher Oehmen, University of Memphis/University of Tennessee, HSC
Practicum Year: 2000
Practicum Supervisor: Anders Petersson, Applied Mathematician, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
The project involved converting the existing rabbit SAN model so that it incorporated diffusion in 2D and 3D domains, which necessitated the use of a flow solver, Overture. CVODE was used to integrate the nonlinear system of ODEs present on the cell membrane.
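The overall structure of such a computation can be sketched with a split step: diffuse the transmembrane potential on the grid, then integrate the stiff membrane ODEs point by point. The sketch below uses SciPy's BDF integrator as a stand-in for CVODE and a 1D grid for brevity; the practicum code worked on 2D and 3D Overture grids.

    import numpy as np
    from scipy.integrate import solve_ivp

    def san_split_step(V, gates, membrane_rhs, D, dx, dt):
        # V: transmembrane potential on the grid; gates: (n_state, n_cells) membrane
        # state variables; membrane_rhs(t, y): pointwise d/dt of [V, state...].
        Vp = np.pad(V, 1, mode='edge')                        # crude no-flux boundaries
        V = V + D*dt/dx**2*(Vp[2:] - 2.0*Vp[1:-1] + Vp[:-2])  # explicit diffusion of V
        for i in range(len(V)):                               # stiff membrane ODEs per point
            y0 = np.concatenate(([V[i]], gates[:, i]))
            sol = solve_ivp(membrane_rhs, (0.0, dt), y0, method='BDF')
            V[i], gates[:, i] = sol.y[0, -1], sol.y[1:, -1]
        return V, gates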
Large-Scale Data Mining & Pattern Recognition
Matthew Giamporcaro, Boston University
Practicum Year: 1999
Practicum Supervisor: Dr. Chandrika Kamath, , , Lawrence Livermore National Laboratory
By applying and extending ideas from data mining and pattern recognition, we are developing "...computational tools and techniques that will be used to improve the way in which scientists extract useful information from data."
A Portable and Parallel Code for Modeling Multiphase Flow
Jeffrey Butera, North Carolina State University
Practicum Year: 1995
Practicum Supervisor: Dr. Steve Ashby, , Center for Computational Sciences & Engineering, Lawrence Livermore National Laboratory
My work entailed model validation of ParFlow.
Flexible Communication Mechanisms for Block-Structured Applications
Stephen Fink, University of California, San Diego
Practicum Year: 1995
Practicum Supervisor: Dr. Charles Rendleman, , Center for Computational Sciences & Engineering, Lawrence Livermore National Laboratory
I developed a C++ class library to assist in implementing adaptive finite-difference methods on multicomputers.
Linear System Solving in Lanczos-Based Model Reduction
Eric Grimme, University of Illinois at Urbana-Champaign
Practicum Year: 1995
Practicum Supervisor: Dr. Steven Ashby, , Center for Computational Sciences & Engineering, Lawrence Livermore National Laboratory
Iterative methods were explored for solving the large linear systems that arise when large-scale dynamical systems are reduced via Lanczos-type methods.
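The dominant cost in such methods is repeatedly applying (A - \sigma E)^{-1} inside the Lanczos recurrence; the sketch below (an illustrative stand-in, not the practicum implementation) replaces a direct factorization of the shifted matrix with an iterative Krylov solve.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def shifted_solve(A, E, sigma, b):
        # Apply (A - sigma*E)^{-1} b iteratively, the kernel operation of
        # shift-and-invert Lanczos/Arnoldi model reduction.
        n = A.shape[0]
        op = LinearOperator((n, n), matvec=lambda x: A @ x - sigma*(E @ x))
        x, info = gmres(op, b)
        if info != 0:
            raise RuntimeError("GMRES failed to converge")
        return x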
Completion and Benchmarking of the GT-SCALE Code
Todd Postma, University of California, Berkeley
Practicum Year: 1995
Practicum Supervisor: Dr. Jor-Shan Choi, , Fission Energy and Systems Safety Program, Lawrence Livermore National Laboratory
GTRAN is a computer program which solves the neutron transport equation in arbitrary two-dimensional geometries using a collision probability method. SCALE is a collection of computer programs used to process cross sections, perform simple diffusion-theory-based criticality calculations, and perform burnup calculations. This project aimed to complete the coupling of these two codes and to benchmark the resulting GT-SCALE code.
An Adaptive Projection Method for Modeling Mesoscale Atmospheric Flows
Daniel Martin, University of California, Berkeley
Practicum Year: 1994
Practicum Supervisor: Dr. John Bell, , Center for Computational Sciences & Engineering, Lawrence Livermore National Laboratory
The project goal is to extend the numerical methods developed for modeling incompressible flows to moist atmospheric flows, using the anelastic approximation.
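In outline (standard anelastic projection form, not necessarily the exact formulation used): the anelastic approximation replaces the incompressibility constraint with

    \nabla\cdot(\rho_0 \mathbf{u}) = 0,

where \rho_0(z) is a reference density profile, and each timestep an intermediate velocity \mathbf{u}^* is projected onto this constraint by solving the variable-coefficient Poisson problem

    \nabla\cdot(\rho_0 \nabla\phi) = \nabla\cdot(\rho_0 \mathbf{u}^*), \qquad \mathbf{u} = \mathbf{u}^* - \nabla\phi.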
Modeling a Fixed Bed Reactor
Rick Propp, University of California, Berkeley
Practicum Year: 1994
Practicum Supervisor: Dr. John Bell, , Center for Computational Sciences & Engineering, Lawrence Livermore National Laboratory
Fixed bed reactors are widely used in the chemical and petroleum industries, yet their behavior is not well understood. The goal of this project is to use advanced numerical techniques to model a reactor, so that what goes on inside the reactor can be better understood.
Implications of Asymmetric Mass Loss on the Number of Comets Around Other Single Stars
Joel Parriott, University of Michigan
Practicum Year: 1993
Practicum Supervisor: Dr. Charles Alcock, , Institute Geophysics & Planetary Physics, Lawrence Livermore National Laboratory
An investigation of how asymmetric mass loss from a central star might affect a 'cloud-like' comet distribution around the progenitor, and thus help place limits on comet accretion around the white dwarf.