Thomas Jefferson National Accelerator Facility

Coordinator: Jie Chen

Thomas Jefferson National Accelerator Facility (Jefferson Lab) provides scientists worldwide the lab’s unique particle accelerator — the Continuous Electron Beam Accelerator Facility (CEBAF) — to probe the most basic building blocks of matter by conducting research at the frontiers of nuclear physics (NP) and related disciplines. In addition, the lab capitalizes on its unique technologies and expertise to perform advanced computing and applied research with industry and university partners, and provides programs designed to help educate the next generation in science and technology.

The majority of computational science activities at Jefferson Lab focus on these areas: large-scale, numerically intensive Lattice Quantum Chromodynamics (LQCD) calculations; modeling and simulation of accelerators and experiment detectors; fast data acquisition and streaming data readout; high-throughput computing for the analysis of experimental data; and large-scale distributed data storage and management.

Many Jefferson Lab scientists and staff members lead or actively participate in the computational efforts in the areas above. Among them are computer/computational scientists and computing professionals from the newly formed Computational Sciences and Technology (CST) division, physicists from the Physics Division and the Center for Theoretical and Computational Physics, and accelerator physicists from the Center for Advanced Studies of Accelerators (CASA). In addition, collaborations with university and industrial partners further research and development in computational science.

Jefferson Lab maintains state-of-the-art computing resources onsite: a high-performance computing cluster consisting of 444 Intel KNL nodes; a GPU cluster hosting 256 NVIDIA RTX 2080 GPUs; and a computing cluster utilizing several generations of Intel and AMD CPUs totaling more than 12,000 cores. DOE CSGF students will use these resources to carry out their research in the specific areas described below:

Lattice QCD Calculation

Lattice QCD is a well-established non-perturbative approach to solving the quantum chromodynamics (QCD) theory of quarks and gluons. It is a lattice gauge theory formulated on a grid, or lattice, of points in space and time. LQCD computing has been a driver of high-performance computing (HPC) for several decades and has produced algorithmic, hardware, and performance innovations whose impact reaches substantially beyond the field. Jefferson Lab's LQCD efforts concentrate on exploring the spectra of baryon and meson states and the structure of hadrons, using novel numerical algorithms with highly efficient implementations that target the latest computing architectures. Moreover, the Jefferson Lab LQCD team is one of the leading participants in the Exascale Computing Project (ECP) and SciDAC programs. DOE CSGF students can take part in computational research areas including, but not limited to: eigenvalue solver algorithms, tensor contractions, iterative linear system solvers, multigrid methods, heterogeneous computing, and performance portability studies.
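
As a small illustration of the iterative-solver side of this work, the Python/NumPy sketch below implements the conjugate gradient method, the prototypical Krylov solver behind many LQCD linear-system algorithms. The small dense matrix here is only a stand-in for the far larger, sparse Dirac operator used in production calculations.

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Solve A x = b for a symmetric positive-definite A.
        In LQCD the role of A is played by the much larger, sparse
        normal-equations Dirac operator."""
        x = np.zeros_like(b)
        r = b - A @ x          # residual
        p = r.copy()           # search direction
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x

    # Toy example: a random symmetric positive-definite system.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((100, 100))
    A = M @ M.T + 100 * np.eye(100)   # well-conditioned SPD matrix
    b = rng.standard_normal(100)
    x = conjugate_gradient(A, b)
    print(np.linalg.norm(A @ x - b))  # residual norm near the tolerance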

Accelerator Modeling

CASA and the Jefferson Lab SRF Institute focus on advanced algorithms, such as fast multipole methods, for multiparticle accelerator dynamics simulations; artificial intelligence (AI) and machine learning (ML) applied to superconducting radio-frequency (SRF) accelerator operations; and integrated large- and multi-scale modeling of SRF accelerator structures. These areas will be an essential part of a national strategy to optimize DOE's investments in operational facilities and to strengthen Jefferson Lab's core competency in world-leading SRF advanced design and facility operations. In particular, active simulation projects such as electron cooling, intra-beam scattering, and coherent synchrotron radiation present diverse research problems ranging from numerical algorithm development to parallel computing.
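
To make the multiparticle-dynamics problem concrete, here is a minimal sketch, with purely illustrative parameters, of a direct all-pairs Coulomb interaction advanced by a leapfrog integrator. The direct sum costs O(N^2) per step; fast multipole methods exist precisely to replace it with an O(N) approximation for large particle bunches.

    import numpy as np

    def coulomb_forces(pos, q=1.0, eps=1e-3):
        """Direct O(N^2) pairwise Coulomb forces (softened).
        An FMM would approximate this all-pairs sum in O(N)."""
        d = pos[:, None, :] - pos[None, :, :]     # pairwise displacements
        r2 = (d ** 2).sum(-1) + eps ** 2          # softened squared distances
        np.fill_diagonal(r2, np.inf)              # exclude self-interaction
        return q * q * (d / r2[..., None] ** 1.5).sum(axis=1)

    def leapfrog(pos, vel, dt, steps, mass=1.0):
        """Kick-drift-kick leapfrog: symplectic and second-order accurate."""
        acc = coulomb_forces(pos) / mass
        for _ in range(steps):
            vel += 0.5 * dt * acc                 # half kick
            pos += dt * vel                       # drift
            acc = coulomb_forces(pos) / mass
            vel += 0.5 * dt * acc                 # half kick
        return pos, vel

    rng = np.random.default_rng(1)
    pos = rng.standard_normal((200, 3)) * 1e-3    # a small toy bunch
    vel = np.zeros_like(pos)
    pos, vel = leapfrog(pos, vel, dt=1e-4, steps=100)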

Streaming Data Readout

With the tremendous advances in microelectronics and computing technologies of the last decade, many nuclear physics and high-energy physics experiments are upgrading their existing triggered data acquisition to a streaming readout (SRO) model, in which detectors are read out continuously in parallel streams of data. An SRO system, which can handle up to 100 Gb/s of data throughput, provides nuclear physics experiments with a pipelined analysis model in which data are analyzed and processed in near real time. Jefferson Lab is leading a collaborative research and development effort to devise SRO systems not only for CEBAF 12 GeV experiments but also for the upcoming Electron-Ion Collider (EIC) facility. SRO development offers DOE CSGF students exciting research areas such as network protocol design, high-speed data communication, high-performance data compression, and distributed computing.
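
Below is a toy-scale sketch of the pipelined SRO idea, assuming a simple queue-based design that is illustrative rather than the lab's actual software stack: producer threads model detector front-ends emitting continuous hit streams, and a consumer analyzes the merged stream in near real time.

    import queue
    import threading
    import random
    import time

    stream = queue.Queue(maxsize=10_000)  # bounded buffer: readout -> analysis
    STOP = object()                       # sentinel marking end-of-stream

    def front_end(channel_id, n_hits):
        """Model a detector front-end emitting a continuous stream of hits."""
        for _ in range(n_hits):
            hit = (channel_id, time.time_ns(), random.gauss(100.0, 15.0))
            stream.put(hit)               # (channel, timestamp, ADC value)
        stream.put(STOP)

    def analyzer(n_streams):
        """Consume the merged stream and process hits as they arrive."""
        done, n_kept = 0, 0
        while done < n_streams:
            item = stream.get()
            if item is STOP:
                done += 1
                continue
            channel, ts, adc = item
            if adc > 90.0:                # stand-in for a software trigger cut
                n_kept += 1
        print(f"kept {n_kept} hits")

    producers = [threading.Thread(target=front_end, args=(c, 5000)) for c in range(4)]
    consumer = threading.Thread(target=analyzer, args=(4,))
    consumer.start()
    for p in producers:
        p.start()
    for p in producers:
        p.join()
    consumer.join()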

Physics Data Analysis

Analysis of data from modern particle physics experiments uses technically advanced programming and computing techniques to handle large volumes of data. One not only needs to understand aspects of parallel programming in modern languages such as C/C++, Java, and Python, but must also incorporate knowledge of experimental techniques, including error estimation and propagation, in order to interpret the results properly. This work ranges from writing a single algorithm used in event reconstruction, to using collections of algorithms written by others, to managing campaigns at HPC facilities that apply these algorithms to large datasets. Detector calibration and final physics analysis are also significant parts of the analysis chain. DOE CSGF students could participate in any of these areas.
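
As one small example of the error-propagation ingredient mentioned above, the sketch below applies standard first-order propagation of independent Gaussian uncertainties to an efficiency-corrected yield; the quantity and the numbers are hypothetical.

    import numpy as np

    def corrected_yield(n_observed, efficiency, sigma_n, sigma_eff):
        """Efficiency-corrected yield N = n / eff with first-order
        propagation of uncorrelated uncertainties:
        sigma_N^2 = (dN/dn)^2 sigma_n^2 + (dN/deff)^2 sigma_eff^2."""
        N = n_observed / efficiency
        dN_dn = 1.0 / efficiency
        dN_deff = -n_observed / efficiency ** 2
        sigma_N = np.sqrt((dN_dn * sigma_n) ** 2 + (dN_deff * sigma_eff) ** 2)
        return N, sigma_N

    # Illustrative numbers: 10,000 +- 100 counts at (85 +- 2)% efficiency.
    N, sigma_N = corrected_yield(10_000, 0.85, 100, 0.02)
    print(f"N = {N:.0f} +- {sigma_N:.0f}")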

Machine Learning

Rapid growth in hardware computational power and ever-increasing volumes of data have led to explosive growth in machine learning, and specifically deep learning, techniques. These techniques stand to change just about every facet of modern life, and nuclear physics is no exception. At Jefferson Lab, machine learning is being developed for every step of the physics workflow. To deliver beam to the experimental halls, the accelerator relies on radio-frequency (RF) cavities to accelerate the electrons. Occasionally these cavities, of which more than 400 are in operation around the accelerator, fault, disrupting the delivery of beam to experiments. AI is being developed and deployed to quickly identify and diagnose these cavity faults. The experiments themselves are developing and/or deploying AI to monitor detector performance, decide which data to keep, reconstruct detector responses, simulate the detectors, and even analyze the collected data. Through the active development of machine learning tools and techniques, Jefferson Lab hopes to drive nuclear physics research forward, enabling physicists to obtain and analyze high-quality data more quickly.
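
As a rough sketch of what a cavity fault classifier might look like, the example below trains a random forest on synthetic feature vectors. The real system works from actual RF waveform records from the CEBAF cavities and a domain-defined fault taxonomy, neither of which is reproduced here; the features, class structure, and model choice are all illustrative assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)

    # Synthetic stand-in data: each "event" is a summary feature vector
    # imagined as extracted from an RF cavity waveform.
    n_events, n_features = 2000, 16
    X = rng.standard_normal((n_events, n_features))
    y = rng.integers(0, 3, n_events)   # three illustrative fault classes
    X[y == 1, 0] += 2.0                # give each class a feature signature
    X[y == 2, 1] -= 2.0

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"held-out fault-ID accuracy: {clf.score(X_te, y_te):.2f}")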