Julian Bellavita

Program Year: 2
University: Cornell University
Field of Study: Computer Science
Advisor: Giulia Guidi
Degree(s): B.A. Computer Science, University of California, Berkeley, 2023

Summary of Research

Distributed-memory parallel programming on heterogeneous HPC clusters is hard, even for computer scientists; for domain scientists, it is harder still. Rather than expecting domain scientists to parallelize their applications by hand, it would be better if they could rely on a library of optimized parallel routines to do the job. However, the workloads found in scientific applications are diverse, so how can one library provide enough functionality to parallelize general scientific applications?

By using an algebraic structure known as a semiring, many scientific computations can be expressed in terms of parallel sparse matrix operations. That expressiveness makes it possible to parallelize diverse scientific applications through a single set of sparse matrix primitives. For this strategy to be effective, however, a library of fast parallel sparse matrix routines is needed.
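To make the semiring idea concrete, here is a minimal, serial C++ sketch (my own illustration, not code from any particular library): a generic sparse matrix-vector product that takes the semiring's "add" and "multiply" operators as parameters. With the usual (+, ×) semiring it is an ordinary SpMV; with the (min, +) "tropical" semiring the very same routine performs one edge-relaxation step of single-source shortest paths over a graph's adjacency matrix. All type and function names below are illustrative.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// Minimal CSR sparse matrix: row_ptr has n_rows + 1 entries.
struct CsrMatrix {
    std::size_t n_rows;
    std::vector<std::size_t> row_ptr;
    std::vector<std::size_t> col_idx;
    std::vector<double> vals;
};

// Generic SpMV over an arbitrary semiring:
//   y[i] = add-reduction over stored j of mul(A(i,j), x[j]),
// where `zero` is the identity of `add` (0 for (+,*), +infinity for (min,+)).
template <typename Add, typename Mul>
std::vector<double> spmv_semiring(const CsrMatrix& A, const std::vector<double>& x,
                                  Add add, Mul mul, double zero) {
    std::vector<double> y(A.n_rows, zero);
    for (std::size_t i = 0; i < A.n_rows; ++i)
        for (std::size_t k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
            y[i] = add(y[i], mul(A.vals[k], x[A.col_idx[k]]));
    return y;
}

int main() {
    // Adjacency matrix of a 3-vertex weighted graph (edges 0-1 and 1-2), stored in CSR.
    CsrMatrix A{3, {0, 1, 3, 4}, {1, 0, 2, 1}, {2.0, 2.0, 5.0, 5.0}};
    const double inf = std::numeric_limits<double>::infinity();

    // Tentative distances from vertex 0.
    std::vector<double> dist = {0.0, inf, inf};

    // One (min,+) SpMV relaxes every edge once, as in Bellman-Ford.
    // (A full solver would also take the elementwise min with the previous dist.)
    auto relax = spmv_semiring(A, dist,
                               [](double a, double b) { return a < b ? a : b; },  // "add" = min
                               [](double a, double b) { return a + b; },          // "mul" = +
                               inf);
    // relax[1] is now 2.0: the shortest known distance from vertex 0 to vertex 1.
    (void)relax;
    return 0;
}
```

Changing the operator pair (and its identity element) is all it takes to repurpose the same sparse kernel, which is why a small set of fast semiring-aware routines can cover a wide range of scientific workloads.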

My research focuses on developing distributed sparse matrix routines suited to modern exascale supercomputers, applying novel techniques that reduce communication volume, improve load balancing, and better exploit the high-bandwidth interconnects between GPUs on the same compute node.

Publications

Julian Bellavita*, Lorenzo Pichetti*, Thomas Pasquali, Flavio Vella, and Giulia Guidi. 2026. "Communication-Avoiding SpGEMM via Trident Partitioning on Hierarchical GPU Interconnect". In The 40th ACM International Conference on Supercomputing (ICS 2026). *Equal contribution

Julian Bellavita, Matthew Rubino, Nakul Iyer, Andrew Chang, Aditya Devarakonda, Flavio Vella, and Giulia Guidi. 2026. "Communication-Avoiding Linear Algebraic Kernel K-Means on GPUs". In The 40th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2026).

Julian Bellavita, Thomas Pasquali, Laura Del Rio Martin, Flavio Vella, and Giulia Guidi. 2025. "Popcorn: Accelerating Kernel K-means on GPUs through Sparse Linear Algebra". In The 30th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP 2025).

Thomas McFarland, Julian Bellavita, and Giulia Guidi. 2025. "Parallel GPU-Enabled Algorithms for SpGEMM on Arbitrary Semirings with Hybrid Communication". Short paper. In Proceedings of the 16th ACM/SPEC International Conference on Performance Engineering (ICPE 2025).

Adrián Castelló, Julian Bellavita, Grace Dinh, Yuka Ikarashi, and Hector Martínez. 2024. "Tackling the Matrix Multiplication Micro-Kernel Generation with Exo". In IEEE/ACM International Symposium on Code Generation and Optimization (CGO 2024).

Julian Bellavita, Mathias Jacquelin, Esmond G. Ng, Dan Bonachea, Johnny Corbino, and Paul H. Hargrove. 2023. "symPACK: A GPU-Capable Fan-Out Sparse Cholesky Solver". In Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (SC-W 2023).

Julian Bellavita, Caitlin Sim, Kesheng Wu, Alex Sim, Shinjae Yoo, Hiro Ito, Vincent Garonne, and Eric Lancon. 2023. "Understanding Data Access Patterns for dCache System". In 26th International Conference on Computing in High Energy & Nuclear Physics (CHEP 2023).

Julian Bellavita, Alex Sim, Kesheng Wu, Inder Monga, Chin Guok, Frank Würthwein, and Diego Davila. 2022. "Studying Scientific Data Lifecycle in On-demand Distributed Storage Caches". In Fifth International Workshop on Systems and Network Telemetry and Analytics (SNTA 2022).

Awards

Recipient of Cornell Fellowship, 2023 admissions cycle.

2nd Place, 2022 ACM Student Research Competition, Undergraduate Division.