A Performance Analysis of SIMD Algorithms for Monte Carlo Simulations of Nuclear Reactor Cores

David Ozog, University of Oregon

A primary characteristic of history-based Monte Carlo neutron transport simulation is its MIMD-style parallelism: the path of each neutron particle is largely independent of all other particles, so each thread of execution performs its instructions independently of the other threads. This conflicts with the growing trend of HPC vendors exploiting SIMD hardware, which offers greater parallelism and more FLOPS per watt. Event-based neutron transport suits vectorization better than history-based transport, but it is difficult to implement and complicates data management and transfer. However, the Intel Xeon Phi architecture supports the familiar x86 instruction set and memory model, mitigating the difficulties of vectorizing neutron transport codes. This poster compares the event-based and history-based approaches for exploiting SIMD in Monte Carlo neutron transport simulations. For both algorithms, we analyze performance using the three execution models provided by the Xeon Phi (offload, native, and symmetric) within the full-featured OpenMC framework. A representative micro-benchmark of the performance-bottleneck computation shows roughly a 10x performance improvement with the event-based method. In an optimized history-based simulation of a full-physics nuclear reactor core in OpenMC, a single MIC achieves a calculation rate 1.6x higher than a modern 16-core CPU; balancing load between the CPU and one MIC raises this to 2.5x, and balancing between the CPU and two MICs to 4x. As far as we are aware, our calculation rate per node on a high-fidelity benchmark (17,098 particles/second) is higher than that of any other Monte Carlo neutron transport application. Furthermore, we attain 95 percent distributed efficiency when scaling with MPI across up to 512 concurrent MIC devices.
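
To make the algorithmic contrast concrete, the following minimal C++ sketch (not taken from OpenMC; the Particle type, xs_table, and function names are hypothetical, and the cross-section lookup is reduced to a placeholder) shows why the event-based formulation vectorizes while the history-based one does not:

    #include <vector>
    #include <cstddef>

    struct Particle { double energy; bool alive; };

    // History-based: one thread follows one particle's entire random walk.
    // The data-dependent while loop and branchy physics resist SIMD execution.
    void history_based(std::vector<Particle>& particles) {
      #pragma omp parallel for
      for (std::size_t i = 0; i < particles.size(); ++i) {
        while (particles[i].alive) {
          // look up cross sections, sample a distance, process the collision, ...
          particles[i].alive = false;  // stand-in: history ends at absorption/leakage
        }
      }
    }

    // Event-based: particles awaiting the same event are banked together, so the
    // bottleneck kernel (here, cross-section lookup) becomes a tight, uniform loop
    // that the compiler can vectorize.
    void event_based_xs_lookup(const std::vector<Particle>& bank,
                               const std::vector<double>& xs_table,
                               std::vector<double>& macro_xs) {
      #pragma omp simd
      for (std::size_t i = 0; i < bank.size(); ++i) {
        macro_xs[i] = xs_table[0] * bank[i].energy;  // placeholder for interpolation
      }
    }

In the history-based loop, each iteration does a different amount of data-dependent work, whereas the event-based bank presents identical work per particle, which is what allows SIMD lanes on the Xeon Phi to stay occupied.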

Abstract Author(s): David Ozog, Allen D. Malony, Andrew R. Siegel