Neural Networks as an Alternative to Model Order Reduction

Cristina White, Stanford University


When engineers and scientists run finite element or computational fluid dynamics simulations, a single simulation may take hours, days, or weeks. As a result, they cannot collect as many data points as they would like, when what they really want is to explore a parameter space fully. What if they could get accurate estimates in minutes or seconds from the simulations they have already run? The state-of-the-art industrial approach to this problem is the reduced order model (ROM). ROMs solve the governing equations after projecting them into a smaller state space, using previous simulations to construct a basis. However, ROMs have drawbacks: they are intrusive, they require full knowledge of the model used to generate the dataset, and, for highly nonlinear problems, they may not improve computation time at all. In this research, a neural network architecture is proposed as an alternative to ROMs, and multiple architectures and methods are explored. To assess the suitability of various architectures, a test case typically used to benchmark ROMs is used: Burgers' equation, a one-dimensional initial-boundary-value problem that models the movement of a shock wave along a tube. The problem is found to require a new kind of architecture, inspired in part by the repeating columnar structure of the neocortex. The proposed clustered network architecture is a simple feed-forward network, except with distinct connected pairs of function and context networks. Initial results interpolate and extrapolate well on the test case while running faster than state-of-the-art ROMs. The network builds categories from initially undifferentiated input data, and the results are consistent with the hypothesis that, when learning from scratch and from few examples, a neural network must have a strong inductive bias; in this case, that categorizing and contextualizing are necessary for making good predictions.
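To make the benchmark concrete, the sketch below solves the inviscid 1D Burgers' equation with a first-order upwind scheme, producing the kind of moving shock described above. The grid size, step initial condition, and CFL number here are illustrative assumptions, not the setup used in the research.

```python
import numpy as np

def solve_burgers(nx=200, nt=300, length=1.0, cfl=0.5):
    """Inviscid Burgers' equation u_t + (u^2/2)_x = 0 on [0, length],
    first-order upwind in conservative form (valid here since u >= 0)."""
    dx = length / nx
    x = np.linspace(0.0, length, nx)
    # Step initial condition: u = 1 on the left half, 0 on the right,
    # which produces a right-moving shock of speed (1 + 0)/2 = 0.5.
    u = np.where(x < 0.5 * length, 1.0, 0.0)
    for _ in range(nt):
        dt = cfl * dx / max(np.abs(u).max(), 1e-12)  # CFL-limited time step
        f = 0.5 * u**2                               # flux f(u) = u^2 / 2
        u[1:] -= dt / dx * (f[1:] - f[:-1])          # backward flux difference
        u[0] = 1.0                                   # fixed inflow at the left end
    return x, u
```

With these defaults the shock starts at the midpoint and travels right at speed 0.5, so after the simulated time it sits near x = 0.875; a ROM or network surrogate for this benchmark would be asked to predict such solution snapshots across varying parameters.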

Abstract Author(s): Tina White