Deep Reinforcement Learning Models for High-Frequency Data

Justin Sirignano, University of Illinois at Urbana-Champaign

Usage Details

Justin Sirignano, Xiaobo Dong, Jonathan MacArt, Yite Wang

Deep learning (DL) has revolutionized fields such as image, text, and speech recognition. Motivated by this success, there is growing interest in applying DL to science and engineering. Many scientific fields use partial differential equations (PDEs) as their fundamental modeling tool. The goal of the proposed research is to bring PDEs and DL together to advance the state of the art in PDE modeling and simulation.

Our target problem is turbulent combustion, which requires solving the computationally intensive Navier–Stokes equations. Even for simple configurations, accurately solving the Navier–Stokes PDEs for turbulent combustion, an approach referred to as Direct Numerical Simulation (DNS), requires vast supercomputing resources. More complex, real-world applications (e.g., an entire jet engine) are typically computationally infeasible even on supercomputers. Instead, reduced-order PDE models (e.g., Large Eddy Simulation or Reynolds-Averaged Navier–Stokes models) are designed as approximations to the fully resolved DNS physics. These approximations are in many cases inaccurate, particularly for turbulent combustion. More accurate turbulent combustion models are in strong demand for many commercial and national defense applications (e.g., aircraft and rocket propulsion, UAVs, and scramjets).
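To make the closure problem concrete, the following is the standard Large Eddy Simulation formalism (shown here only for illustration, assuming incompressible, constant-density flow): spatially filtering the Navier–Stokes momentum equation yields a resolved-scale equation containing an unclosed subgrid-scale stress,

\[
\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial (\bar{u}_i \bar{u}_j)}{\partial x_j}
= -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
+ \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
- \frac{\partial \tau_{ij}}{\partial x_j},
\qquad
\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j,
\]

where overbars denote filtered (resolved) quantities and the subgrid-scale stress \(\tau_{ij}\) must be supplied by the reduced-order model. The accuracy of that closure is what the modeling effort targets.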

We have developed deep learning methods for estimating reduced-order PDE models from data. The new 2019-2020 Blue Waters allocation would be used to generate turbulent combustion DNS datasets, which will in turn be used to train and evaluate the deep learning reduced-order PDE models. We request 964,960 node-hours for the new allocation.
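As a minimal, illustrative sketch only (this is not the model trained on Blue Waters; the layer sizes, input/output features, and synthetic data below are all assumptions), a data-driven closure of this general kind can be fit by regressing subgrid-scale stresses extracted from filtered DNS fields onto resolved-scale quantities:

# Illustrative sketch: a pointwise neural-network closure mapping resolved
# (filtered) velocity gradients to estimated subgrid-scale stress components,
# trained on pairs that would, in practice, be computed by filtering DNS data.
import torch
import torch.nn as nn

class ClosureNet(nn.Module):
    """Hypothetical pointwise map from resolved gradients to subgrid stresses."""
    def __init__(self, n_inputs=9, n_outputs=6, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_outputs),
        )

    def forward(self, grad_u):
        return self.net(grad_u)

# Synthetic stand-ins for training pairs (filtered velocity gradients, "exact"
# subgrid stresses); real pairs would be extracted from filtered DNS fields.
n_samples = 10_000
grad_u = torch.randn(n_samples, 9)
tau_sgs = torch.randn(n_samples, 6)

model = ClosureNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(grad_u), tau_sgs)  # fit the closure to filtered-DNS targets
    loss.backward()
    optimizer.step()

In practice the trained closure is then embedded in the coarse-grid PDE solver and evaluated a posteriori, i.e., by comparing the resulting simulated statistics against held-out DNS data rather than only the pointwise regression error.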

Using our 2018-2019 Blue Waters allocations, we trained and evaluated our deep learning reduced-order PDE models on DNS datasets for the Navier–Stokes equations (without combustion). In out-of-sample testing, the deep learning models outperformed traditional Large Eddy Simulation models such as the Smagorinsky and Dynamic Smagorinsky models. The efficiency and scalability of our code have been studied using these previous allocations, and we are now ready for larger-scale computations and more physically complex target problems.
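For context, the Smagorinsky baseline closes the subgrid-scale stress \(\tau_{ij}\) from the filtered equation above with an eddy-viscosity model (standard formulas, stated here only to make the comparison concrete):

\[
\tau_{ij} - \tfrac{1}{3}\tau_{kk}\,\delta_{ij} = -2\,\nu_t\,\bar{S}_{ij},
\qquad
\nu_t = (C_s \Delta)^2\,|\bar{S}|,
\qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
\]

where \(\bar{S}_{ij}\) is the resolved strain-rate tensor, \(\Delta\) the filter width, and \(C_s\) the Smagorinsky constant, which is computed adaptively from the resolved field in the Dynamic Smagorinsky variant.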