Deep Reinforcement Learning for Large-Eddy Simulation Subgrid-Scale Modeling in Turbulent Channel Flow
ORAL
Abstract
The need for high-precision yet efficient simulation naturally leads to turbulence modeling, which is challenging because of the inevitable trade-off between accuracy and cost. Recently, artificial intelligence, mainly deep neural networks (DNNs), has been actively tested in the expectation of better performance than existing models. However, classical supervised-learning models require expensive data for training yet do not perform as well as expected. To overcome this, we adopted deep reinforcement learning (DRL), an online learning algorithm, for subgrid-scale (SGS) modeling in large-eddy simulation (LES) of turbulent channel flow. In our approach, DRL uses a reward function defined from the LES solution itself, so training can be carried out with only flow statistics as given information. We thereby trained a DNN model that produces local SGS stresses from resolved local velocity gradients under physical-invariance constraints. As a result, we found that, under several simulation conditions, it is possible to find an SGS model that accurately reproduces the target statistics, namely the mean velocity and mean Reynolds shear stress profiles, demonstrating the potential of DRL for turbulence modeling. We are now extending this approach to develop a general SGS model.
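To illustrate the idea of reward-driven SGS calibration described above, the following is a minimal toy sketch, not the authors' method: the actual work trains a DNN that maps resolved velocity gradients to local SGS stresses inside a running LES, whereas here a single model coefficient is tuned against a stand-in statistic. The names `toy_les_statistic` and `C_STAR` are hypothetical, and a simple evolution-strategies update replaces the full DRL algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" coefficient that makes the LES statistic match its
# target; purely illustrative, not a value from the paper.
C_STAR = 0.17

def toy_les_statistic(c):
    # Stand-in for running an LES with SGS coefficient c and measuring the
    # mismatch of a statistic (e.g. mean Reynolds shear stress) vs. target.
    return (c - C_STAR) ** 2

def reward(c):
    # Reward is highest when the simulated statistic matches the target,
    # mirroring the statistics-based reward described in the abstract.
    return -toy_les_statistic(c)

# Evolution-strategies update: perturb the parameter, score each perturbed
# simulation by its reward, and step along the reward-weighted direction.
c = 0.05                     # initial coefficient guess
lr, sigma, n_samples = 0.5, 0.02, 16
for step in range(200):
    eps = rng.normal(size=n_samples)
    rewards = np.array([reward(c + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    c += lr * sigma * np.mean(rewards * eps)

print(round(c, 3))           # should settle near C_STAR
```

The point of the sketch is only the training signal: no reference SGS-stress data is ever used, and the model is judged solely by how well the resulting statistics match their targets, which is what distinguishes this online approach from supervised learning on filtered DNS data.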
Presenters
- Junhyuk Kim, Yonsei University

Authors
- Junhyuk Kim, Yonsei University
- Hyojin Kim, Yonsei University
- Jiyeon Kim, Yonsei University
- Changhoon Lee, Yonsei University