Turbulence Model Development based on a Novel Method Combining Gene Expression Programming and Artificial Neural Network

ORAL

Abstract

Data-driven methods have been widely used to develop physical models. Compared with deep learning methods, which usually yield "black-box" models, evolutionary algorithms such as gene expression programming (GEP) aim to find explicit model equations via symbolic regression. However, the optimization process in GEP typically converges slowly and has difficulty finding accurate model coefficients. Combining the global search capability of GEP with the gradient-based optimization of neural networks (NNs), we propose a novel method called the gene expression programming neural network (GEPNN). In each GEPNN training iteration, candidate models are first evolved within the GEP framework; selected GEP models are then expressed and optimized as NNs, after which they are transformed back into the GEP framework for the next iteration. The method is first tested on recovering different physical laws, showing that GEPNN converges quickly to models with precise constant coefficients. Furthermore, GEPNN is applied to model the subgrid-scale stress for large-eddy simulation of turbulence. The resulting GEPNN model shows significant improvements in predicting turbulence statistics and flow structures in a posteriori tests.
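To make the alternating structure of the GEPNN iteration concrete, the following is a minimal Python sketch of the idea, not the authors' implementation: a global search over expression structures is paired with gradient-based refinement of each candidate's constant coefficients. The toy target law, the enumerated structure library (standing in for a real GEP population evolved by selection, mutation, and crossover), and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target law to rediscover (an assumed stand-in): y = 2.5*x1*x2 + 0.7
x = rng.uniform(0.5, 2.0, size=(256, 2))
y = 2.5 * x[:, 0] * x[:, 1] + 0.7

# Candidate structures with free coefficients (a, b); each is affine in (a, b).
structures = {
    "a*x1 + b":    lambda x, a, b: a * x[:, 0] + b,
    "a*x1*x2 + b": lambda x, a, b: a * x[:, 0] * x[:, 1] + b,
    "a*x1**2 + b": lambda x, a, b: a * x[:, 0] ** 2 + b,
}

def mse(f, a, b):
    return float(np.mean((f(x, a, b) - y) ** 2))

def refine_coefficients(f, a, b, lr=0.1, steps=500):
    # "NN" half of the loop: treat the expression as a tiny differentiable
    # model and refine its constants by gradient descent on the MSE.
    g = f(x, 1.0, 0.0) - f(x, 0.0, 0.0)  # d(pred)/da, since pred = a*g + b here
    for _ in range(steps):
        resid = f(x, a, b) - y
        a -= lr * 2.0 * np.mean(resid * g)
        b -= lr * 2.0 * np.mean(resid)
    return a, b

best = None
for name, f in structures.items():
    # "GEP" half: here a simple enumeration over a fixed structure library;
    # a real GEP would evolve expression trees by selection, mutation, crossover.
    a, b = refine_coefficients(f, rng.normal(), rng.normal())
    err = mse(f, a, b)
    if best is None or err < best[0]:
        best = (err, name, a, b)

print(f"best structure: {best[1]}  a={best[2]:.3f}  b={best[3]:.3f}  mse={best[0]:.2e}")
```

The sketch selects the product structure with a ≈ 2.5 and b ≈ 0.7, illustrating the division of labor the abstract describes: evolutionary search handles the discrete space of expression structures, while gradient descent resolves the coefficient-accuracy problem that pure GEP struggles with.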

Presenters

  • Haochen Li

    Peking University

Authors

  • Haochen Li

    Peking University

  • Yaomin Zhao

    Peking University