Image Denoising for Acoustically Coupled Combustion using Neural Networks and a Convolutional Autoencoder
ORAL
Abstract
The present study investigates the use of a convolutional autoencoder for dimensionality reduction of a coaxial methane-air laminar jet diffusion flame exposed to transverse forcing within a cylindrical acoustic waveguide. Proper Orthogonal Decomposition (POD) applied to high-speed imaging of the flame extracts spatial modes of the oscillating flame and serves as a baseline for comparison with a nonlinear mode-decomposing convolutional neural network autoencoder [Murata et al., JFM, 2020]. Through its nonlinear activation functions, the autoencoder decomposes the flow field into nonlinear low-dimensional modes, in contrast to POD's linear orthogonal basis of modes. We apply this analysis to imaging data denoised using a multiscale Context Aggregation Network (CAN) [Chen et al., IEEE Conf. Comp. Vis., 2017]. The CAN is trained on low-exposure input images acquired under steady conditions, with corresponding high-exposure images as the desired output, and can then denoise images for both steady and unsteady cases. The CAN approach significantly reduces preprocessing time compared with conventional approaches and processes images directly across a range of experimental conditions and flame dynamics while preserving image quality.
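As a rough illustration of the two decomposition approaches described above, the sketch below computes POD spatial modes of an image sequence via SVD and defines a small convolutional autoencoder whose nonlinear (tanh) activations yield nonlinear low-dimensional modes. This is a minimal sketch under stated assumptions, not the authors' implementation: the 64x64 frame size, layer widths, latent dimension, and all names are illustrative.

```python
# Minimal sketch (not the authors' code) of the two decompositions in the
# abstract: linear POD via SVD and a small convolutional autoencoder whose
# nonlinear activations produce nonlinear low-dimensional modes.
# Frame size (64x64), layer widths, and the latent dimension are assumptions.
import numpy as np
import torch
import torch.nn as nn


def pod_modes(frames, n_modes=4):
    """POD of a grayscale image sequence, frames shape (n_frames, 64, 64)."""
    n, h, w = frames.shape
    X = frames.reshape(n, -1).T                  # snapshot matrix (pixels x frames)
    X = X - X.mean(axis=1, keepdims=True)        # remove the temporal mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    spatial_modes = U[:, :n_modes].T.reshape(n_modes, h, w)
    temporal_coeffs = (np.diag(s[:n_modes]) @ Vt[:n_modes, :]).T
    return spatial_modes, temporal_coeffs


class ConvAutoencoder(nn.Module):
    """Small CNN autoencoder; tanh activations make the latent modes nonlinear."""

    def __init__(self, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.Tanh(),    # 64 -> 32
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.Tanh(),   # 32 -> 16
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, latent_dim),                   # latent modes
        )
        self.decoder_fc = nn.Sequential(nn.Linear(latent_dim, 16 * 16 * 16), nn.Tanh())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.Tanh(),  # 16 -> 32
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),              # 32 -> 64
        )

    def forward(self, x):                         # x: (batch, 1, 64, 64)
        z = self.encoder(x)                       # low-dimensional representation
        h = self.decoder_fc(z).view(-1, 16, 16, 16)
        return self.decoder(h), z


if __name__ == "__main__":
    frames = np.random.rand(200, 64, 64).astype(np.float32)  # stand-in for flame images
    modes, coeffs = pod_modes(frames)

    model = ConvAutoencoder(latent_dim=4)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    batch = torch.from_numpy(frames[:32]).unsqueeze(1)        # (32, 1, 64, 64)
    opt.zero_grad()
    recon, z = model(batch)
    loss = nn.functional.mse_loss(recon, batch)               # reconstruction loss
    loss.backward()
    opt.step()
```

In the workflow described in the abstract, the CAN denoising step would be applied to the raw high-speed images before either decomposition is performed.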
Presenters
- Arin Hayrapetyan, University of California, Los Angeles
Authors
- Arin Hayrapetyan, University of California, Los Angeles
- Andres Vargas, University of California, Los Angeles
- Ann R Karagozian, University of California, Los Angeles