Visual AnemomeTree: using deep learning to predict wind speeds from video clips of swaying trees in nature.
ORAL
Abstract
Detailed mapping of the wind is necessary for a variety of engineering applications such as weather forecasting, air pollution dispersion models, and wind turbine siting. Wind measurements in the field are commonly taken using expensive remote sensing instrumentation or less expensive point anemometers that become prohibitively costly when scaled to full-field coverage. Alternatively, the use of preexisting objects in the flow could improve the balance between cost and scalability. This work uses deep learning to extract wind speeds from videos of swaying trees collected using a drone. To accomplish this, we generated a dataset of preprocessed images that depict statistical measures of the flow, so that each data point has temporal information encoded in it. The generalizability of the model to a different tree species will be discussed. These curated statistical inputs provide physical insight into the flow-structure interactions, which can assist in further generalizing the model and improve the physical understanding of flow-structure interactions.
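The abstract does not specify the exact preprocessing, but one common way to encode a clip's temporal information in a single image is to compute per-pixel statistics over its frames. The sketch below is illustrative only; the function name and the choice of mean, standard deviation, and peak-to-peak channels are assumptions, not the authors' method:

```python
import numpy as np

def temporal_statistics_image(frames: np.ndarray) -> np.ndarray:
    """Collapse a grayscale clip of shape (T, H, W) into a
    3-channel 'statistics image' of shape (H, W, 3) encoding
    per-pixel temporal mean, standard deviation, and
    peak-to-peak range (a crude proxy for sway amplitude)."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    ptp = frames.max(axis=0) - frames.min(axis=0)
    return np.stack([mean, std, ptp], axis=-1)

# Synthetic example: a 64-frame, 32x32 clip in which a central
# patch oscillates in intensity, mimicking a swaying canopy.
rng = np.random.default_rng(0)
t = np.arange(64)
clip = np.zeros((64, 32, 32))
clip[:, 8:24, 8:24] = 0.5 + 0.3 * np.sin(0.4 * t)[:, None, None]
clip += 0.01 * rng.standard_normal(clip.shape)

stats_img = temporal_statistics_image(clip)
print(stats_img.shape)  # (32, 32, 3)
```

In a statistics image like this, the oscillating (swaying) region shows a much larger temporal standard deviation than the static background, which is the kind of spatially resolved motion signal a downstream network could map to wind speed.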
Presenters
-
Roni Goldshmid
Caltech
Authors
-
Roni Goldshmid
Caltech
-
John O Dabiri
California Institute of Technology (Caltech)