Optimizing wave energy converter performance using model-free reinforcement learning algorithms
ORAL
Abstract
Several critical challenges in ocean wave energy harvesting can be addressed by designing a robust controller that optimizes wave energy converter (WEC) performance under changing sea states and changing WEC dynamics over its lifespan. In contrast to well-established model-based controllers, this can be achieved using model-free reinforcement learning (RL) techniques. We present the optimization of a cylindrical point absorber WEC device using deep Q-network (DQN) and double DQN (DDQN) controls. These RL algorithms use deep neural networks (DNNs) as function approximators to determine the optimal control action that maximizes power absorption. The RL agent trains the DNN using experiences generated by interacting with the WEC environment. The environment is simulated using a linear dynamical model of the device, derived from linear potential theory (LPT). Multiple independent environments are simulated in parallel using the message passing interface (MPI) to generate experiences and train the agent faster. Once trained, the agent drives the device to optimal performance. Next, the RL agent is employed in computational fluid dynamics (CFD) based WEC simulations that fully resolve the nonlinear wave-structure interaction (WSI) phenomenon and produce device dynamics closer to reality.
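As a rough illustration of the double-DQN control loop described above, the sketch below trains an agent to pick a discrete power-take-off (PTO) damping level for a toy point-absorber model. This is not the authors' implementation: the mass-spring-damper environment, sinusoidal excitation force, network sizes, and reward (energy absorbed per time step) are illustrative assumptions standing in for the LPT-based environment, and the MPI-parallel training is omitted.

```python
# Minimal double-DQN sketch for a toy WEC (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

class ToyWEC:
    """Mass-spring-damper buoy driven by a sinusoidal wave excitation force."""
    m, k, dt = 1.0, 5.0, 0.05                    # mass, hydrostatic stiffness, time step
    actions = np.array([0.0, 0.5, 1.0, 2.0])     # candidate PTO damping coefficients

    def reset(self):
        self.t, self.x, self.v = 0.0, 0.0, 0.0
        return self._state()

    def _state(self):
        f_exc = np.sin(1.5 * self.t)             # wave excitation force (assumed form)
        return np.array([self.x, self.v, f_exc])

    def step(self, a_idx):
        c = self.actions[a_idx]
        f_exc = np.sin(1.5 * self.t)
        acc = (f_exc - self.k * self.x - c * self.v) / self.m
        self.v += acc * self.dt
        self.x += self.v * self.dt
        self.t += self.dt
        reward = c * self.v ** 2 * self.dt       # energy absorbed by the PTO this step
        return self._state(), reward

class QNet:
    """Tiny two-layer Q-network with manual gradients (stand-in for a DNN)."""
    def __init__(self, n_in, n_hid, n_out):
        self.W1 = rng.normal(0, 0.3, (n_in, n_hid)); self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0, 0.3, (n_hid, n_out)); self.b2 = np.zeros(n_out)

    def forward(self, s):
        self.h = np.tanh(s @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def update(self, s, a_idx, td_err, lr=1e-2):
        # SGD step on 0.5 * td_err**2 w.r.t. the chosen action's Q-value
        dh = td_err * self.W2[:, a_idx] * (1 - self.h ** 2)   # backprop through tanh
        self.W2[:, a_idx] -= lr * td_err * self.h
        self.b2[a_idx] -= lr * td_err
        self.W1 -= lr * np.outer(s, dh)
        self.b1 -= lr * dh

    def copy_from(self, other):
        self.W1, self.b1 = other.W1.copy(), other.b1.copy()
        self.W2, self.b2 = other.W2.copy(), other.b2.copy()

env = ToyWEC()
online, target = QNet(3, 32, 4), QNet(3, 32, 4)
target.copy_from(online)
buffer, gamma, eps = [], 0.99, 0.2

s = env.reset()
for step in range(20000):
    # epsilon-greedy action selection
    a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(online.forward(s)))
    s2, r = env.step(a)
    buffer.append((s, a, r, s2))
    s = s2
    if len(buffer) > 64:
        si, ai, ri, s2i = buffer[rng.integers(len(buffer))]   # sample a stored transition
        a_star = int(np.argmax(online.forward(s2i)))          # double DQN: online net selects
        y = ri + gamma * target.forward(s2i)[a_star]          # ...target net evaluates
        td = online.forward(si)[ai] - y
        online.update(si, ai, td)
    if step % 500 == 0:
        target.copy_from(online)                              # periodic target-network sync
```

In a fuller setup, each MPI rank would run its own copy of the environment to fill a shared replay buffer, which is what lets the agent train faster on many independently generated experiences.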
Presenters
- Kaustubh M Khedkar, San Diego State University

Authors
- Kaustubh M Khedkar, San Diego State University
- Amneet Pal S Bhalla, San Diego State University