TY - JOUR
T1 - Applying deep reinforcement learning to active flow control in weakly turbulent conditions
AU - Ren, Feng
AU - Rabault, Jean
AU - Tang, Hui
N1 - Funding Information:
Feng Ren and Hui Tang gratefully acknowledge financial support from the Research Grants Council of Hong Kong under the General Research Fund (Project Nos. 15249316 and 15214418). Jean Rabault acknowledges funding obtained through the Petromaks II project (Grant No. 280625).
Publisher Copyright:
© 2021 Author(s).
PY - 2021/3/1
Y1 - 2021/3/1
N2 - Machine learning has recently become a promising technique in fluid mechanics, especially for active flow control (AFC) applications. A recent study [Rabault et al., J. Fluid Mech. 865, 281-302 (2019)] demonstrated the feasibility and effectiveness of deep reinforcement learning (DRL) in performing AFC over a circular cylinder at Re = 100, i.e., in the laminar flow regime. As a follow-up study, we investigate the same AFC problem at an intermediate Reynolds number, i.e., Re = 1000, where the weak turbulence in the flow poses great challenges to the control. The results show that the DRL agent can still find effective control strategies, but requires many more episodes to learn. A remarkable drag reduction of around 30% is achieved, accompanied by an elongation of the recirculation bubble and a reduction of turbulent fluctuations in the cylinder wake. Furthermore, we perform a sensitivity analysis on the learnt control strategies to explore the optimal layout of the sensor network. To the best of our knowledge, this study is the first successful application of DRL to AFC in weakly turbulent conditions. It therefore sets a new milestone in progressing toward AFC in strongly turbulent flows.
AB - Machine learning has recently become a promising technique in fluid mechanics, especially for active flow control (AFC) applications. A recent study [Rabault et al., J. Fluid Mech. 865, 281-302 (2019)] demonstrated the feasibility and effectiveness of deep reinforcement learning (DRL) in performing AFC over a circular cylinder at Re = 100, i.e., in the laminar flow regime. As a follow-up study, we investigate the same AFC problem at an intermediate Reynolds number, i.e., Re = 1000, where the weak turbulence in the flow poses great challenges to the control. The results show that the DRL agent can still find effective control strategies, but requires many more episodes to learn. A remarkable drag reduction of around 30% is achieved, accompanied by an elongation of the recirculation bubble and a reduction of turbulent fluctuations in the cylinder wake. Furthermore, we perform a sensitivity analysis on the learnt control strategies to explore the optimal layout of the sensor network. To the best of our knowledge, this study is the first successful application of DRL to AFC in weakly turbulent conditions. It therefore sets a new milestone in progressing toward AFC in strongly turbulent flows.
UR - http://www.scopus.com/inward/record.url?scp=85103233373&partnerID=8YFLogxK
U2 - 10.1063/5.0037371
DO - 10.1063/5.0037371
M3 - Journal article
AN - SCOPUS:85103233373
SN - 1070-6631
VL - 33
JO - Physics of Fluids
JF - Physics of Fluids
IS - 3
M1 - 037121
ER -