Applying deep reinforcement learning to active flow control in weakly turbulent conditions

Feng Ren, Jean Rabault, Hui Tang

Research output: Journal article publication › Journal article › Academic research › peer-review

3 Citations (Scopus)

Abstract

Machine learning has recently become a promising technique in fluid mechanics, especially for active flow control (AFC) applications. A recent work [Rabault et al., J. Fluid Mech. 865, 281-302 (2019)] demonstrated the feasibility and effectiveness of deep reinforcement learning (DRL) in performing AFC over a circular cylinder at Re = 100, i.e., in the laminar flow regime. As a follow-up study, we investigate the same AFC problem at an intermediate Reynolds number, Re = 1000, where the weak turbulence in the flow poses great challenges to the control. The results show that the DRL agent can still find effective control strategies, but requires many more episodes during learning. A remarkable drag reduction of around 30% is achieved, accompanied by elongation of the recirculation bubble and reduction of turbulent fluctuations in the cylinder wake. Furthermore, we perform a sensitivity analysis on the learnt control strategies to explore the optimal layout of the sensor network. To the best of our knowledge, this study is the first successful application of DRL to AFC in weakly turbulent conditions. It therefore sets a new milestone in progressing toward AFC in strongly turbulent flows.
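The agent–environment interaction underlying such DRL-based AFC can be sketched in a few lines. The sketch below is purely illustrative: `MockCylinderFlow`, its sensor count, and the quadratic drag penalty are hypothetical placeholders standing in for the actual CFD solver coupling and reward used in the paper, which are not reproduced here.

```python
import random


class MockCylinderFlow:
    """Toy stand-in for the CFD environment (hypothetical placeholder;
    the study couples the DRL agent to a flow solver, not shown here)."""

    def __init__(self, n_sensors=8, seed=0):
        self.rng = random.Random(seed)
        self.n_sensors = n_sensors

    def reset(self):
        # Observation: readings from pressure probes around/behind the cylinder.
        return [self.rng.uniform(-1.0, 1.0) for _ in range(self.n_sensors)]

    def step(self, jet_rate):
        # Reward penalizes drag; here an arbitrary quadratic in the jet
        # mass-flow rate plus noise, standing in for the solver's output.
        drag = 1.0 + jet_rate ** 2 + 0.1 * self.rng.random()
        reward = -drag
        obs = [self.rng.uniform(-1.0, 1.0) for _ in range(self.n_sensors)]
        return obs, reward


def run_episode(env, policy, n_steps=50):
    """Roll out one control episode and accumulate the return."""
    obs = env.reset()
    total = 0.0
    for _ in range(n_steps):
        action = policy(obs)       # jet actuation chosen from sensor readings
        obs, reward = env.step(action)
        total += reward
    return total


env = MockCylinderFlow()
baseline = run_episode(env, policy=lambda obs: 0.0)  # zero-forcing baseline
print(f"baseline episode return: {baseline:.2f}")
```

In the actual study the fixed `policy` above is replaced by a neural network trained with a policy-gradient algorithm, updated after batches of such episodes; the abstract's observation that many more episodes are needed at Re = 1000 refers to this outer training loop.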

Original language: English
Article number: 037121
Journal: Physics of Fluids
Volume: 33
Issue number: 3
DOIs
Publication status: Published - 1 Mar 2021

ASJC Scopus subject areas

  • Computational Mechanics
  • Condensed Matter Physics
  • Mechanics of Materials
  • Mechanical Engineering
  • Fluid Flow and Transfer Processes
