Data efficient reinforcement learning and adaptive optimal perimeter control of network traffic dynamics

C. Chen, Y. P. Huang, W. H. K. Lam, T. L. Pan, S. C. Hsu, A. Sumalee, R. X. Zhong

Research output: Journal article publication › Journal article › Academic research › peer-review

Abstract

Existing data-driven and feedback traffic control strategies do not consider the heterogeneity of real-time data measurements. In addition, traditional reinforcement learning (RL) methods for traffic control usually converge slowly owing to their lack of data efficiency. Moreover, conventional optimal perimeter control schemes require exact knowledge of the system dynamics and are therefore fragile to endogenous uncertainties. To handle these challenges, this work proposes an integral reinforcement learning (IRL) based approach to learning the macroscopic traffic dynamics for adaptive optimal perimeter control. This work makes the following primary contributions to the transportation literature: (a) A continuous-time control is developed with discrete gain updates to adapt to discrete-time sensor data. Unlike conventional RL approaches, the reinforcement interval of the proposed IRL method can vary with the real-time resolution of the data measurements. Approximate optimization methods are employed to address the curse of dimensionality of the optimal control problem while accounting for the resolution of the data measurements. (b) To reduce the sampling complexity and use the available data more efficiently, the experience replay (ER) technique is introduced into the IRL algorithm. (c) The proposed method relaxes the requirement of model calibration in a "model-free" manner, which enables robustness against modeling uncertainty and enhances real-time performance via a data-driven RL algorithm. (d) The convergence of the IRL-based algorithms and the stability of the controlled traffic dynamics are proven via Lyapunov theory. The optimal control law is parameterized and then approximated by neural networks (NN), which moderates the computational complexity. Both state and input constraints are considered, while no model linearization is required. Numerical examples and simulation experiments are presented to verify the effectiveness and efficiency of the proposed method.
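The core IRL idea summarized above — evaluating a policy from integrated cost measurements rather than from a known model, then improving the gain — can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a hypothetical scalar linear plant dx/dt = a·x + b·u with quadratic cost, a fixed reinforcement interval T, and a simple batch buffer of recorded samples solved by least squares (a much-simplified stand-in for the paper's ER scheme; the buffer is re-collected per policy because the integral cost is policy-dependent). All variable names and numerical values are illustrative assumptions.

```python
import numpy as np

# Hypothetical scalar plant dx/dt = a*x + b*u, cost q*x^2 + r*u^2 (assumed values).
a, b, q, r = 1.0, 1.0, 1.0, 1.0
dt, T = 1e-3, 0.05          # Euler step and reinforcement interval

def collect(k, x0, n_samples):
    """Simulate u = -k*x and record integral Bellman samples.

    Each sample is (phi(x_t) - phi(x_{t+T}), integrated running cost), with the
    quadratic basis phi(x) = x^2, so V(x) = P*x^2 satisfies
    P*(x_t^2 - x_{t+T}^2) = integral of (q*x^2 + r*u^2) over [t, t+T]."""
    buffer = []
    x = x0
    for _ in range(n_samples):
        x_t, cost = x, 0.0
        for _ in range(int(T / dt)):
            u = -k * x
            cost += (q * x**2 + r * u**2) * dt   # accumulate running cost
            x += (a * x + b * u) * dt            # Euler step of the plant
        buffer.append((x_t**2 - x**2, cost))
    return buffer

k = 2.0                       # initial stabilizing gain (a - b*k < 0)
for _ in range(10):
    batch = collect(k, x0=1.0, n_samples=40)     # recorded experience for this policy
    Phi = np.array([[d] for d, _ in batch])      # regression matrix of basis differences
    c = np.array([cost for _, cost in batch])    # integrated costs
    P = np.linalg.lstsq(Phi, c, rcond=None)[0][0]  # policy evaluation (model-free)
    k = b * P / r                                  # policy improvement

# k converges toward the Riccati solution 1 + sqrt(2) ≈ 2.414 (up to discretization error)
```

Note that the plant parameters a and b appear only in the simulator, not in the learning update: the policy evaluation uses measured states and integrated costs alone, which is what makes the scheme "model-free" in the sense of the abstract. The batch least squares over recorded samples stands in for how ER reuses stored data to reduce sampling complexity.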

Original language: English
Article number: 103759
Journal: Transportation Research Part C: Emerging Technologies
Volume: 142
DOIs
Publication status: Published - Sep 2022

Keywords

  • Adaptive optimal perimeter control
  • Closed-loop stability
  • Experience replay
  • Heterogeneous data resolution
  • Integral reinforcement learning
  • Macroscopic fundamental diagram

ASJC Scopus subject areas

  • Civil and Structural Engineering
  • Automotive Engineering
  • Transportation
  • Management Science and Operations Research
