TY - GEN
T1 - Digital twin-enabled reinforcement learning for end-to-end autonomous driving
AU - Wu, Jingda
AU - Huang, Zhiyu
AU - Hang, Peng
AU - Huang, Chao
AU - De Boer, Niels
AU - Lv, Chen
N1 - Funding Information:
This work was supported in part by A*STAR National Robotics Programme (No. SERC 1922500046), and in part by A*STAR AME Young Individual Research Grant (No. A2084c0156).
Publisher Copyright:
© 2021 IEEE.
PY - 2021/7/15
Y1 - 2021/7/15
N2 - A digital twin maps a physical plant to a real-time digital representation and facilitates product design and decision-making processes. In this paper, we propose a novel digital twin-enabled reinforcement learning approach and apply it to an autonomous driving scenario. To improve the data efficiency of reinforcement learning, which often requires a large number of agent-environment interactions during training, we propose a digital-twin environment model that can predict the transition dynamics of the physical driving scene. Moreover, we propose a rollout prediction-compatible reinforcement learning framework that further improves training efficiency. The proposed framework is validated in an autonomous driving task with a focus on lateral motion control. Simulation results show that our method significantly speeds up the learning process and that the resulting driving policy achieves better performance than the conventional reinforcement learning approach, demonstrating the feasibility and effectiveness of the proposed digital-twin-enabled reinforcement learning method.
AB - A digital twin maps a physical plant to a real-time digital representation and facilitates product design and decision-making processes. In this paper, we propose a novel digital twin-enabled reinforcement learning approach and apply it to an autonomous driving scenario. To improve the data efficiency of reinforcement learning, which often requires a large number of agent-environment interactions during training, we propose a digital-twin environment model that can predict the transition dynamics of the physical driving scene. Moreover, we propose a rollout prediction-compatible reinforcement learning framework that further improves training efficiency. The proposed framework is validated in an autonomous driving task with a focus on lateral motion control. Simulation results show that our method significantly speeds up the learning process and that the resulting driving policy achieves better performance than the conventional reinforcement learning approach, demonstrating the feasibility and effectiveness of the proposed digital-twin-enabled reinforcement learning method.
KW - Autonomous driving
KW - End-to-end control
KW - Environment digital twin model
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85116124487&partnerID=8YFLogxK
U2 - 10.1109/DTPI52967.2021.9540179
DO - 10.1109/DTPI52967.2021.9540179
M3 - Conference article published in proceeding or book
AN - SCOPUS:85116124487
T3 - Proceedings 2021 IEEE 1st International Conference on Digital Twins and Parallel Intelligence, DTPI 2021
SP - 62
EP - 65
BT - Proceedings 2021 IEEE 1st International Conference on Digital Twins and Parallel Intelligence, DTPI 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 1st IEEE International Conference on Digital Twins and Parallel Intelligence, DTPI 2021
Y2 - 15 July 2021 through 15 August 2021
ER -