TY - JOUR
T1 - High-Speed Autonomous Drifting with Deep Reinforcement Learning
AU - Cai, Peide
AU - Mei, Xiaodong
AU - Tai, Lei
AU - Sun, Yuxiang
AU - Liu, Ming
N1 - Funding Information:
Manuscript received September 10, 2019; accepted January 2, 2020. Date of publication January 17, 2020; date of current version January 31, 2020. This letter was recommended for publication by Associate Editor H. Ryu and Editor Y. Choi upon evaluation of the reviewers’ comments. This work was supported in part by the National Natural Science Foundation of China (Grant U1713211) and in part by the Research Grant Council of Hong Kong SAR Government, China, under Projects 11210017 and 21202816. (Peide Cai and Xiaodong Mei contributed equally to this work.) (Corresponding author: Ming Liu.) P. Cai, X. Mei, Y. Sun, and M. Liu are with The Hong Kong University of Science and Technology, Hong Kong, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).
Publisher Copyright:
© 2020 IEEE.
PY - 2020/4
Y1 - 2020/4
N2 - Drifting is a complicated task for autonomous vehicle control. Most traditional methods in this area are based on motion equations derived from an understanding of vehicle dynamics, which is difficult to model precisely. We propose a robust drift controller without explicit motion equations, based on the recent model-free deep reinforcement learning algorithm Soft Actor-Critic. The drift control problem is formulated as a trajectory-following task, for which error-based states and rewards are designed. After being trained on tracks with different levels of difficulty, our controller is capable of making the vehicle drift through various sharp corners quickly and stably on unseen maps. The proposed controller is further shown to have excellent generalization ability, directly handling unseen vehicle types with different physical properties, such as mass and tire friction.
AB - Drifting is a complicated task for autonomous vehicle control. Most traditional methods in this area are based on motion equations derived from an understanding of vehicle dynamics, which is difficult to model precisely. We propose a robust drift controller without explicit motion equations, based on the recent model-free deep reinforcement learning algorithm Soft Actor-Critic. The drift control problem is formulated as a trajectory-following task, for which error-based states and rewards are designed. After being trained on tracks with different levels of difficulty, our controller is capable of making the vehicle drift through various sharp corners quickly and stably on unseen maps. The proposed controller is further shown to have excellent generalization ability, directly handling unseen vehicle types with different physical properties, such as mass and tire friction.
KW - Deep learning in robotics and automation
KW - deep reinforcement learning
KW - field robots
KW - motion control
KW - racing car
UR - http://www.scopus.com/inward/record.url?scp=85079664732&partnerID=8YFLogxK
U2 - 10.1109/LRA.2020.2967299
DO - 10.1109/LRA.2020.2967299
M3 - Journal article
AN - SCOPUS:85079664732
SN - 2377-3766
VL - 5
SP - 1247
EP - 1254
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 2
M1 - 8961997
ER -