TY - JOUR
T1 - End-to-end autonomous underwater vehicle path following control method based on improved soft actor–critic for deep space exploration
AU - Dong, Na
AU - Liu, Shoufu
AU - Ip, Andrew Wai Hung
AU - Yung, Kai Leung
AU - Gao, Zhongke
AU - Juan, Rongshun
AU - Wang, Yanhui
N1 - Publisher Copyright:
© 2025 Elsevier Inc.
PY - 2025/5
Y1 - 2025/5
N2 - The vast extraterrestrial ocean is becoming a hotspot for future deep space exploration of life. Since the autonomous underwater vehicle (AUV) has a large range of activities and great flexibility, it plays an important role in extraterrestrial ocean research. To solve problems in AUV path following tasks, such as high training cost and poor exploration ability, an end-to-end AUV path following control method based on an improved soft actor–critic (SAC) algorithm is designed in this paper, leveraging advances in deep reinforcement learning (DRL) to enhance performance and efficiency. The method uses sensor information to understand the environment and its own state, and outputs a policy that completes the adaptive action. Policies that account for long-term effects can be learned through continuous interaction with the environment, which helps to improve the adaptability and robustness of AUV control. A non-policy sampling method is designed to improve the utilization efficiency of experience transitions in the replay buffer, accelerate convergence, and enhance stability. A reward function based on the current position and heading angle of the AUV is designed to avoid sparse rewards that lead to slow or ineffective learning. In addition, a continuous action space is used instead of a discrete one to make real-time control of the AUV more accurate. Finally, the method is tested on the Gazebo simulation platform, and the results confirm that reinforcement learning is effective for AUV control and that the proposed method achieves faster and better following performance than traditional reinforcement learning methods.
AB - The vast extraterrestrial ocean is becoming a hotspot for future deep space exploration of life. Since the autonomous underwater vehicle (AUV) has a large range of activities and great flexibility, it plays an important role in extraterrestrial ocean research. To solve problems in AUV path following tasks, such as high training cost and poor exploration ability, an end-to-end AUV path following control method based on an improved soft actor–critic (SAC) algorithm is designed in this paper, leveraging advances in deep reinforcement learning (DRL) to enhance performance and efficiency. The method uses sensor information to understand the environment and its own state, and outputs a policy that completes the adaptive action. Policies that account for long-term effects can be learned through continuous interaction with the environment, which helps to improve the adaptability and robustness of AUV control. A non-policy sampling method is designed to improve the utilization efficiency of experience transitions in the replay buffer, accelerate convergence, and enhance stability. A reward function based on the current position and heading angle of the AUV is designed to avoid sparse rewards that lead to slow or ineffective learning. In addition, a continuous action space is used instead of a discrete one to make real-time control of the AUV more accurate. Finally, the method is tested on the Gazebo simulation platform, and the results confirm that reinforcement learning is effective for AUV control and that the proposed method achieves faster and better following performance than traditional reinforcement learning methods.
KW - Deep space exploration
KW - Autonomous underwater vehicle
KW - Deep reinforcement learning (DRL)
KW - Path following
KW - Intelligent control
UR - http://www.scopus.com/inward/record.url?scp=85218277117&partnerID=8YFLogxK
U2 - 10.1016/j.jii.2025.100792
DO - 10.1016/j.jii.2025.100792
M3 - Journal article
SN - 2452-414X
VL - 45
JO - Journal of Industrial Information Integration
JF - Journal of Industrial Information Integration
M1 - 100792
ER -