A digital twin-driven dynamic path planning approach for multiple automatic guided vehicles based on deep reinforcement learning

Qiangwei Bao, Pai Zheng, Sheng Dai

Research output: Journal article (peer-reviewed academic research)

2 Citations (Scopus)

Abstract

With the increasing demand for customization, mechanical manufacturing has gradually shifted toward flexible, mixed-line production, which poses new challenges to existing scheduling patterns. As an indispensable part of production, logistics establishes the connections among the various pieces of production equipment and processes. Meanwhile, the advancement of digital twin theory offers an application schema for logistics systems. However, real-time dispatching and path planning of logistics equipment remain deficient, because algorithm efficiency is difficult to guarantee in complex scenes. To fill this gap, a digital twin-driven dynamic path planning approach for multiple automatic guided vehicles (AGVs) is proposed. Firstly, the AGVs are virtualized as the major components of the logistics system, and the ontology expression of logistics tasks is accomplished consistently as well. Secondly, a digital twin-driven application framework for multi-AGV dispatching is established. Moreover, a dynamic path planning method for AGVs based on deep reinforcement learning is implemented; a segmented path planning method that accounts for potential route conflicts is illustrated, which is regarded as the key contribution of the presented research. Finally, a case study demonstrates the entire process of multi-vehicle path planning and conflict resolution.
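To make the reinforcement-learning path planning idea concrete, the following is a minimal single-agent sketch: tabular Q-learning on a small grid map of a workshop floor, learning a collision-free route from a start cell to a goal cell. This is an illustrative simplification only, not the paper's method: the authors use deep reinforcement learning with multiple AGVs and segmented replanning, whereas the grid size, obstacle layout, reward values, and single-AGV setting below are all assumptions made for the sketch.

```python
import random

# Hypothetical 5x5 workshop grid; blocked cells stand in for machines/shelves.
GRID_W, GRID_H = 5, 5
OBSTACLES = {(1, 1), (2, 1), (3, 3)}
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # four-connected moves

def step(state, action):
    """One environment transition: returns (next_state, reward, done)."""
    nx, ny = state[0] + action[0], state[1] + action[1]
    if not (0 <= nx < GRID_W and 0 <= ny < GRID_H) or (nx, ny) in OBSTACLES:
        return state, -5.0, False          # bump into wall/obstacle: stay put
    if (nx, ny) == GOAL:
        return (nx, ny), 100.0, True       # goal reached
    return (nx, ny), -1.0, False           # step cost favours short paths

def train(episodes=3000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning; q maps (state, action_index) -> value."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        s = START
        for _ in range(100):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: q.get((s, i), 0.0))
            s2, r, done = step(s, ACTIONS[a])
            best_next = max(q.get((s2, i), 0.0) for i in range(4))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
            if done:
                break
    return q

def plan(q):
    """Greedy rollout of the learned policy from START to GOAL."""
    path, s = [START], START
    for _ in range(50):
        a = max(range(4), key=lambda i: q.get((s, i), 0.0))
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path
```

In a multi-AGV setting such as the paper's, each vehicle's segment of route would additionally be checked against the others' reserved cells and replanned on conflict; that conflict-resolution layer is omitted here.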

Keywords

  • automatic guided vehicle
  • deep reinforcement learning
  • digital twin
  • path planning
  • workshop logistics

ASJC Scopus subject areas

  • Mechanical Engineering
  • Industrial and Manufacturing Engineering

