A vision-language-guided and deep reinforcement learning-enabled approach for unstructured human-robot collaborative manufacturing task fulfilment

Pai Zheng, Chengxi Li, Junming Fan, Lihui Wang

Research output: Journal article publication › Journal article › Academic research › peer-review

1 Citation (Scopus)

Abstract

Human-Robot Collaboration (HRC) has emerged as a pivotal paradigm in contemporary human-centric smart manufacturing. However, fulfilling HRC tasks in unstructured scenes poses many challenges. In this work, a mixed reality head-mounted display is adopted as an effective interface for data collection, communication, and state representation in HRC task settings. By integrating vision-language cues with a large language model, a vision-language-guided HRC task planning approach is first proposed. Then, a deep reinforcement learning-enabled mobile manipulator motion control policy is generated to fulfil HRC task primitives. The feasibility of the approach is demonstrated in several unstructured HRC manufacturing tasks, with comparative results.
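The abstract outlines a two-stage pipeline: a language-model planner grounds vision-language scene cues into task primitives, and a learned control policy executes each primitive. The following Python sketch illustrates only that planning-then-control flow; `TaskPrimitive`, `query_llm_planner`, `DRLPolicy`, and `fulfil_task` are hypothetical stand-ins, not the paper's implementation.

```python
# Illustrative sketch only: every interface below is a hypothetical
# stand-in for the pipeline described in the abstract.
from dataclasses import dataclass


@dataclass
class TaskPrimitive:
    """One atomic HRC step produced by the language planner."""
    action: str  # e.g. "pick", "place", "handover"
    target: str  # object named in the vision-language scene cues


def query_llm_planner(scene_cues: list[str], instruction: str) -> list[TaskPrimitive]:
    """Hypothetical stand-in for an LLM call that grounds vision-language
    scene cues plus a human instruction into an ordered primitive list."""
    # A real system would prompt a large language model here;
    # this stub hard-codes a plausible two-step plan.
    return [
        TaskPrimitive("pick", scene_cues[0]),
        TaskPrimitive("handover", "operator"),
    ]


class DRLPolicy:
    """Hypothetical stand-in for a trained mobile-manipulator control policy."""

    def execute(self, primitive: TaskPrimitive) -> bool:
        # A real policy would map robot state and the primitive to motor
        # commands; the stub just reports what it would do.
        print(f"executing {primitive.action} -> {primitive.target}")
        return True


def fulfil_task(scene_cues: list[str], instruction: str) -> None:
    """Plan with the LLM, then hand each primitive to the DRL controller."""
    policy = DRLPolicy()
    for primitive in query_llm_planner(scene_cues, instruction):
        if not policy.execute(primitive):
            break  # on failure, a real system would replan or ask the operator


fulfil_task(["metal bracket on workbench"], "hand me the bracket")
```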

Original language: English
Pages (from-to): 341-344
Number of pages: 4
Journal: CIRP Annals
Volume: 73
Issue number: 1
DOIs
Publication status: Published - Jul 2024

Keywords

  • Human-guided robot learning
  • Human-robot collaboration
  • Manufacturing system

ASJC Scopus subject areas

  • Mechanical Engineering
  • Industrial and Manufacturing Engineering
