Abstract
Human-Robot Collaboration (HRC) has emerged as a pivotal paradigm in contemporary human-centric smart manufacturing. However, fulfilling HRC tasks in unstructured scenes presents many challenges to be overcome. In this work, a mixed reality head-mounted display is modelled as an effective interface for data collection, communication, and state representation in HRC task settings. By integrating vision-language cues with a large language model, a vision-language-guided HRC task planning approach is first proposed. Then, a deep reinforcement learning-enabled mobile manipulator motion control policy is generated to fulfil HRC task primitives. Its feasibility is demonstrated in several unstructured HRC manufacturing tasks with comparative results.
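To make the planning step concrete, the sketch below illustrates one plausible way vision-language cues (detected scene objects plus the operator's instruction) could be composed into a prompt for a large language model that returns robot-executable task primitives. This is a minimal illustration, not the authors' implementation; the primitive vocabulary, prompt wording, and `plan_hrc_task`/`llm` names are assumptions for demonstration only.

```python
# Minimal sketch (assumed design, not the paper's code): combine vision-language
# cues with an LLM to decompose an HRC task into robot-executable primitives.
import json
from typing import Callable, List


def plan_hrc_task(scene_objects: List[str],
                  human_instruction: str,
                  llm: Callable[[str], str]) -> List[dict]:
    """Compose a planning prompt from detected objects and the operator's
    instruction, query an LLM callable, and parse the returned primitives."""
    prompt = (
        "You are a task planner for a mobile manipulator in a human-robot "
        "collaboration cell.\n"
        f"Detected objects: {', '.join(scene_objects)}\n"
        f"Operator instruction: {human_instruction}\n"
        "Respond with a JSON list of primitives, each as "
        '{"action": "<move_to|grasp|place|handover>", "target": "<object>"}.'
    )
    return json.loads(llm(prompt))


# Usage with a stubbed LLM response; a real system would call an LLM service here.
if __name__ == "__main__":
    stub_llm = lambda _: ('[{"action": "grasp", "target": "bolt"}, '
                          '{"action": "handover", "target": "bolt"}]')
    for primitive in plan_hrc_task(["bolt", "wrench"], "hand me the bolt", stub_llm):
        print(primitive)
```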
| Original language | English |
|---|---|
| Pages (from-to) | 341-344 |
| Number of pages | 4 |
| Journal | CIRP Annals |
| Volume | 73 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Jul 2024 |
Keywords
- Human-guided robot learning
- Human-robot collaboration
- Manufacturing system
ASJC Scopus subject areas
- Mechanical Engineering
- Industrial and Manufacturing Engineering