Abstract
Effective human-robot collaborative assembly (HRCA) demands robots with advanced skill learning and communication capabilities. To address this challenge, this paper proposes a large language model (LLM)-enabled, human demonstration-assisted hybrid robot skill synthesis approach, facilitated via a mixed reality (MR) interface. Our key innovation lies in fine-tuning LLMs to directly translate human language instructions into reward functions, which guide a deep reinforcement learning (DRL) module to autonomously generate robot-executable actions. Furthermore, human demonstrations are intuitively tracked via MR, enabling more adaptive and efficient hybrid skill learning. Finally, the effectiveness of the proposed approach has been demonstrated through multiple HRCA tasks.
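The pipeline the abstract describes (language instruction → LLM-generated reward function → DRL policy learning) can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the toy peg-in-hole state, and the idea of the LLM emitting the reward as Python source are all assumptions made for illustration.

```python
# Hypothetical sketch: a fine-tuned LLM emits a reward function as source code,
# which a DRL module would then maximize. All names here are illustrative.

def llm_generate_reward(instruction: str) -> str:
    """Stand-in for the fine-tuned LLM: maps an instruction to reward code."""
    # e.g. for the instruction "move the peg close to the hole"
    return (
        "def reward(state):\n"
        "    # negative peg-to-hole distance, so closer states score higher\n"
        "    return -abs(state['peg_x'] - state['hole_x'])\n"
    )

def compile_reward(src: str):
    """Turn the generated source into a callable (would be sandboxed in practice)."""
    namespace = {}
    exec(src, namespace)
    return namespace["reward"]

reward = compile_reward(llm_generate_reward("move the peg close to the hole"))

# A DRL agent would optimize this signal over trajectories; here we just
# check that it ranks a nearer state above a farther one.
near = {"peg_x": 0.9, "hole_x": 1.0}
far = {"peg_x": 0.2, "hole_x": 1.0}
assert reward(near) > reward(far)
```

The key design point mirrored here is that the LLM's output is executable code rather than free text, so the reward signal can be plugged directly into the DRL loop without manual reward engineering.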
| Original language | English |
|---|---|
| Number of pages | 5 |
| Journal | CIRP Annals |
| Publication status | E-pub ahead of print - 18 Apr 2025 |
Keywords
- Human-robot collaboration
- Human-guided robot learning
- Manufacturing system
ASJC Scopus subject areas
- Mechanical Engineering
- Industrial and Manufacturing Engineering