Abstract
Effective human-robot collaborative assembly (HRCA) demands robots with advanced skill learning and communication capabilities. To address this challenge, this paper proposes a large language model (LLM)-enabled, human demonstration-assisted hybrid robot skill synthesis approach, facilitated via a mixed reality (MR) interface. Our key innovation lies in fine-tuning LLMs to directly translate human language instructions into reward functions, which guide a deep reinforcement learning (DRL) module to autonomously generate robot-executable actions. Furthermore, human demonstrations are intuitively tracked via MR, enabling more adaptive and efficient hybrid skill learning. Finally, the effectiveness of the proposed approach is demonstrated through multiple HRCA tasks.
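The core pipeline the abstract describes, an LLM that turns a human instruction into a reward function, which then drives a reinforcement learning update loop, can be sketched minimally as follows. Everything here is an illustrative assumption, not the authors' implementation: `llm_to_reward` is a stub standing in for the fine-tuned LLM, the one-dimensional "assembly step" task replaces the real HRCA environment, and tabular Q-learning replaces the paper's DRL module.

```python
import random

def llm_to_reward(instruction: str):
    """Stub for the fine-tuned LLM: maps a human instruction to a reward
    function. A real system would generate this mapping from the text."""
    goal = 4  # assumed: "reach the final assembly step" -> goal state 4
    def reward(state: int, action: int, next_state: int) -> float:
        return 1.0 if next_state == goal else -0.01  # small step penalty
    return reward

def train(reward_fn, n_states=5, n_actions=2, episodes=500,
          alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning stand-in for the DRL module: action 1 advances
    one assembly step, action 0 moves back one step."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: q[s][i])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = reward_fn(s, a, s2)
            # standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    # return the greedy policy per state
    return [max(range(n_actions), key=lambda i: q[s][i]) for s in range(n_states)]

policy = train(llm_to_reward("reach the final assembly step"))
print(policy)  # greedy actions; "advance" (1) should dominate in states 0-3
```

Under this reward, the learned greedy policy advances through every non-terminal state, illustrating how a language-derived reward alone can shape the resulting behaviour.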
| Original language | English |
|---|---|
| Pages (from-to) | 1-5 |
| Number of pages | 5 |
| Journal | CIRP Annals |
| Volume | 74 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 18 Apr 2025 |
Keywords
- Human-robot collaboration
- human-guided robot learning
- manufacturing system
ASJC Scopus subject areas
- Mechanical Engineering
- Industrial and Manufacturing Engineering