An LLM-enabled human demonstration-assisted hybrid robot skill synthesis approach for human-robot collaborative assembly

Yue Yin, Ke Wan, Chengxi Li, Pai Zheng (Corresponding Author)

Research output: Journal article (Academic research, peer-reviewed)

1 Citation (Scopus)

Abstract

Effective human-robot collaborative assembly (HRCA) demands robots with advanced skill learning and communication capabilities. To address this challenge, this paper proposes a large language model (LLM)-enabled, human demonstration-assisted hybrid robot skill synthesis approach, facilitated via a mixed reality (MR) interface. Our key innovation lies in fine-tuning LLMs to directly translate human language instructions into reward functions, which guide a deep reinforcement learning (DRL) module to autonomously generate robot-executable actions. Furthermore, human demonstrations are intuitively tracked via MR, enabling more adaptive and efficient hybrid skill learning. Finally, the effectiveness of the proposed approach is demonstrated through multiple HRCA tasks.
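The abstract describes a pipeline in which a fine-tuned LLM maps a natural-language instruction to a reward function that then drives a DRL policy. The paper's actual model, prompt format, and DRL algorithm are not given here, so the following is only a minimal illustrative sketch of that idea: every name (State, llm_to_reward, step) and the toy one-dimensional dynamics are hypothetical stand-ins, not the authors' implementation.

```python
# Illustrative sketch only: the fine-tuned LLM is replaced by a hard-coded
# lookup, and the DRL policy by random actions, to show the data flow
# instruction -> reward function -> learning loop.

from dataclasses import dataclass
import random


@dataclass
class State:
    gripper_to_part: float  # distance from gripper to target part (m)
    part_to_slot: float     # distance from grasped part to assembly slot (m)


def llm_to_reward(instruction: str):
    """Stand-in for the fine-tuned LLM that translates a language
    instruction into an executable reward function."""
    if "insert" in instruction:
        # Dense shaping reward: approach the part, then bring it to the slot.
        return lambda s: -(s.gripper_to_part + 2.0 * s.part_to_slot)
    return lambda s: 0.0


def step(state: State, action: float) -> State:
    """Toy transition: the action nudges the gripper toward the part
    and, more slowly, the part toward the slot."""
    return State(
        gripper_to_part=max(0.0, state.gripper_to_part - action),
        part_to_slot=max(0.0, state.part_to_slot - 0.5 * action),
    )


reward_fn = llm_to_reward("insert the peg into the hole")
state = State(gripper_to_part=0.4, part_to_slot=0.8)
for t in range(5):
    action = random.uniform(0.0, 0.1)  # a trained DRL policy would act here
    state = step(state, action)
    print(t, round(reward_fn(state), 3))
```

In the paper's framing, the LLM-generated reward function is the interface between human instruction and autonomous skill learning; the sketch keeps only that separation of concerns, with everything else stubbed out.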

Original language: English
Number of pages: 5
Journal: CIRP Annals
Publication status: E-pub ahead of print, 18 Apr 2025

Keywords

  • Human-robot collaboration
  • Human-guided robot learning
  • Manufacturing system

ASJC Scopus subject areas

  • Mechanical Engineering
  • Industrial and Manufacturing Engineering
