TY - JOUR
T1 - Dynamic scenario-enhanced diverse human motion prediction network for proactive human–robot collaboration in customized assembly tasks
AU - Ding, Pengfei
AU - Zhang, Jie
AU - Zheng, Pai
AU - Zhang, Peng
AU - Fei, Bo
AU - Xu, Ziqi
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
PY - 2024
Y1 - 2024
N2 - Human motion prediction is crucial for facilitating human–robot collaboration in customized assembly tasks. However, existing research primarily focuses on predicting limited human motions using static global information, which fails to address the highly stochastic nature of customized assembly operations in a given region. To address this, we propose a dynamic scenario-enhanced diverse human motion prediction network that extracts dynamic collaborative features to predict highly stochastic customized assembly operations. In this paper, we present a multi-level feature adaptation network that generates information for dynamically manipulating objects. This is accomplished by extracting multi-attribute features at different levels, including multi-channel gaze tracking, multi-scale object affordance detection, and multi-modal 6-degree-of-freedom object pose estimation. Notably, we employ gaze tracking to locate the collaborative space accurately. Furthermore, we introduce a multi-step feedback-refined diffusion sampling network specifically designed for predicting highly stochastic customized assembly operations. This network refines the outcomes of our proposed multi-weight diffusion sampling strategy to better align with the target distribution. Additionally, we develop a feedback regulatory mechanism that incorporates ground-truth information in each prediction step to ensure the reliability of the results. Finally, the effectiveness of the proposed method was demonstrated through comparative experiments and validation on assembly tasks in a laboratory environment.
AB - Human motion prediction is crucial for facilitating human–robot collaboration in customized assembly tasks. However, existing research primarily focuses on predicting limited human motions using static global information, which fails to address the highly stochastic nature of customized assembly operations in a given region. To address this, we propose a dynamic scenario-enhanced diverse human motion prediction network that extracts dynamic collaborative features to predict highly stochastic customized assembly operations. In this paper, we present a multi-level feature adaptation network that generates information for dynamically manipulating objects. This is accomplished by extracting multi-attribute features at different levels, including multi-channel gaze tracking, multi-scale object affordance detection, and multi-modal 6-degree-of-freedom object pose estimation. Notably, we employ gaze tracking to locate the collaborative space accurately. Furthermore, we introduce a multi-step feedback-refined diffusion sampling network specifically designed for predicting highly stochastic customized assembly operations. This network refines the outcomes of our proposed multi-weight diffusion sampling strategy to better align with the target distribution. Additionally, we develop a feedback regulatory mechanism that incorporates ground-truth information in each prediction step to ensure the reliability of the results. Finally, the effectiveness of the proposed method was demonstrated through comparative experiments and validation on assembly tasks in a laboratory environment.
KW - Customized assembly
KW - Diverse human motion prediction
KW - Dynamic collaborative information
KW - Human–robot collaboration
UR - http://www.scopus.com/inward/record.url?scp=85199273871&partnerID=8YFLogxK
U2 - 10.1007/s10845-024-02462-8
DO - 10.1007/s10845-024-02462-8
M3 - Journal article
AN - SCOPUS:85199273871
SN - 0956-5515
JO - Journal of Intelligent Manufacturing
JF - Journal of Intelligent Manufacturing
ER -