TY - JOUR
T1 - Mutual-cognition for proactive human–robot collaboration: A mixed reality-enabled visual reasoning-based method
AU - Li, Shufei
AU - You, Yingchao
AU - Zheng, Pai
AU - Wang, Xi Vincent
AU - Wang, Lihui
N1 - Publisher Copyright:
© 2024 The Author(s). Published with license by Taylor & Francis Group, LLC.
PY - 2024/3/11
Y1 - 2024/3/11
AB - Human–Robot Collaboration (HRC) is key to achieving the flexible automation required by the mass personalization trend, especially towards human-centric intelligent manufacturing. Nevertheless, existing HRC systems suffer from poor task understanding and poor ergonomic satisfaction, which impedes empathetic teamwork during task execution. To overcome this bottleneck, a Mixed Reality (MR) and visual reasoning-based method is proposed in this research, providing mutual-cognitive task assignment for human and robotic agents’ operations. Firstly, an MR-enabled mutual-cognitive HRC architecture is proposed, which monitors Digital Twin states, reasons about co-working strategies, and provides cognitive services. Secondly, a visual reasoning approach is introduced, which learns scene interpretation from the visual perception of each agent’s actions and environmental changes, and derives task planning strategies that satisfy human–robot operation needs. Lastly, a safe, ergonomic, and proactive robot motion planning algorithm is proposed so that the robot can execute the generated co-working strategies, while the human operator receives intuitive task operation guidance in the MR environment, achieving empathetic collaboration. A demonstration on the disassembly of aging Electric Vehicle Batteries shows that the proposed method facilitates cognitive intelligence in Proactive HRC for flexible automation.
KW - ergonomic robot control
KW - human-centric manufacturing
KW - human-robot collaboration
KW - mixed reality
KW - visual reasoning
UR - http://www.scopus.com/inward/record.url?scp=85187471019&partnerID=8YFLogxK
U2 - 10.1080/24725854.2024.2313647
DO - 10.1080/24725854.2024.2313647
M3 - Journal article
AN - SCOPUS:85187471019
SN - 2472-5854
VL - 56
SP - 1099
EP - 1111
JO - IISE Transactions
JF - IISE Transactions
IS - 10
ER -