TY - GEN
T1 - Towards Mutual-Cognitive Human-Robot Collaboration: A Zero-Shot Visual Reasoning Method
AU - Li, Shufei
AU - Zheng, Pai
AU - Xia, Liqiao
AU - Wang, Xi Vincent
AU - Wang, Lihui
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/8
Y1 - 2023/8
AB - Human-Robot Collaboration (HRC) shows potential for widespread application in today's human-centric smart manufacturing, as envisioned by Industry 5.0. To enable safe and efficient collaboration, numerous visual perception methods have been explored that allow the robot to perceive its surroundings and plan collision-free, reactive manipulations. However, current visual perception approaches convey only basic information between robots and humans and fall short of semantic knowledge. Because of this limitation, HRC cannot guarantee smooth operation when confronted with similar yet unseen situations in real-world applications. Therefore, a mutual-cognitive HRC architecture is proposed that plans human and robot operations based on learned knowledge representations of onsite situations and task structures. A zero-shot visual reasoning approach is introduced to derive suitable teamwork strategies in the mutual-cognitive HRC from perceived results, including human actions and detected objects. By incorporating perception components into a knowledge graph, it provides adaptive robot path planning and knowledge support for human operators, even when dealing with a new but similar HRC task. Finally, the significance of the proposed mutual-cognitive HRC system is demonstrated through its evaluation in collaborative disassembly tasks on aging electric vehicle batteries.
UR - http://www.scopus.com/inward/record.url?scp=85174389409&partnerID=8YFLogxK
U2 - 10.1109/CASE56687.2023.10260599
DO - 10.1109/CASE56687.2023.10260599
M3 - Conference article published in proceedings or book
AN - SCOPUS:85174389409
T3 - IEEE International Conference on Automation Science and Engineering
BT - 2023 IEEE 19th International Conference on Automation Science and Engineering, CASE 2023
PB - IEEE Computer Society
T2 - 19th IEEE International Conference on Automation Science and Engineering, CASE 2023
Y2 - 26 August 2023 through 30 August 2023
ER -