TY - JOUR
T1 - Dynamic Scene Graph for Mutual-Cognition Generation in Proactive Human-Robot Collaboration
AU - Li, Shufei
AU - Zheng, Pai
AU - Wang, Zuoxu
AU - Fan, Junming
AU - Wang, Lihui
N1 - Funding Information:
This research work was partially supported by grants from the National Natural Science Foundation of China (No. 52005424), the Research Committee of The Hong Kong Polytechnic University under the Research Student Attachment Programme 2021/22, and the Collaborative Departmental General Research Fund (G-UAMS) from The Hong Kong Polytechnic University, Hong Kong SAR, China.
Publisher Copyright:
© 2022 The Authors. Published by Elsevier B.V.
PY - 2022/6
Y1 - 2022/6
N2 - Human-robot collaboration (HRC) plays a crucial role in agile, flexible, and human-centric manufacturing in the transition towards mass personalization. Nevertheless, in today's HRC tasks, either humans or robots must follow their partner's commands and instructions as collaborative activities progress, rather than engaging proactively and mutually. The non-semantic perception of HRC scenarios impedes the mutual, proactive planning and high-level cognitive capabilities of existing HRC systems. To overcome this bottleneck, this research explores a dynamic scene graph-based method for mutual-cognition generation in Proactive HRC applications. Firstly, a spatial-attention object detector is utilized to dynamically perceive objects in industrial settings. Secondly, a link prediction module is leveraged to construct HRC scene graphs. An attentional graph convolutional network (GCN) is utilized to capture relations among industrial parts, human operators, and robot operations, and to reason about the structural connections of human-robot collaborative processing as graph embeddings, which link to mutual planners for human operation support and proactive robot instructions. Lastly, the Proactive HRC implementation is demonstrated on disassembly tasks of aging electric vehicle batteries (EVBs), and its mutual-cognition capabilities are evaluated.
AB - Human-robot collaboration (HRC) plays a crucial role in agile, flexible, and human-centric manufacturing in the transition towards mass personalization. Nevertheless, in today's HRC tasks, either humans or robots must follow their partner's commands and instructions as collaborative activities progress, rather than engaging proactively and mutually. The non-semantic perception of HRC scenarios impedes the mutual, proactive planning and high-level cognitive capabilities of existing HRC systems. To overcome this bottleneck, this research explores a dynamic scene graph-based method for mutual-cognition generation in Proactive HRC applications. Firstly, a spatial-attention object detector is utilized to dynamically perceive objects in industrial settings. Secondly, a link prediction module is leveraged to construct HRC scene graphs. An attentional graph convolutional network (GCN) is utilized to capture relations among industrial parts, human operators, and robot operations, and to reason about the structural connections of human-robot collaborative processing as graph embeddings, which link to mutual planners for human operation support and proactive robot instructions. Lastly, the Proactive HRC implementation is demonstrated on disassembly tasks of aging electric vehicle batteries (EVBs), and its mutual-cognition capabilities are evaluated.
KW - Cognitive computing
KW - human-centric manufacturing
KW - human-robot collaboration
KW - scene graph
UR - http://www.scopus.com/inward/record.url?scp=85132249012&partnerID=8YFLogxK
U2 - 10.1016/j.procir.2022.05.089
DO - 10.1016/j.procir.2022.05.089
M3 - Conference article
AN - SCOPUS:85132249012
SN - 2212-8271
VL - 107
SP - 943
EP - 948
JO - Procedia CIRP
JF - Procedia CIRP
T2 - 55th CIRP Conference on Manufacturing Systems, CIRP CMS 2022
Y2 - 29 June 2022 through 1 July 2022
ER -