TY - JOUR
T1 - Robot teaching by teleoperation based on visual interaction and extreme learning machine
AU - Xu, Yang
AU - Yang, Chenguang
AU - Zhong, Junpei
AU - Wang, Ning
AU - Zhao, Lijun
N1 - Funding Information:
This work was partially supported by the National Natural Science Foundation of China (NSFC) under Grant 61473120, Guangdong Provincial Natural Science Foundation Grant 2014A030313266, International Science and Technology Collaboration Grant 2015A050502017, Science and Technology Planning Project of Guangzhou Grant 201607010006, State Key Laboratory of Robotics and System (HIT) Grant SKLRS-2017-KF-13, and the Fundamental Research Funds for the Central Universities Grant 2017ZD057.
Publisher Copyright:
© 2017 Elsevier B.V.
PY - 2018/1/31
Y1 - 2018/1/31
N2 - Compared with traditional robot teaching methods, teleoperation allows robots to learn a variety of human-like skills in a more efficient and natural manner. In this paper, we propose a teleoperation method based on human-robot interaction (HRI) that relies mainly on visual information. After a single teleoperated demonstration, the robot can reproduce a trajectory; however, this trajectory deviates from the optimal one owing to errors introduced by the human demonstrator or the robot. We therefore use an extreme learning machine (ELM) based algorithm to transfer the demonstrator's motions to the robot. To verify the method, a Microsoft Kinect V2 captures the human body motion and hand state, from which commands are generated to control a Baxter robot in the Virtual Robot Experimentation Platform (V-REP). Through learning and training with the ELM, the robot in V-REP can complete a given task autonomously, and the physical robot can reproduce the trajectory well. The experimental results show that the developed method achieves satisfactory performance.
AB - Compared with traditional robot teaching methods, teleoperation allows robots to learn a variety of human-like skills in a more efficient and natural manner. In this paper, we propose a teleoperation method based on human-robot interaction (HRI) that relies mainly on visual information. After a single teleoperated demonstration, the robot can reproduce a trajectory; however, this trajectory deviates from the optimal one owing to errors introduced by the human demonstrator or the robot. We therefore use an extreme learning machine (ELM) based algorithm to transfer the demonstrator's motions to the robot. To verify the method, a Microsoft Kinect V2 captures the human body motion and hand state, from which commands are generated to control a Baxter robot in the Virtual Robot Experimentation Platform (V-REP). Through learning and training with the ELM, the robot in V-REP can complete a given task autonomously, and the physical robot can reproduce the trajectory well. The experimental results show that the developed method achieves satisfactory performance.
KW - Extreme learning machine (ELM)
KW - Human-robot interaction (HRI)
KW - Robot teaching
KW - Teleoperation
UR - http://www.scopus.com/inward/record.url?scp=85036549240&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2017.10.034
DO - 10.1016/j.neucom.2017.10.034
M3 - Journal article
AN - SCOPUS:85036549240
SN - 0925-2312
VL - 275
SP - 2093
EP - 2103
JO - Neurocomputing
JF - Neurocomputing
ER -