An Incremental Learning Framework to Enhance Teaching by Demonstration Based on Multimodal Sensor Fusion

Jie Li, Junpei Zhong, Jingfeng Yang, Chenguang Yang

Research output: Journal article › Academic research › peer-review

9 Citations (Scopus)

Abstract

Though a robot can reproduce a demonstration trajectory from a human demonstrator via teleoperation, there is a certain error between the reproduced trajectory and the desired one. To minimize this error, we propose a multimodal incremental learning framework based on a teleoperation strategy that enables the robot to reproduce the demonstrated task accurately. The multimodal demonstration data are collected from two different kinds of sensors in the demonstration phase. Then, the Kalman filter (KF) and dynamic time warping (DTW) algorithms are used to preprocess the multiple sensor signals: the KF algorithm fuses sensor data of different modalities, and the DTW algorithm aligns the data on the same timeline. The preprocessed demonstration data are then learned by the incremental learning network and sent to a Baxter robot to reproduce the task demonstrated by the human. Comparative experiments have been performed to verify the effectiveness of the proposed framework.
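To make the preprocessing pipeline in the abstract concrete, the following is a minimal sketch (not the authors' code) of the two steps it names: Kalman-filter fusion of two sensor streams measuring the same trajectory, and a classic dynamic-time-warping alignment. A scalar random-walk state model and the noise parameters q, r1, r2, as well as the function names kf_fuse and dtw_path, are illustrative assumptions, not details from the paper.

```python
import numpy as np

def kf_fuse(z1, z2, q=1e-3, r1=0.05, r2=0.10):
    """Fuse two 1-D measurement streams of the same trajectory with a scalar KF.

    q is the assumed process-noise variance; r1, r2 are the assumed
    measurement-noise variances of the two sensors (illustrative values).
    """
    x, p = z1[0], 1.0              # initial state estimate and covariance
    fused = []
    for m1, m2 in zip(z1, z2):
        p += q                     # predict step (random-walk state model)
        for z, r in ((m1, r1), (m2, r2)):
            k = p / (p + r)        # Kalman gain for this sensor
            x += k * (z - x)       # measurement update
            p *= (1.0 - k)
        fused.append(x)
    return np.asarray(fused)

def dtw_path(a, b):
    """Return the DTW alignment path between 1-D sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to recover the warping path.
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin((D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```

In this sketch, kf_fuse plays the role of fusing the two modalities into one estimate, and dtw_path provides the index pairs needed to resample both streams onto a common timeline before training the learning network.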

Original language: English
Article number: 55
Journal: Frontiers in Neurorobotics
Volume: 14
DOIs
Publication status: Published - 27 Aug 2020
Externally published: Yes

Keywords

  • data fusion
  • incremental learning network
  • robot learning
  • teaching by demonstration
  • teleoperation

ASJC Scopus subject areas

  • Biomedical Engineering
  • Artificial Intelligence
