Abstract
Learning a task such as pushing an object, where both position and force constraints must be satisfied, is usually difficult for a collaborative robot. In this work, we propose a multimodal teaching-by-demonstration system that enables the robot to perform such tasks. The basic idea is to transfer the adaptation of multimodal information from a human tutor to the robot by taking into account multiple sensor signals (i.e., motion trajectories, stiffness, and force profiles). The human tutor's stiffness is estimated from limb surface electromyography (EMG) signals recorded during the demonstration phase. The force profiles in Cartesian space are collected from a force/torque sensor mounted between the robot endpoint and the tool. Subsequently, a hidden semi-Markov model (HSMM) is used to encode the multiple signals in a unified manner. The correlations between position and the other three control variables (i.e., velocity, stiffness, and force) are encoded with separate HSMM models. Based on the estimated HSMM parameters, Gaussian mixture regression (GMR) is then used to generate the expected control variables. The learned variables are further mapped through inverse kinematics into a joint-space impedance controller for the reproduction of the task. Comparative tests on a Baxter robot have been conducted to verify the effectiveness of our approach.
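The paper itself does not include code; the snippet below is a minimal, hypothetical sketch of the GMR step described above, conditioning a joint Gaussian mixture over position (input) to predict one control variable such as stiffness (output). The function name `gmr` and the NumPy/SciPy formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmr(x, priors, means, covs, in_idx, out_idx):
    """Illustrative Gaussian mixture regression: estimate E[y | x].

    x       : (Din,)     query input (e.g., position)
    priors  : (K,)       mixture weights
    means   : (K, D)     joint means over [input, output] dimensions
    covs    : (K, D, D)  joint covariances
    in_idx, out_idx      index lists selecting input/output dimensions
    """
    # Responsibility of each component, evaluated on the input marginal
    h = np.array([
        priors[k] * multivariate_normal.pdf(
            x, means[k][in_idx], covs[k][np.ix_(in_idx, in_idx)])
        for k in range(len(priors))
    ])
    h /= h.sum()

    y = np.zeros(len(out_idx))
    for k in range(len(priors)):
        mu_x, mu_y = means[k][in_idx], means[k][out_idx]
        S_xx = covs[k][np.ix_(in_idx, in_idx)]
        S_yx = covs[k][np.ix_(out_idx, in_idx)]
        # Conditional mean of component k: mu_y + S_yx S_xx^{-1} (x - mu_x)
        y += h[k] * (mu_y + S_yx @ np.linalg.solve(S_xx, x - mu_x))
    return y

# Toy usage: a 2-component model over (position, stiffness)
priors = np.array([0.5, 0.5])
means = np.array([[0.0, 100.0], [1.0, 300.0]])
covs = np.array([[[0.05, 0.01], [0.01, 50.0]],
                 [[0.05, -0.01], [-0.01, 50.0]]])
print(gmr(np.array([0.8]), priors, means, covs, [0], [1]))
```

In the HSMM setting of the paper, the static priors would typically be replaced by the model's time-indexed state probabilities; the conditioning step itself is unchanged.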
Original language | English
---|---
Article number | 8856213
Pages (from-to) | 145604-145613
Number of pages | 10
Journal | IEEE Access
Volume | 7
DOIs |
Publication status | Published - 3 Oct 2019
Externally published | Yes
Keywords
- multimodal learning
- physical human-robot interaction
- robotic control
- stiffness and force adaptation
ASJC Scopus subject areas
- Computer Science (all)
- Materials Science (all)
- Engineering (all)