Abstract
Garment folding is a ubiquitous domestic task that is difficult to automate due to the highly deformable nature of fabrics. In this article, we propose a novel method of learning from demonstrations that enables robots to autonomously manipulate an assistive tool to fold garments. In contrast to traditional methods that rely on low-level pixel features, our solution uses a dense visual descriptor to encode the demonstration into a high-level *hand-object graph* (HoG), which efficiently represents the interactions between the manipulated tool and the robot. We then leverage a graph neural network to learn a forward dynamics model from HoGs and, given only a single demonstration, optimize the imitation policy with a model predictive controller to accomplish the folding task. To validate the proposed approach, we conducted a detailed experimental study on a robotic platform instrumented with vision sensors and a custom-made end-effector that interacts with the folding board.
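To make the pipeline concrete, the sketch below illustrates the general pattern the abstract describes: build a graph over hand and object keypoints, predict the next state with one message-passing step, and pick actions via random-shooting model predictive control. The paper provides no code, so everything here is a hypothetical toy under stated assumptions: node features are 3-D keypoint positions (as a dense visual descriptor might provide), the graph is fully connected, and `build_hog`, `gnn_forward`, `mpc_plan`, `W_msg`, and `W_upd` are invented names with random, untrained weights.

```python
# Minimal sketch of an HoG dynamics + MPC loop. Assumption: node features are
# 3-D keypoint positions; in the paper the GNN weights would be learned from
# demonstrations, here they are random placeholders.
import numpy as np

rng = np.random.default_rng(0)

def build_hog(hand_pos, object_keypoints):
    """Build a fully connected hand-object graph (HoG).

    Nodes: one hand node followed by the tool/garment keypoints.
    Edges: all ordered node pairs (dense connectivity for simplicity).
    """
    nodes = np.vstack([hand_pos, object_keypoints])          # (N, 3)
    n = len(nodes)
    edges = [(i, j) for i in range(n) for j in range(n) if i != j]
    return nodes, edges

def gnn_forward(nodes, edges, action, W_msg, W_upd):
    """One message-passing step standing in for the learned forward
    dynamics: next node positions given the current HoG and hand action."""
    n, d = nodes.shape
    msgs = np.zeros((n, d))
    for i, j in edges:
        # Each message encodes the relative displacement from node j to i.
        msgs[i] += np.tanh((nodes[j] - nodes[i]) @ W_msg)
    inp = np.hstack([nodes, msgs])
    delta = np.tanh(inp @ W_upd)
    delta[0] = action  # the hand node (index 0) follows the commanded action
    return nodes + delta

def mpc_plan(nodes, edges, goal, W_msg, W_upd, horizon=5, samples=64):
    """Random-shooting MPC: sample action sequences, roll them out through
    the dynamics model, return the first action of the best sequence."""
    best_cost, best_action = np.inf, None
    for _ in range(samples):
        seq = rng.normal(scale=0.02, size=(horizon, 3))
        cur = nodes
        for a in seq:
            cur = gnn_forward(cur, edges, a, W_msg, W_upd)
        cost = np.linalg.norm(cur - goal)  # distance to the demo's final HoG
        if cost < best_cost:
            best_cost, best_action = cost, seq[0]
    return best_action

# Toy usage: one hand node plus four tool keypoints, random weights.
hand = np.zeros(3)
keypoints = rng.normal(size=(4, 3))
nodes, edges = build_hog(hand, keypoints)
W_msg = rng.normal(scale=0.1, size=(3, 3))
W_upd = rng.normal(scale=0.1, size=(6, 3))
goal = nodes + 0.1  # stand-in for the goal HoG from a single demonstration
print("first MPC action:", mpc_plan(nodes, edges, goal, W_msg, W_upd))
```

With a trained dynamics model and a goal HoG extracted from the demonstration, the same plan-execute-replan loop would run at each control step, which is the closed-loop structure the abstract's model predictive controller implies.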
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-12 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Industrial Informatics |
| DOIs | |
| Publication status | Accepted/In press - 2023 |
Keywords
- cloth folding
- clothing
- graph dynamics model
- hand-object graph (HoG)
- imitation learning (IL)
- manipulator dynamics
- predictive models
- robots
- task analysis
- tool manipulation
- trajectory
- visualization
ASJC Scopus subject areas
- Control and Systems Engineering
- Information Systems
- Computer Science Applications
- Electrical and Electronic Engineering