Abstract
High-precision grasp pose detection is an essential but challenging task in robotic manipulation. Most current grasp detection methods either rely heavily on the geometric information of the objects or generate feasible grasp poses only within restricted configurations. In this letter, a grasp pose detection framework is proposed that generates a rich set of 6-DoF grasp poses with high precision. Firstly, a novel feature fusion module with multi-radius cylinder sampling is designed to enhance local geometric representation. Secondly, an optimized grasp operation head is developed to further estimate grasp parameters. Finally, a grasp pose propagation algorithm is proposed, which effectively extends grasp poses from a restricted configuration to a larger one. Experiments on a large-scale benchmark, GraspNet-1Billion, show that the proposed method outperforms existing methods (+8.61 AP). Real-world experiments further demonstrate the effectiveness of the proposed method in cluttered environments.
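The abstract names multi-radius cylinder sampling for local geometric feature fusion but does not spell out the mechanics. The snippet below is a minimal, illustrative sketch of that idea only: grouping scene points inside cylinders of several radii around a candidate grasp center and fusing the per-radius features. The function names, the radii, the half-depth, and the max-pool-then-concatenate fusion are all assumptions chosen for illustration, not the authors' implementation.

```python
# Illustrative sketch only; not the paper's released code.
import numpy as np

def cylinder_group(points, center, axis, radius, half_depth):
    """Indices of points inside a cylinder centered at `center`,
    aligned with `axis` (radius measured orthogonal to the axis,
    half_depth measured along it)."""
    axis = axis / np.linalg.norm(axis)
    rel = points - center                       # (N, 3) offsets
    along = rel @ axis                          # signed distance along axis
    radial = np.linalg.norm(rel - np.outer(along, axis), axis=1)
    return np.where((np.abs(along) <= half_depth) & (radial <= radius))[0]

def multi_radius_features(points, feats, center, axis,
                          radii=(0.02, 0.04, 0.06), half_depth=0.02):
    """Max-pool per-point features inside cylinders of several radii
    and concatenate them -- a simple stand-in for multi-radius fusion."""
    fused = []
    for r in radii:
        idx = cylinder_group(points, center, axis, r, half_depth)
        if idx.size == 0:
            fused.append(np.zeros(feats.shape[1]))   # empty cylinder
        else:
            fused.append(feats[idx].max(axis=0))
    return np.concatenate(fused)                # (len(radii) * C,) descriptor

# Toy usage: random scene points with random per-point features.
rng = np.random.default_rng(0)
pts = rng.uniform(-0.1, 0.1, size=(2048, 3))
ft = rng.standard_normal((2048, 64))
desc = multi_radius_features(pts, ft, center=np.zeros(3),
                             axis=np.array([0.0, 0.0, 1.0]))
print(desc.shape)  # (192,)
```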
| Original language | English |
|---|---|
| Article number | 10473147 |
| Pages (from-to) | 4407-4414 |
| Number of pages | 8 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 9 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 1 May 2024 |
Keywords
- computer vision for automation
- deep learning in grasping and manipulation
- grasp pose propagation
- local geometric representation
ASJC Scopus subject areas
- Control and Systems Engineering
- Biomedical Engineering
- Human-Computer Interaction
- Mechanical Engineering
- Computer Vision and Pattern Recognition
- Computer Science Applications
- Control and Optimization
- Artificial Intelligence