Abstract
Monocular 3D object detection aims to identify objects' 3D positions and poses with low hardware and computational cost, which is crucial for scenarios like autonomous driving and deep space exploration. While the corresponding research has developed rapidly with the integration of transformer structures, 3D features are still simply transformed from visual features, resulting in a mismatch between detection results and reality. Moreover, most existing methods suffer from slow convergence. To address these issues in monocular 3D object detection, a framework named geometry-guided monocular detection with transformer (GG-Mono) is proposed. It consists of three main components: 1) the mix-feature encoder module, which incorporates pretrained depth estimation models to improve convergence speed and accuracy; 2) the geometry encoding module, which supplements the hybrid encoding with global geometry data; and 3) the GG decoder module, which uses geometry queries to guide the decoding process. Extensive experiments show that the model outperforms all existing methods in detection accuracy, achieving 26.88% and 30.65% average precision of the 3D detection box (AP3D) on the validation and test datasets, respectively, which is 1.88% and 1.81% higher than the baseline, while significantly improving convergence speed (from 184 to 90 epochs). These results demonstrate the advantages of the proposed method for monocular 3D object detection.
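The three-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch only: every function name, data shape, and fusion rule here is a hypothetical stand-in, since the paper's actual transformer architecture is not reproduced on this page.

```python
# Hypothetical sketch of the GG-Mono data flow: mix-feature encoding,
# geometry encoding, then geometry-query-guided decoding. Plain Python
# stand-ins are used in place of the real transformer modules.

def mix_feature_encoder(visual_features, depth_features):
    """Fuse visual features with features from a pretrained depth estimator."""
    # Hypothetical fusion: pair each visual feature with its depth feature.
    return list(zip(visual_features, depth_features))

def geometry_encoding(mixed_features, camera_intrinsics):
    """Supplement the hybrid encoding with global geometry information."""
    # Hypothetical: derive a global geometry token from the camera model.
    geometry_token = camera_intrinsics["focal_length"]
    return {"features": mixed_features, "geometry": geometry_token}

def gg_decoder(encoded, num_queries=3):
    """Decode with geometry queries guiding the process."""
    # Hypothetical: each geometry query yields one candidate 3D box.
    return [
        {"query_id": q, "geometry": encoded["geometry"]}
        for q in range(num_queries)
    ]

# Toy end-to-end run on dummy data.
visual = [0.1, 0.5, 0.9]   # stand-in for backbone visual features
depth = [2.0, 5.0, 10.0]   # stand-in for pretrained depth predictions
encoded = geometry_encoding(
    mix_feature_encoder(visual, depth),
    {"focal_length": 721.5},  # hypothetical intrinsic value
)
detections = gg_decoder(encoded)
print(len(detections))  # one candidate per geometry query
```

The sketch only illustrates how the three modules hand data to one another; the contribution of the paper lies in how each stage is realized with transformer attention, which this toy code does not attempt to model.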
| Field | Value |
|---|---|
| Original language | English |
| Article number | 2500003 |
| Number of pages | 16 |
| Journal | Advanced Intelligent Systems |
| DOIs | |
| Publication status | E-pub ahead of print - 11 May 2025 |
Keywords
- 3D object detection
- deep learning
- single-view geometry
- transformer
ASJC Scopus subject areas
- Artificial Intelligence
- Computer Vision and Pattern Recognition
- Human-Computer Interaction
- Mechanical Engineering
- Control and Systems Engineering
- Electrical and Electronic Engineering
- Materials Science (miscellaneous)