Toward Enhancing Room Layout Estimation by Feature Pyramid Networks

Aopeng Wang, Shiting Wen, Yunjun Gao, Qing Li, Ke Deng, Chaoyi Pang

Research output: Journal article › Academic research › peer-review

1 Citation (Scopus)

Abstract

As a fundamental part of indoor scene understanding, indoor room layout estimation has attracted much attention recently. The task is to predict the structure of a room from a single image. In this paper, we show that this task can be solved well even without a sophisticated post-processing program, by adapting Feature Pyramid Networks (FPN) to the problem. The proposed model employs two strategies to deliver quality output. First, it predicts the coarse positions of key points correctly by preserving the order of these key points in the data augmentation stage. Then the coordinates of each corner point are refined by moving the corner point to its nearest image boundary. Our method demonstrates strong performance on the benchmark LSUN dataset in both processing efficiency and accuracy. Compared with the state-of-the-art end-to-end method, our method is more than twice as fast (32 ms vs. 86 ms), with 0.71% lower key point error and 0.2% higher pixel error. Moreover, the advanced two-step method is only 0.02% better than our result on key point error. The combination of high efficiency and accuracy makes our method a good choice for real-time room layout estimation tasks.
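The abstract describes the refinement step only at a high level. The sketch below illustrates one plausible reading of "moving each corner point to its nearest image boundary"; the function name, coordinate convention (pixel indices with origin at the top-left), and tie-breaking are assumptions for illustration, not the paper's actual implementation.

```python
def snap_to_nearest_boundary(x, y, width, height):
    """Illustrative sketch (not the paper's code): snap a predicted corner
    to the closest of the four image boundaries.

    Assumes pixel coordinates with (0, 0) at the top-left and the last
    valid pixel at (width - 1, height - 1).
    """
    # Distance from the point to each image edge.
    distances = {
        "left": x,
        "right": width - 1 - x,
        "top": y,
        "bottom": height - 1 - y,
    }
    nearest = min(distances, key=distances.get)

    # Project the point onto the nearest edge, keeping the other coordinate.
    if nearest == "left":
        return 0.0, y
    if nearest == "right":
        return float(width - 1), y
    if nearest == "top":
        return x, 0.0
    return x, float(height - 1)
```

For example, a corner predicted at (2, 50) in a 100x100 image is closest to the left edge and would be snapped to (0, 50).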

Original language: English
Pages (from-to): 213-224
Number of pages: 12
Journal: Data Science and Engineering
Volume: 7
Issue number: 3
DOIs
Publication status: Published - Sept 2022

Keywords

  • Feature Pyramid Network
  • Layout estimation
  • Scene understanding

ASJC Scopus subject areas

  • Computational Mechanics
  • Computer Science Applications
