Constructing an efficient and adaptive learning model for 3D object generation

Jiwei Hu, Wupeng Deng, Quan Liu, Kin Man Lam, Ping Lou

Research output: Journal article publication › Journal article › Academic research › peer-review

Abstract

Representation learning and generative modelling are at the core of the 3D learning domain. By leveraging generative adversarial networks (GANs) and convolutional neural networks for point-cloud representations, we propose a novel framework that directly generates 3D objects represented by point clouds. The novelties of the proposed method are threefold. First, GANs are applied to 3D object generation in the point-cloud space, where the model learns object representations from point clouds independently. We further propose a 3D spatial transformer network and integrate it into the generation model, improving its ability to extract and reconstruct features of 3D objects. Second, a point-wise approach is developed to reduce the computational complexity of the proposed network. Third, an evaluation system is proposed to measure the performance of our model across various categories and methods; the error, measured as the quantitative difference between synthesized objects and raw objects, is less than 2.8%. Extensive experiments on benchmark datasets show that this method has a strong ability to generate 3D objects in the point-cloud space, and the synthesized objects differ only slightly from man-made 3D objects.
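The abstract's two core ideas, a 3D spatial transformer inside the generator and a point-wise (per-point, shared-weight) feature extractor, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration in the style of a T-Net-like transformer, not the authors' exact architecture; all layer sizes and parameter names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def pointwise_mlp(points, w, b):
    # Point-wise approach: the same weights are applied to every point
    # independently, (N, 3) @ (3, H) + (H,) -> (N, H), keeping cost linear in N.
    return np.maximum(points @ w + b, 0.0)  # ReLU

def spatial_transformer(points, params):
    # Sketch of a 3D spatial transformer: per-point features are max-pooled
    # into a global descriptor, which regresses a 3x3 transform applied to
    # the whole cloud (initialised near the identity for stability).
    w1, b1, w2, b2 = params
    feats = pointwise_mlp(points, w1, b1)   # (N, H) per-point features
    global_feat = feats.max(axis=0)         # symmetric pooling over points
    delta = (global_feat @ w2 + b2).reshape(3, 3)
    transform = np.eye(3) + delta
    return points @ transform               # re-aligned point cloud

# Toy parameters and a 1024-point cloud (sizes are illustrative)
H = 64
params = (rng.normal(0.0, 0.1, (3, H)), np.zeros(H),
          rng.normal(0.0, 0.01, (H, 9)), np.zeros(9))
cloud = rng.normal(size=(1024, 3))
aligned = spatial_transformer(cloud, params)
print(aligned.shape)  # (1024, 3)
```

Because the pooled descriptor is invariant to point ordering, the predicted transform depends only on the cloud's geometry, which is what lets such a module normalise pose before feature reconstruction.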

Original language: English
Pages (from-to): 1745-1758
Number of pages: 14
Journal: IET Image Processing
Volume: 15
Issue number: 8
DOIs
Publication status: Published - Jun 2021

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
