Embedding 3D models in offline physical environments

Egemen Ertugrul, Han Zhang, Fang Zhu, Ping Lu, Ping Li, Bin Sheng, Enhua Wu

Research output: Journal article (peer-reviewed)

Abstract

This article introduces a novel approach for embedding 3D models in offline physical environments using quick response (QR) codes. Unlike conventional methods, we consider settings where 3D models cannot be retrieved from a remote server. Our method generates octree models from voxelized 3D models and stores them in QR codes using a space-efficient data structure. This allows intelligible and purposeful 3D models to be stored on standard QR codes while addressing the major storage constraint present in offline situations. Furthermore, we explore 3D convolutional neural networks (CNNs) and autoencoders (AEs) to compress high-resolution 3D models for which octrees alone do not suffice. To the best of our knowledge, our AE network is the first to employ octrees to further compress its encoded data. Through user-friendly desktop and mobile applications, users can encode, decode, and visualize 3D models in augmented reality (AR) using QR codes, and thus experiment with our methods. The proposed approach enables unique applications and future research in ubiquitous computing, 3D data compression and transmission, 3D AEs, AR and virtual reality, low-cost autonomous robots, and 3D printing.
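To give a concrete sense of the core idea, the sketch below shows one way a cubic voxel occupancy grid might be serialized as an octree bitstream: uniform regions collapse to a two-bit leaf, and mixed regions recurse into eight octants. This is an illustrative assumption in the spirit of the abstract, not the authors' actual data structure or encoding; the function name and bit layout are hypothetical.

```python
import numpy as np

def octree_encode(vox):
    """Encode a cubic boolean voxel grid (side = power of two) as a bit list.

    A uniform region emits [1, v] (leaf marker, occupancy value v);
    a mixed region emits [0] followed by the encodings of its 8 octants.
    Hypothetical layout for illustration only.
    """
    bits = []
    if vox.all():
        bits += [1, 1]          # fully occupied leaf
    elif not vox.any():
        bits += [1, 0]          # fully empty leaf
    else:
        bits.append(0)          # internal node: recurse into 8 octants
        h = vox.shape[0] // 2
        for x in (0, h):
            for y in (0, h):
                for z in (0, h):
                    bits += octree_encode(vox[x:x + h, y:y + h, z:z + h])
    return bits

# A mostly empty 8x8x8 grid compresses far below its raw 512 bits.
grid = np.zeros((8, 8, 8), dtype=bool)
grid[0, 0, 0] = True
encoded = octree_encode(grid)
print(len(encoded), "bits vs", grid.size, "raw")  # 47 bits vs 512 raw
```

Sparse shapes, which dominate voxelized surface models, benefit most from this collapsing, which is what makes fitting a meaningful model into a QR code's few-kilobyte capacity plausible.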

Original language: English
Article number: e1959
Pages (from-to): 1-15
Number of pages: 15
Journal: Computer Animation and Virtual Worlds
Volume: 31
Issue number: 4-5
DOIs
Publication status: Published - 1 Jul 2020

Keywords

  • 3D data compression
  • augmented reality
  • autoencoders
  • human–computer interaction
  • robotics and vision
  • ubiquitous computing
  • volume rendering

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design
