Neural Proxy: Empowering Neural Volume Rendering for Animation

Ping Tat Sin, Hiu Fung Ng, Hong Va Leong

Research output: Chapter in book / Conference proceeding (conference article, academic research, peer-reviewed)

Abstract

Achieving photo-realistic results is an enticing proposition for the computer graphics community. Great progress has been
made in the past decades, but the cost of human expertise has also grown. Neural rendering is a promising candidate for
reducing this cost, as it relies on data to construct the scene representation. However, one key component for adapting neural
rendering to practical use is currently missing: animation. There has been little discussion of how to enable neural
rendering methods to synthesize frames for unseen animations. To fill this research gap, we propose neural proxy, a novel
neural rendering model that utilizes animatable proxies to represent photo-realistic targets. Via a careful combination of
components from neural volume rendering and neural texture, our model is able to render unseen animations without any
temporal learning. Experimental results show that the proposed model significantly outperforms current neural rendering works.
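The abstract builds on standard neural volume rendering, in which per-sample densities and colors along a camera ray are alpha-composited into a pixel color. As a point of reference, here is a minimal sketch of that generic compositing step (the standard NeRF-style formulation, not the authors' specific neural proxy model; function and variable names are illustrative):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite samples along one ray (standard volume rendering).

    sigmas: (N,) densities, colors: (N, 3) RGB, deltas: (N,) step sizes.
    A generic sketch of NeRF-style compositing, not the paper's exact model.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)              # opacity per sample
    # transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                             # compositing weights
    return (weights[:, None] * colors).sum(axis=0)       # final RGB
```

For example, a ray whose first sample is fully opaque returns that sample's color, while a ray through empty space (all densities zero) returns black.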
Original language: English
Title of host publication: Proceedings of Pacific Conference on Computer Graphics and Applications
Publisher: Eurographics Association at Graz University of Technology
Pages: 1-6
Publication status: Published - Oct 2021
