Abstract
Achieving photo-realistic results is an enticing proposition for the computer graphics community. Great progress has been
made in the past decades, but the cost of human expertise has also grown. Neural rendering is a promising candidate for
reducing this cost, as it relies on data to construct the scene representation. However, one key component for adapting neural
rendering to practical use is currently missing: animation. There is little discussion of how to enable neural rendering
methods to synthesize frames for unseen animations. To fill this research gap, we propose the neural proxy, a novel
neural rendering model that uses animatable proxies to represent photo-realistic targets. Through a careful combination of
components from neural volume rendering and neural textures, our model is able to render unseen animations without any
temporal learning. Experimental results show that the proposed model significantly outperforms current neural rendering methods.
Original language | English
---|---
Title of host publication | Proceedings of Pacific Conference on Computer Graphics and Applications
Publisher | Eurographics Association at Graz University of Technology
Pages | 1-6
Publication status | Published - Oct 2021