Abstract
Animating a character from a single image is an interesting research problem with practical value. It could help animators automatically animate new characters using motions from existing ones, and it is also valuable in digital entertainment for animating player-created characters, such as hand-drawn ones. A few deep-learning video synthesis models can generate a video from an image. However, it is unlikely that a suitable dataset exists for training them to animate such characters, since characters in digital entertainment tend to be novel and diverse, and it is impractical for animators to create a large training dataset themselves. To this end, 2D-CharAnimNet is proposed. Empowered by a novel motion transfer scheme for video generation, the proposed variational-autoencoder-based model can be trained on a relatively small dataset. In addition, to improve the fidelity of the generated frames, dynamic skip connections and a polishing generative adversarial network are also proposed. Results suggest that the model has encouraging potential for practical use.
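The abstract names three architectural ingredients: a variational-autoencoder frame generator, skip connections from encoder to decoder, and a polishing GAN. The paper's code is not reproduced here; the sketch below is a minimal, assumed PyTorch-style illustration of how a VAE frame generator with encoder-to-decoder skip connections could be wired. All module names, layer sizes, and the 64x64 input resolution are illustrative assumptions, and the motion-transfer conditioning and the polishing discriminator are omitted.

```python
# Hypothetical sketch only: names, sizes, and resolution are assumptions,
# not details taken from the paper.
import torch
import torch.nn as nn

class FrameVAE(nn.Module):
    """Toy VAE frame generator with encoder-to-decoder skip connections."""

    def __init__(self, z_dim=64):
        super().__init__()
        # Encoder: two strided convolutions, 64x64 -> 32x32 -> 16x16.
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU())
        self.to_mu = nn.Linear(64 * 16 * 16, z_dim)
        self.to_logvar = nn.Linear(64 * 16 * 16, z_dim)
        # Decoder mirrors the encoder; each stage also receives the matching
        # encoder feature map via concatenation (the skip connections).
        self.from_z = nn.Linear(z_dim, 64 * 16 * 16)
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(64 + 64, 32, 4, 2, 1), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(32 + 32, 3, 4, 2, 1)

    def forward(self, x):
        h1 = self.enc1(x)                     # 32-channel 32x32 skip feature
        h2 = self.enc2(h1)                    # 64-channel 16x16 skip feature
        flat = h2.flatten(1)
        mu, logvar = self.to_mu(flat), self.to_logvar(flat)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        d = self.from_z(z).view(-1, 64, 16, 16)
        d = self.dec2(torch.cat([d, h2], dim=1))   # skip connection
        frame = torch.tanh(self.dec1(torch.cat([d, h1], dim=1)))
        # A "polishing" adversarial discriminator (not shown) would further
        # refine `frame`; `mu` and `logvar` feed the KL term of the VAE loss.
        return frame, mu, logvar

# Example: generate a frame from a random 64x64 RGB input.
vae = FrameVAE()
frame, mu, logvar = vae(torch.randn(1, 3, 64, 64))
```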
Original language | English |
---|---|
Pages | 196-203 |
Number of pages | 8 |
Publication status | Published - 1 Jan 2019 |
Event | 34th Annual ACM Symposium on Applied Computing, SAC 2019 - Limassol, Cyprus; Duration: 8 Apr 2019 → 12 Apr 2019 |
Conference
Conference | 34th Annual ACM Symposium on Applied Computing, SAC 2019 |
---|---|
Country/Territory | Cyprus |
City | Limassol |
Period | 8/04/19 → 12/04/19 |
Keywords
- Animation
- Generative Model
- Motion Transfer
- Neural Networks
- Video Synthesis
ASJC Scopus subject areas
- Software