Cantonese Porcelain Image Generation Using User-Guided Generative Adversarial Networks

Steven Szu Chi Chen, Hui Cui, Peng Tan, Xiaohong Sun, Yi Ji, Henry Duh, Mike Potel

Research output: Journal article › Academic research › peer-review

1 Citation (Scopus)


Automated image style transfer is of great interest given the recent advances in generative adversarial networks (GANs). However, it is challenging to generate synthesized images from abstract masks while preserving detailed patterns for certain kinds of art, given small datasets. We propose an intelligent GAN-based system, enhanced with user intent and prior knowledge, for generating images styled as Cantonese porcelain from user-defined masks. Given a mask with specified objects, our system first generates a synthesized natural image. A novel semantic user-intent enhancement module then retrieves semantically relevant images from an image dataset, and objects in the retrieved image are used to refine local patterns in the synthesized image. Finally, the refined image is restyled in the Cantonese porcelain style. The system is trained on 454 pairs of natural images and semantic segmentations covering 24 object classes from the COCO dataset for mask-to-image synthesis, and on 1445 Cantonese porcelain images for style transfer. Experimental results and ablation studies demonstrate that the synthesized and restyled images show improved local detail and enhanced contrast.
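The abstract describes a three-stage pipeline: mask-to-image synthesis, retrieval-based refinement using user intent, and porcelain-style transfer. The following is a minimal structural sketch of that flow; every function name, stub body, and data shape is an illustrative assumption, not the authors' actual implementation or API.

```python
import numpy as np

def synthesize_from_mask(mask):
    """Stage 1 (hypothetical stub): a GAN generator maps a semantic mask
    to a natural image; stubbed here as random RGB of the same size."""
    h, w = mask.shape
    return np.random.rand(h, w, 3)

def enhance_with_user_intent(image, mask, dataset):
    """Stage 2 (hypothetical stub): retrieve the most semantically similar
    image from the dataset and use it to refine local patterns; stubbed
    as a blend with the best-matching retrieval."""
    best = max(dataset, key=lambda d: float((d["mask"] == mask).mean()))
    return 0.5 * image + 0.5 * best["image"]

def restyle_as_porcelain(image):
    """Stage 3 (hypothetical stub): style transfer toward Cantonese
    porcelain; stubbed as a contrast boost, echoing the paper's
    reported contrast enhancement."""
    return np.clip((image - 0.5) * 1.5 + 0.5, 0.0, 1.0)

# Toy data: a 64x64 mask with one object region, plus a 2-image retrieval set.
mask = np.zeros((64, 64), dtype=int)
mask[16:48, 16:48] = 1
dataset = [
    {"mask": mask.copy(), "image": np.random.rand(64, 64, 3)},
    {"mask": np.zeros((64, 64), dtype=int), "image": np.random.rand(64, 64, 3)},
]

out = restyle_as_porcelain(
    enhance_with_user_intent(synthesize_from_mask(mask), mask, dataset)
)
print(out.shape)  # (64, 64, 3)
```

In the actual system, stage 1 is a trained GAN generator, stage 2 performs semantic retrieval against the COCO-derived dataset, and stage 3 is a learned style-transfer model trained on the 1445 porcelain images; the stubs above only illustrate how the stages compose.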

Original language: English
Article number: 9175078
Pages (from-to): 100-107
Number of pages: 8
Journal: IEEE Computer Graphics and Applications
Issue number: 5
Publication status: Published - 1 Sep 2020
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design


