Anime Face Generation

Team Project

Generate anime face images from text descriptions using a Conditional Generative Adversarial Network (Conditional GAN).

Figure 1. Generated anime face images.

Proposed by Ian Goodfellow in 2014, Generative Adversarial Networks (GANs) have been applied to a wide range of tasks, such as image super-resolution, style transfer, and image inpainting. In this project, we generate anime face images from given text using a Conditional GAN, a variant of GAN proposed by Mirza et al. in 2014. In addition to the input image x (seen by the discriminator) and the random noise vector z (fed to the generator), a Conditional GAN takes a condition y as an extra input to both the generator and the discriminator: the generator produces an image from z and y, while the discriminator judges the pair (x, y). The architecture of Conditional GAN is illustrated below in Figure 2.

Figure 2. The architecture of Conditional GAN.
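To make the conditioning scheme concrete, the sketch below shows a minimal Conditional GAN generator and discriminator in PyTorch. The MLP layout, layer sizes, and the 12-dimensional condition vector are illustrative assumptions rather than the architecture used in this project; the point is only that y is concatenated with z in the generator and with x in the discriminator.

```python
# Minimal Conditional GAN sketch (assumed PyTorch MLP layout, not the project's exact model).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, cond_dim=12, img_dim=64 * 64 * 3):
        super().__init__()
        # The generator receives the noise vector z concatenated with the condition y.
        self.net = nn.Sequential(
            nn.Linear(z_dim + cond_dim, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, img_dim),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z, y):
        return self.net(torch.cat([z, y], dim=1))

class Discriminator(nn.Module):
    def __init__(self, cond_dim=12, img_dim=64 * 64 * 3):
        super().__init__()
        # The discriminator receives the (flattened) image x concatenated with the same condition y.
        self.net = nn.Sequential(
            nn.Linear(img_dim + cond_dim, 512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 1),
            nn.Sigmoid(),  # probability that (x, y) is a real, correctly conditioned pair
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))
```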

In this project, we generate anime face images conditioned on two attributes: hair color and eye color. For example, given the text input "green hair blue eyes," the model generates an anime character with green hair and blue eyes. Example results are shown above in Figure 1. For more information, please refer to our technical report (in Chinese) or our poster (in English). Our code is also publicly available on GitHub.
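As one hypothetical illustration of how such a text input can become the condition y, the sketch below maps a phrase like "green hair blue eyes" to a fixed-length vector with one one-hot block per attribute. The tag vocabularies and the helper `encode_condition` are assumptions for illustration; the actual hair and eye color vocabulary depends on the training data.

```python
# Hypothetical text-to-condition encoding (tag lists are illustrative, not the project's vocabulary).
import torch

HAIR_COLORS = ["black", "blue", "brown", "green", "pink", "red"]
EYE_COLORS = ["black", "blue", "brown", "green", "red", "yellow"]

def encode_condition(text: str) -> torch.Tensor:
    """Turn e.g. 'green hair blue eyes' into a one-hot-per-attribute condition vector y."""
    words = text.lower().split()
    hair = torch.zeros(len(HAIR_COLORS))
    eyes = torch.zeros(len(EYE_COLORS))
    for i, word in enumerate(words):
        if word == "hair" and i > 0 and words[i - 1] in HAIR_COLORS:
            hair[HAIR_COLORS.index(words[i - 1])] = 1.0
        if word == "eyes" and i > 0 and words[i - 1] in EYE_COLORS:
            eyes[EYE_COLORS.index(words[i - 1])] = 1.0
    return torch.cat([hair, eyes])  # 12-dimensional y, fed to both G and D

# Example usage with the Generator sketch above:
# y = encode_condition("green hair blue eyes").unsqueeze(0)  # shape (1, 12)
# z = torch.randn(1, 100)                                    # shape (1, 100)
# fake_image = Generator()(z, y)                             # flattened 64x64x3 image
```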