PyTorch implementation of StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. StarGAN can flexibly translate an input image to any desired target domain using only a single generator and a discriminator.

Authors

Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo

Korea University, Clova AI Research (NAVER), The College of New Jersey, HKUST

 

Results

Facial Attribute Transfer on CelebA

The images are generated by StarGAN trained on the CelebA dataset.

Facial Expression Synthesis on RaFD

The images are generated by StarGAN trained on the RaFD dataset.

Facial Expression Synthesis on CelebA

The images are generated by StarGAN trained on both the CelebA and RaFD datasets.

 

Model Description

Training within a Single Dataset

Overview of StarGAN, consisting of two modules, a discriminator D and a generator G. (a) D learns to distinguish between real and fake images and to classify real images into their corresponding domains. (b) G takes both the image and the target domain label as input and generates a fake image. The target domain label is spatially replicated and concatenated with the input image. (c) G tries to reconstruct the original image from the fake image given the original domain label. (d) G tries to generate images that are indistinguishable from real images and classifiable as the target domain by D.
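As a minimal sketch of step (b), the target label can be spatially replicated and concatenated with the image along the channel axis before being fed to G. This is illustrative PyTorch, not the repository's exact code; the tensor names and sizes are assumptions.

import torch

# x: a batch of input images, shape (N, 3, H, W)
# c: target domain labels, shape (N, c_dim), e.g. c_dim=5 CelebA attributes
x = torch.randn(4, 3, 128, 128)
c = torch.tensor([[1., 0., 0., 1., 0.]]).repeat(4, 1)

# Spatially replicate the label vector to shape (N, c_dim, H, W)
c_map = c.view(c.size(0), c.size(1), 1, 1).expand(-1, -1, x.size(2), x.size(3))

# Concatenate along the channel axis: G's first layer sees 3 + c_dim channels
g_input = torch.cat([x, c_map], dim=1)
print(g_input.shape)  # torch.Size([4, 8, 128, 128])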

Training with Multiple Datasets

Overview of StarGAN when training with both CelebA and RaFD. (a)~(d) show the training process using CelebA, and (e)~(h) show the training process using RaFD. (a), (e) The discriminator D learns to distinguish between real and fake images and to minimize the classification error only for the known label. (b), (c), (f), (g) When the mask vector (purple) is [1, 0], the generator G learns to focus on the CelebA label (yellow) and ignore the RaFD label (green) to perform image-to-image translation, and vice versa when the mask vector is [0, 1]. (d), (h) G tries to generate images that are both indistinguishable from real images and classifiable by D as belonging to the target domain.
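A hedged sketch of how such a joint label vector might be assembled for a CelebA sample: the ordering of the label parts and the mask follows the figure description above, not necessarily the repository's exact code.

import torch

c_dim, c2_dim = 5, 8                                 # CelebA attrs, RaFD expressions
celeba_label = torch.tensor([[1., 0., 0., 1., 0.]])  # known CelebA labels
rafd_part    = torch.zeros(1, c2_dim)                # RaFD labels unknown, zeroed out
mask         = torch.tensor([[1., 0.]])              # mask [1, 0] selects CelebA

# The joint label has c_dim + c2_dim + 2 entries and is what G conditions on
c_joint = torch.cat([celeba_label, rafd_part, mask], dim=1)
print(c_joint.shape)  # torch.Size([1, 15])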

 

Prerequisites

Python 3.5+
PyTorch 0.4.0+
TensorFlow 1.3+ (optional, only for TensorBoard logging)

Getting Started

1. Clone the repository

$ git clone https://github.com/yunjey/StarGAN.git
$ cd StarGAN/

2. Download the dataset

(i) CelebA dataset
$ bash download.sh
(ii) RaFD dataset

Because RaFD is not a public dataset, you must first request access to the dataset from the Radboud Faces Database website. Then, you need to create the folder structure as described here; a hedged sketch of loading that layout follows.
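The repository loads RaFD in a torchvision ImageFolder layout, i.e. one subfolder per expression class under the train/ (and test/) root. A minimal sketch, assuming a root of data/RaFD/train; the root path and the resize value here are illustrative, and the eight class names should follow the linked guide.

from torchvision import datasets, transforms

# Assumed layout: data/RaFD/train/angry/*.jpg, data/RaFD/train/happy/*.jpg, ...
transform = transforms.Compose([
    transforms.Resize(128),   # illustrative, matching --image_size=128 below
    transforms.ToTensor(),
])
rafd_train = datasets.ImageFolder('data/RaFD/train', transform=transform)
print(rafd_train.classes)     # one folder per expression, giving c_dim=8 classes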

3. Train StarGAN

(i) Training with CelebA
$ python main.py --mode='train' --dataset='CelebA' --c_dim=5 --image_size=128 --num_epochs=20 --num_epochs_decay=10
(ii) Training with RaFD
$ python main.py --mode='train' --dataset='RaFD' --c_dim=8 --image_size=128 --num_epochs=200 --num_epochs_decay=100
(iii) Training with CelebA+RaFD
$ python main.py --mode='train' --dataset='Both' --c_dim=5 --c2_dim=8 --image_size=256 --num_iters=200000 --num_iters_decay=100000
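These commands optimize the objective sketched in Model Description: an adversarial term, a domain-classification term, and a cycle reconstruction term. Below is a condensed, hedged sketch of one generator update, assuming a WGAN-style critic and the loss weights reported in the paper; G, D, x_real, c_org, and c_trg are assumed to be defined, and binary cross-entropy fits CelebA's multi-attribute labels (RaFD's single expression label would use cross-entropy instead).

import torch
import torch.nn.functional as F

lambda_cls, lambda_rec = 1.0, 10.0     # loss weights reported in the paper

x_fake = G(x_real, c_trg)              # (b) translate to the target domain
out_src, out_cls = D(x_fake)           # real/fake score and domain logits

g_loss_fake = -torch.mean(out_src)     # (d) fool D (WGAN-style sign)
g_loss_cls  = F.binary_cross_entropy_with_logits(out_cls, c_trg)  # (d) hit target domain
x_rec       = G(x_fake, c_org)         # (c) reconstruct the original image
g_loss_rec  = torch.mean(torch.abs(x_real - x_rec))               # L1 cycle loss

g_loss = g_loss_fake + lambda_cls * g_loss_cls + lambda_rec * g_loss_rec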

4. Test StarGAN

(i) Facial attribute transfer on CelebA
$ python main.py --mode='test' --dataset='CelebA' --c_dim=5 --image_size=256 --test_model=20_1000
(ii) Facial expression synthesis on RaFD
$ python main.py --mode='test' --dataset='RaFD' --c_dim=8 --image_size=256 --test_model=200_200
(iii) Facial expression synthesis on CelebA
$ python main.py --mode='test' --dataset='Both' --c_dim=5 --c2_dim=8 --image_size=256 --test_model=200000
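If you want to run a trained generator outside main.py, a rough sketch follows. The Generator class lives in model.py in this repository and its forward pass takes the image and target label; the checkpoint path shown is hypothetical and should be replaced with your own saved model.

import torch
from model import Generator  # generator architecture from this repository

G = Generator(conv_dim=64, c_dim=5)   # c_dim must match training
G.load_state_dict(torch.load('stargan_celebA/models/20_1000_G.pth',  # hypothetical path
                             map_location='cpu'))
G.eval()

x = torch.randn(1, 3, 128, 128)               # stand-in for a preprocessed image
c_trg = torch.tensor([[0., 1., 0., 0., 1.]])  # desired target attributes
with torch.no_grad():
    x_fake = G(x, c_trg)                      # translated image in [-1, 1]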

 

Citation

@article{choi2017stargan,
  title   = {StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation},
  author  = {Choi, Yunjey and Choi, Minje and Kim, Munyoung and Ha, Jung-Woo and Kim, Sunghun and Choo, Jaegul},
  journal = {arXiv preprint arXiv:1711.09020},
  year    = {2017}
}

 

Acknowledgement

This work was mainly done while the first author was a research intern at Clova AI Research, NAVER (CLAIR). We also thank all the researchers at CLAIR, especially Donghyun Kwak, for insightful discussions.