
StyleGAN, ProGAN, and ResNet GANs to experiment with


GAN Lab

[Image grid: 1024x1024 samples generated from a StyleGAN trained on FFHQ (not yet fully trained)]

GAN Lab currently supports StyleGAN, ProGAN, and ResNet GANs.

Each GAN model's default settings emulate its most recent official implementation, but the package also features a configuration file (config.py) where the user can quickly tune an extensive list of hyperparameters to their liking.

It also comes with additional features such as supervised learning capabilities, easy-to-use methods for saving/loading pretrained models, flexible learning-rate scheduling (and re-scheduling), and more.

This package aims for an intuitive API without sacrificing any of the underlying flexibility.


$ pip install gan-lab

This will install all necessary dependencies and enable using the package as a library (see "Jupyter Notebook (or Custom Script) Usage" below).

If you do not wish to use the package as a library (i.e., you just want to install the dependencies and use the repo by running train.py, as shown in the "Basic Usage on Command-line" section below), you can run '$ pip install -r requirements.txt' instead.

Basic Usage on Command-line

Clone this repo, then simply run the following to configure your model & dataset and train your chosen model:

$ python config.py [model] [--optional_kwargs]
$ python data_config.py [dataset] [dataset_dir] [--optional_kwargs]
$ python train.py

The model will be saved into the "./gan_lab/models" directory by default.

If you would like to see a list of what each argument does, run '$ python config.py [model] -h' or '$ python data_config.py [dataset] [dataset_dir] -h' on the command-line.

NOTE: Make sure that all images you would like to use for your model are located directly inside the dataset_dir directory before running data_config.py. Any images within subdirectories of dataset_dir (except for the "train" and "valid" subdirectories that data_config.py itself creates) will not be used when training your model.
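As a quick sanity check before running data_config.py, a short script like the following (a hypothetical helper, not part of the package) can count the images that would actually be picked up — i.e., only the files sitting directly inside dataset_dir:

```python
from pathlib import Path

def count_usable_images(dataset_dir, exts=(".png", ".jpg", ".jpeg")):
    """Count image files located directly inside dataset_dir.

    Mirrors the note above: files inside subdirectories of dataset_dir
    are ignored, since data_config.py only uses top-level images (aside
    from the 'train'/'valid' splits it creates itself).
    """
    root = Path(dataset_dir)
    return sum(
        1 for p in root.iterdir()
        if p.is_file() and p.suffix.lower() in exts
    )
```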

StyleGAN Example:

A StyleGAN Generator that yields 128x128 images can be created by running the following 3 lines; snapshots of generated images are saved periodically as the StyleGAN progressively grows. Of course, this is not the only configuration that works:

$ python config.py stylegan --loss=nonsaturating --gradient_penalty=R1 --res_samples=128 --num_main_iters=1071000 --nimg_transition=630000 --batch_size=8 --enable_cudnn_autotuner --num_workers=12
$ python data_config.py FFHQ path/to/datasets/ffhq --enable_mirror_augmentation
$ python train.py

By default, image grids like the ones above are saved periodically during training into the "./gan_lab/samples" directory every 1,000 iterations (see config.py).
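Since sample grids accumulate in "./gan_lab/samples" over a long run, a small helper like the following (a sketch; it assumes the grids are saved as .png files) can list the most recent ones:

```python
from pathlib import Path

def latest_samples(samples_dir, n=3):
    """Return the filenames of the n most recently modified .png files
    in samples_dir, newest first (assumes .png sample grids)."""
    grids = sorted(
        Path(samples_dir).glob("*.png"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
    return [p.name for p in grids[:n]]
```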

ProGAN Example:

A ProGAN Generator that yields 128x128 images can be created by running the following 3 lines. Of course, this is not the only configuration that works:

$ python config.py progan --res_samples=128 --num_main_iters=1050000 --batch_size=8
$ python data_config.py CelebA-HQ path/to/datasets/celeba_hq --enable_mirror_augmentation
$ python train.py

By default, image grids of generator output are saved periodically during training into the "./gan_lab/samples" directory every 1,000 iterations (see config.py).

ResNet GAN Example:

A ResNet GAN Generator can be created by running the following 3 lines (for example):

$ python config.py resnetgan --lr_base=.00015
$ python data_config.py LSUN-Bedrooms path/to/datasets/lsun_bedrooms
$ python train.py

[SAMPLES FOR RESNET GAN COMING SOON]

Jupyter Notebook (or Custom Script) Usage

Running train.py is just the most basic usage. The package can also be imported and used in a modular manner (like a library). For example, it is often helpful to experiment inside a Jupyter Notebook, as in the example workflow below.

First, configure your GAN to your choosing on the command-line (as explained above in the "Basic Usage on Command-line" section):

$ python config.py stylegan
$ python data_config.py FFHQ path/to/datasets/ffhq

Then, write a custom script or Jupyter Notebook cells:

from gan_lab import get_current_configuration
from gan_lab.utils.data_utils import prepare_dataset, prepare_dataloader
from gan_lab.stylegan.learner import StyleGANLearner

# get most recent configurations:
config = get_current_configuration( 'config' )
data_config = get_current_configuration( 'data_config' )

# get DataLoader(s)
train_ds, valid_ds = prepare_dataset( data_config )
train_dl, valid_dl, z_valid_dl = prepare_dataloader( config, data_config, train_ds, valid_ds )

# instantiate StyleGANLearner and train:
learner = StyleGANLearner( config )
learner.train( train_dl, valid_dl, z_valid_dl )   # train for config.num_main_iters iterations
learner.config.num_main_iters = 300000            # this is one example of changing your instantiated learner's configurations
learner.train( train_dl, valid_dl, z_valid_dl )   # train for another 300000 iterations

# save your trained model:
learner.save_model( 'path/to/models/stylegan_model.tar' )

# later on, you can load this saved model by instantiating the same learner and then running load_model:
# learner = StyleGANLearner( config )
# learner.load_model( 'path/to/models/stylegan_model.tar' )

Some advantages of the Jupyter Notebook workflow (among many):

  • You have the flexibility to decide what to do with your trained model after it's trained rather than deciding everything up front, such as:
    • whether you want to save/load your trained model
    • what learner.config parameters you want to change before training again
  • You can always stop the kernel during training, do something else, and then resume; training will continue to work

NOTE: By default, the --num_workers argument in config.py loads data from just 1 subprocess; setting it to a larger number (one that still falls within the constraints of your CPU(s)) will speed up training significantly.
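To pick a --num_workers value that stays within your CPU budget, something like the following (a sketch, not part of the package) can be used:

```python
import os

def suggest_num_workers(reserve=1):
    """Suggest a --num_workers value: logical CPU count minus a small
    reserve for the main training process; never less than 1."""
    cpus = os.cpu_count() or 1  # os.cpu_count() may return None
    return max(1, cpus - reserve)
```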

TODO:

  • Multi-GPU support
  • TensorBoard capabilities
  • FID, IS, and MS-SSIM metrics calculation
  • Incorporate Spectral Normalization
  • Incorporate Self-attention
  • Incorporate improvements from StyleGAN2 paper (https://arxiv.org/pdf/1912.04958.pdf)
  • TorchScript capabilities
