
FID calculation in PyTorch with proper image resizing and quantization steps


clean-fid for Evaluating Generative Models


Project | Paper | Colab Demo | Leaderboard

The FID calculation involves many steps that can produce inconsistencies in the final metric. As shown below, different implementations use different low-level image quantization and resizing functions, the latter of which are often implemented incorrectly.

We provide an easy-to-use library to address the above issues and make the FID scores comparable across different methods, papers, and groups.

[Figure: steps in the FID calculation pipeline]


On Buggy Resizing Libraries and Surprising Subtleties in FID Calculation
Gaurav Parmar, Richard Zhang, Jun-Yan Zhu
arXiv 2104.11222, 2021
CMU and Adobe


CleanFID Leaderboard for common tasks

We compute the FID scores using both the protocol of the corresponding original papers and the Clean-FID proposed here. All values are computed using 10 evaluation runs.
We provide an API to query the results shown in the tables below directly from the pip package. The arguments model_name, dataset_name, dataset_res, dataset_split, and task_name can be used to filter the results.

CIFAR-10

| Model | Checkpoint | Reported-FID | Legacy-FID (reproduced) | Clean-FID | Reference Split | # reference images used | # generated images used | dataset_name | dataset_res | task_name |
|---|---|---|---|---|---|---|---|---|---|---|
| stylegan2-mirror-flips (100%) | ckpt | 11.07 | 11.07 ± 0.10 | 12.96 ± 0.07 | test | 10000 | 10000 | cifar10 | 32 | few_shot_generation |
| stylegan2-diff-augment (100%) | ckpt | 9.89 | 9.90 ± 0.09 | 10.85 ± 0.10 | test | 10000 | 10000 | cifar10 | 32 | few_shot_generation |
| stylegan2-mirror-flips (20%) | ckpt | 23.08 | 23.01 ± 0.19 | 29.49 ± 0.17 | test | 10000 | 10000 | cifar10 | 32 | few_shot_generation |
| stylegan2-diff-augment (20%) | ckpt | 12.15 | 12.12 ± 0.15 | 14.18 ± 0.13 | test | 10000 | 10000 | cifar10 | 32 | few_shot_generation |
| stylegan2-mirror-flips (10%) | ckpt | 36.02 | 35.94 ± 0.17 | 43.60 ± 0.17 | test | 10000 | 10000 | cifar10 | 32 | few_shot_generation |
| stylegan2-diff-augment (10%) | ckpt | 14.50 | 14.53 ± 0.12 | 16.98 ± 0.18 | test | 10000 | 10000 | cifar10 | 32 | few_shot_generation |

FFHQ @ 1024x1024

| Model | Checkpoint | Reported-FID | Legacy-FID (reproduced) | Clean-FID | Reference Split | # generated images used | # reference images used |
|---|---|---|---|---|---|---|---|
| stylegan2 | ckpt | 2.84 | 2.86 ± 0.025 | 3.07 ± 0.025 | trainval | 50,000 | 50,000 |
| stylegan2 | ckpt | N/A | 2.76 ± 0.025 | 2.98 ± 0.025 | trainval70k | 50,000 | 70,000 |

LSUN Categories

| Category | Model | Checkpoint | Reported-FID | Legacy-FID (reproduced) | Clean-FID | Reference Split | # generated images used | # reference images used |
|---|---|---|---|---|---|---|---|---|
| Outdoor Churches | stylegan2 | ckpt | 3.86 | 3.87 ± 0.029 | 4.08 ± 0.028 | train | 50,000 | 50,000 |
| Horses | stylegan2 | ckpt | 3.43 | 3.41 ± 0.021 | 3.62 ± 0.023 | train | 50,000 | 50,000 |
| Cat | stylegan2 | ckpt | 6.93 | 7.02 ± 0.039 | 7.47 ± 0.035 | train | 50,000 | 50,000 |

FFHQ @ 256x256 (Few Shot Generation)

| Model | Checkpoint | Reported-FID | Legacy-FID (reproduced) | Clean-FID | Reference Split | # generated images used | # reference images used |
|---|---|---|---|---|---|---|---|
| stylegan2 1k | ckpt | 62.16 | 62.14 ± 0.108 | 64.17 ± 0.113 | trainval70k | 50,000 | 70,000 |
| DiffAugment-stylegan2 1k | ckpt | 25.66 | 25.60 ± 0.071 | 27.26 ± 0.077 | trainval70k | 50,000 | 70,000 |
| stylegan2 5k | ckpt | 26.60 | 26.64 ± 0.086 | 28.17 ± 0.090 | trainval70k | 50,000 | 70,000 |
| DiffAugment-stylegan2 5k | ckpt | 10.45 | 10.45 ± 0.047 | 10.99 ± 0.050 | trainval70k | 50,000 | 70,000 |
| stylegan2 10k | ckpt | 14.75 | 14.88 ± 0.070 | 16.04 ± 0.078 | trainval70k | 50,000 | 70,000 |
| DiffAugment-stylegan2 10k | ckpt | 7.86 | 7.82 ± 0.045 | 8.12 ± 0.044 | trainval70k | 50,000 | 70,000 |
| stylegan2 30k | ckpt | 6.16 | 6.14 ± 0.064 | 6.49 ± 0.068 | trainval70k | 50,000 | 70,000 |
| DiffAugment-stylegan2 30k | ckpt | 5.05 | 5.07 ± 0.030 | 5.18 ± 0.032 | trainval70k | 50,000 | 70,000 |

LSUN CAT (Few Shot Generation)

| Model | Checkpoint | Reported-FID | Legacy-FID (reproduced) | Clean-FID | Reference Split | # reference images used | # generated images used | dataset_name | dataset_res | task_name |
|---|---|---|---|---|---|---|---|---|---|---|
| stylegan2-mirror-flips (30k) | ckpt | 10.12 | 10.15 ± 0.04 | 10.87 ± 0.04 | trainfull | 1657264 | 50000 | lsun_cat | 256 | few_shot_generation |
| stylegan2-diff-augment (30k) | ckpt | 9.68 | 9.70 ± 0.07 | 10.25 ± 0.07 | trainfull | 1657264 | 50000 | lsun_cat | 256 | few_shot_generation |
| stylegan2-mirror-flips (10k) | ckpt | 17.93 | 17.98 ± 0.09 | 18.71 ± 0.09 | trainfull | 1657264 | 50000 | lsun_cat | 256 | few_shot_generation |
| stylegan2-diff-augment (10k) | ckpt | 12.07 | 12.04 ± 0.08 | 12.53 ± 0.08 | trainfull | 1657264 | 50000 | lsun_cat | 256 | few_shot_generation |
| stylegan2-mirror-flips (5k) | ckpt | 34.69 | 34.66 ± 0.12 | 35.85 ± 0.12 | trainfull | 1657264 | 50000 | lsun_cat | 256 | few_shot_generation |
| stylegan2-diff-augment (5k) | ckpt | 16.11 | 16.11 ± 0.09 | 16.79 ± 0.09 | trainfull | 1657264 | 50000 | lsun_cat | 256 | few_shot_generation |
| stylegan2-mirror-flips (1k) | ckpt | 182.85 | 182.80 ± 0.21 | 185.86 ± 0.21 | trainfull | 1657264 | 50000 | lsun_cat | 256 | few_shot_generation |
| stylegan2-diff-augment (1k) | ckpt | 42.26 | 42.07 ± 0.16 | 43.12 ± 0.16 | trainfull | 1657264 | 50000 | lsun_cat | 256 | few_shot_generation |

AFHQ

| Model | Checkpoint | Reported-FID | Legacy-FID (reproduced) | Clean-FID | Reported-KID (x 10^3) | Legacy-KID (reproduced) (x 10^3) | Clean-KID (x 10^3) | Reference Split | # reference images used | # generated images used | dataset_name | dataset_res | task_name |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| stylegan2 | ckpt | 19.37 | 19.34 ± 0.08 | 20.10 ± 0.08 | 9.62 | 9.56 ± 0.12 | 10.21 ± 0.11 | train | 4739 | 50000 | afhq_dog | 512 | few_shot_generation |
| stylegan2-ada | ckpt | 7.40 | 7.41 ± 0.02 | 7.61 ± 0.02 | 1.16 | 1.17 ± 0.03 | 1.28 ± 0.03 | train | 4739 | 50000 | afhq_dog | 512 | few_shot_generation |
| stylegan2 | ckpt | 3.48 | 3.55 ± 0.03 | 3.66 ± 0.02 | 0.77 | 0.78 ± 0.02 | 0.83 ± 0.01 | train | 4738 | 50000 | afhq_wild | 512 | few_shot_generation |
| stylegan2-ada | ckpt | 3.05 | 3.01 ± 0.02 | 3.03 ± 0.02 | 0.45 | 0.45 ± 0.01 | 0.45 ± 0.01 | train | 4738 | 50000 | afhq_wild | 512 | few_shot_generation |

BreCaHAD

| Model | Checkpoint | Reported-FID | Legacy-FID (reproduced) | Clean-FID | Reported-KID (x 10^3) | Legacy-KID (reproduced) (x 10^3) | Clean-KID (x 10^3) | Reference Split | # reference images used | # generated images used | dataset_name | dataset_res | task_name |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| stylegan2 | ckpt | 97.72 | 97.46 ± 0.17 | 98.35 ± 0.17 | 89.76 | 89.90 ± 0.31 | 92.51 ± 0.32 | train | 1944 | 50000 | brecahad | 512 | few_shot_generation |

MetFaces

| Model | Checkpoint | Reported-FID | Legacy-FID (reproduced) | Clean-FID | Reported-KID (x 10^3) | Legacy-KID (reproduced) (x 10^3) | Clean-KID (x 10^3) | Reference Split | # reference images used | # generated images used | dataset_name | dataset_res | task_name |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| stylegan2 | ckpt | 57.26 | 57.36 ± 0.10 | 65.74 ± 0.11 | 35.66 | 35.69 ± 0.16 | 40.90 ± 0.14 | train | 1336 | 50000 | metfaces | 1024 | few_shot_generation |

Horse2Zebra (Image to Image)

| Model | Reported-FID | Legacy-FID (reproduced) | Clean-FID | Reference Split | # generated images used | # reference images used |
|---|---|---|---|---|---|---|
| CUT | 45.5 | 45.51 | 43.71 | test | 120 | 140 |
| FastCUT | 73.4 | 73.38 | 72.53 | test | 120 | 140 |

Cat2Dog (Image to Image)

| Model | Reported-FID | Legacy-FID (reproduced) | Clean-FID | Reference Split | # generated images used | # reference images used |
|---|---|---|---|---|---|---|
| CUT | 76.2 | 76.21 | 77.58 | test | 500 | 500 |
| FastCUT | 94.0 | 93.95 | 95.37 | test | 500 | 500 |
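
Several of the tables above (AFHQ, BreCaHAD, MetFaces) also report KID values. The sketch below shows one way to compute KID between two image folders with this package; it assumes your installed clean-fid version exposes fid.compute_kid and uses placeholder folder names, so treat it as illustrative rather than as the exact leaderboard protocol.

from cleanfid import fid

# placeholder folders of real and generated images
fdir_real, fdir_fake = "folder_real", "folder_fake"

# KID between the two folders; multiply by 1e3 to match the "x 10^3" scale above
kid = fid.compute_kid(fdir_real, fdir_fake)
print(f"KID x 10^3: {kid * 1e3:.2f}")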


Buggy Resizing Operations

The definition of a resizing function is mathematical and should never depend on the library being used. Unfortunately, implementations differ across commonly used libraries, and several popular libraries implement the operation incorrectly. Try out the different resizing implementations in the Google Colab notebook here.


These inconsistencies can have a drastic effect on the evaluation metrics. The table below shows that FFHQ dataset images resized with the bicubic implementations from other libraries (OpenCV, PyTorch, TensorFlow) have a large FID score (≥ 6) when compared to the same images resized with the correctly implemented PIL bicubic filter. Other correctly implemented filters from PIL (Lanczos, bilinear, box) all result in relatively small FID scores (≤ 0.75). Note that since TF 2.0, the antialias flag (default: False) can produce results close to PIL; however, the existing TF-FID repo does not set it, so the incorrect default behavior is used.
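
The snippet below is a minimal, self-contained sketch (not part of the clean-fid API) that makes the discrepancy concrete: the same image is downsampled with PIL's antialiased bicubic filter and with PyTorch's default bicubic interpolation, and the two outputs disagree. A random array stands in for a real photo here.

import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image

# stand-in for a real 1024x1024 RGB image
img = np.random.RandomState(0).randint(0, 256, (1024, 1024, 3), dtype=np.uint8)

# PIL: antialiased bicubic filter (the behavior clean-fid standardizes on)
pil_out = np.asarray(Image.fromarray(img).resize((299, 299), resample=Image.BICUBIC))

# PyTorch: bicubic interpolation without antialiasing (the default)
t = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).float()
torch_out = F.interpolate(t, size=(299, 299), mode="bicubic", align_corners=False)
torch_out = torch_out.squeeze(0).permute(1, 2, 0).clamp(0, 255).byte().numpy()

# the per-pixel disagreement that shifts Inception features and hence FID
print(np.abs(pil_out.astype(int) - torch_out.astype(int)).mean())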

JPEG Image Compression

Image compression can have a surprisingly large effect on FID: compressed images can be perceptually indistinguishable from the originals and yet yield a large FID score. In the figure below, the FID score shown under each image is computed between all FFHQ images saved at the corresponding JPEG quality and the original PNG images.

Below, we study the effect of JPEG compression for StyleGAN2 models trained on the FFHQ dataset (left) and the LSUN Outdoor Church dataset (right). Note that the LSUN dataset images were collected with JPEG compression (quality 75), whereas the FFHQ images were collected as PNG. Interestingly, for the LSUN dataset, the best FID score (3.48) is obtained when the generated images are compressed with JPEG quality 87.
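
The effect is easy to reproduce with the library itself. The sketch below re-saves a folder of PNG images as JPEG (quality 75) and scores the two folders against each other; the folder names are placeholders.

import os
from PIL import Image
from cleanfid import fid

src_dir, jpg_dir = "ffhq_png", "ffhq_jpeg_q75"   # placeholder folders
os.makedirs(jpg_dir, exist_ok=True)

# re-encode every PNG as a quality-75 JPEG
for name in os.listdir(src_dir):
    if name.lower().endswith(".png"):
        img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        img.save(os.path.join(jpg_dir, name[:-4] + ".jpg"), quality=75)

# FID between the original PNGs and their JPEG re-encodings
print(fid.compute_fid(src_dir, jpg_dir, mode="clean"))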


Quick Start

  • Install the requirements

    pip install -r requirements.txt
    
  • Install the library

    pip install clean-fid
    
  • Compute FID between two image folders

    from cleanfid import fid
    
    score = fid.compute_fid(fdir1, fdir2)
    
  • Compute FID between one folder of images and pre-computed datasets statistics (e.g., FFHQ)

    from cleanfid import fid
    
    score = fid.compute_fid(fdir1, dataset_name="FFHQ", dataset_res=1024)
    
    
  • Compute FID using a generative model and pre-computed dataset statistics:

    from cleanfid import fid
    
    # function that accepts a batch of latents and returns images in the range [0, 255]
    gen = lambda z: GAN(latent=z, ... , <other_flags>)
    
    score = fid.compute_fid(gen=gen, dataset_name="FFHQ",
            dataset_res=256, num_gen=50_000)
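
For a fuller picture of the gen= interface, here is a self-contained sketch in which ToyGenerator and netG are hypothetical stand-ins for a real GAN. It assumes compute_fid samples the latents itself (latent size 512, matching the call above) and expects a batch of images in NCHW layout with values in [0, 255].

import torch
from cleanfid import fid

class ToyGenerator(torch.nn.Module):
    """Hypothetical stand-in for a real GAN generator."""
    def __init__(self, z_dim=512, res=256):
        super().__init__()
        self.res = res
        self.fc = torch.nn.Linear(z_dim, 3 * 16 * 16)

    def forward(self, z):
        x = torch.sigmoid(self.fc(z)).view(-1, 3, 16, 16)
        x = torch.nn.functional.interpolate(x, size=(self.res, self.res), mode="nearest")
        return x * 255.0  # assumed output contract: (N, 3, res, res) in [0, 255]

netG = ToyGenerator().eval()
gen = lambda z: netG(z)

# fewer samples than the 50_000 used above, for a quick sanity check
score = fid.compute_fid(gen=gen, dataset_name="FFHQ",
                        dataset_res=256, num_gen=5_000)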
    
    

Supported Precomputed Datasets

We provide precomputed statistics for the following commonly used configurations:

| Task | Dataset | Resolution | Reference Split | # Reference Images | mode |
|---|---|---|---|---|---|
| Image Generation | cifar10 | 32 | train | 50,000 | clean, legacy_tensorflow, legacy_pytorch |
| Image Generation | cifar10 | 32 | test | 10,000 | clean, legacy_tensorflow, legacy_pytorch |
| Image Generation | ffhq | 1024, 256 | trainval | 50,000 | clean, legacy_tensorflow, legacy_pytorch |
| Image Generation | ffhq | 1024, 256 | trainval70k | 70,000 | clean, legacy_tensorflow, legacy_pytorch |
| Image Generation | lsun_church | 256 | train | 50,000 | clean, legacy_tensorflow, legacy_pytorch |
| Image Generation | lsun_horse | 256 | train | 50,000 | clean, legacy_tensorflow, legacy_pytorch |
| Image Generation | lsun_cat | 256 | train | 50,000 | clean, legacy_tensorflow, legacy_pytorch |
| Image Generation | lsun_cat | 256 | trainfull | 1,657,264 | clean, legacy_tensorflow, legacy_pytorch |
| Few Shot Generation | afhq_cat | 512 | train | 5153 | clean, legacy_tensorflow, legacy_pytorch |
| Few Shot Generation | afhq_dog | 512 | train | 4739 | clean, legacy_tensorflow, legacy_pytorch |
| Few Shot Generation | afhq_wild | 512 | train | 4738 | clean, legacy_tensorflow, legacy_pytorch |
| Few Shot Generation | brecahad | 512 | train | 1944 | clean, legacy_tensorflow, legacy_pytorch |
| Few Shot Generation | metfaces | 1024 | train | 1336 | clean, legacy_tensorflow, legacy_pytorch |
| Image to Image | horse2zebra | 256 | test | 140 | clean, legacy_tensorflow, legacy_pytorch |
| Image to Image | cat2dog | 256 | test | 500 | clean, legacy_tensorflow, legacy_pytorch |

Using precomputed statistics: to compute the FID score against precomputed dataset statistics, pass the corresponding options. For instance, to compute the clean-fid score on generated 256x256 FFHQ images, use:

fid_score = fid.compute_fid(fdir1, dataset_name="ffhq", dataset_res=256, mode="clean", dataset_split="trainval70k")

Create Custom Dataset Statistics

  • dataset_path: folder where the dataset images are stored

  • custom_name: name to be used for the statistics

  • Generating custom statistics (saved to local cache)

    from cleanfid import fid
    fid.make_custom_stats(custom_name, dataset_path, mode="clean")
    
  • Using the generated custom statistics

    from cleanfid import fid
    score = fid.compute_fid("folder_fake", dataset_name=custom_name,
              mode="clean", dataset_split="custom")
    
  • Removing the custom stats

    from cleanfid import fid
    fid.remove_custom_stats(custom_name, mode="clean")
    
  • Check if a custom statistic already exists

    from cleanfid import fid
    fid.test_stats_exists(custom_name, mode)
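
The steps above can be combined into a single workflow. The sketch below uses placeholder names ("my_dataset" and the two folder paths) and only rebuilds the cached statistics if they are missing.

from cleanfid import fid

custom_name = "my_dataset"                               # placeholder statistics name
real_dir, fake_dir = "real_images", "generated_images"   # placeholder folders

# build the cached statistics once, if they do not exist yet
if not fid.test_stats_exists(custom_name, mode="clean"):
    fid.make_custom_stats(custom_name, real_dir, mode="clean")

# score a folder of generated images against the cached statistics
score = fid.compute_fid(fake_dir, dataset_name=custom_name,
                        mode="clean", dataset_split="custom")
print(score)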
    

Backwards Compatibility

We provide two flags to reproduce the legacy FID score.

  • mode="legacy_pytorch"
    This flag is equivalent to using the popular PyTorch FID implementation provided here
    The difference between using clean-fid with this option and code is ~2e-06
    See doc for how the methods are compared

  • mode="legacy_tensorflow"
    This flag is equivalent to using the official implementation of FID released by the authors.
    The difference between using clean-fid with this option and code is ~2e-05
    See doc for detailed steps for how the methods are compared
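
A quick way to see the three modes side by side is to score the same pair of folders under each one; the folder names below are placeholders.

from cleanfid import fid

fdir1, fdir2 = "folder_real", "folder_fake"   # placeholder folders
for mode in ["clean", "legacy_pytorch", "legacy_tensorflow"]:
    print(mode, fid.compute_fid(fdir1, fdir2, mode=mode))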


Building clean-fid locally from source

python setup.py bdist_wheel
pip install dist/*

Citation

If you find this repository useful for your research, please cite the following work.

@article{parmar2021cleanfid,
  title={On Buggy Resizing Libraries and Surprising Subtleties in FID Calculation},
  author={Parmar, Gaurav and Zhang, Richard and Zhu, Jun-Yan},
  journal={arXiv preprint arXiv:2104.11222},
  year={2021}
}

Related Projects

torch-fidelity: High-fidelity performance metrics for generative models in PyTorch.
TTUR: Two time-scale update rule for training GANs.
LPIPS: Perceptual Similarity Metric and Dataset.

Credits

PyTorch-StyleGAN2 (LICENSE)

PyTorch-FID (LICENSE)

StyleGAN2 (LICENSE)

converted FFHQ weights: code | LICENSE
