clean-fid: Fixing Inconsistencies in FID
The FID calculation involves many steps that can produce inconsistencies in the final metric. As shown below, different implementations use different low-level image quantization and resizing functions, the latter of which are often implemented incorrectly.
We provide an easy-to-use library to address the above issues and make the FID scores comparable across different methods, papers, and groups.
On Buggy Resizing Libraries and Surprising Subtleties in FID Calculation
Gaurav Parmar, Richard Zhang, Jun-Yan Zhu
In arXiv 2104.11222
CMU and Adobe
Buggy Resizing Operations
Resizing functions are mathematically defined and should not depend on the library being used. Unfortunately, implementations differ across commonly used libraries, and several popular ones implement them incorrectly.
The inconsistencies among implementations can have a drastic effect on the evaluation metrics. The table below shows that FFHQ dataset images resized with the bicubic implementation from other libraries (OpenCV, PyTorch, TensorFlow) have a large FID score (≥ 6) when compared to the same images resized with the correctly implemented PIL-bicubic filter. Other correctly implemented filters from PIL (Lanczos, bilinear, box) all result in relatively smaller FID scores (≤ 0.75).
JPEG Image Compression
Image compression can have a surprisingly large effect on FID: images that are perceptually indistinguishable from each other can still have a large FID score. The FID scores under the images are calculated between all FFHQ images saved in the corresponding JPEG format and in the PNG format.
Below, we study the effect of JPEG compression for StyleGAN2 models trained on the FFHQ dataset (left) and the LSUN outdoor Church dataset (right). Note that LSUN dataset images were collected with JPEG compression (quality 75), whereas FFHQ images were collected as PNG. Interestingly, for the LSUN dataset, the best FID score (3.48) is obtained when the generated images are compressed with JPEG quality 87.
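The effect is easy to measure directly. The sketch below (our own example using PIL, not part of the library) round-trips an image through an in-memory JPEG at quality 75, the setting used when the LSUN images were collected, and reports how much the pixels move:

```python
import io

import numpy as np
from PIL import Image

# A random image stands in for a generated sample (illustration only).
rng = np.random.default_rng(0)
arr = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Round-trip through an in-memory JPEG at quality 75.
buf = io.BytesIO()
Image.fromarray(arr).save(buf, format="JPEG", quality=75)
decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())), dtype=np.float64)

# Pixels shift even though the images may look alike; the Inception
# features behind FID are sensitive to exactly this kind of perturbation.
mae = np.abs(decoded - arr.astype(np.float64)).mean()
print(f"mean absolute pixel change at quality 75: {mae:.2f}")
```

Because the generated and reference images may have passed through different compression pipelines, the two sides of an FID comparison should always use the same image format and quality.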
Quick Start
- Install requirements

    ```
    pip install -r requirements.txt
    ```

- Install the library

    ```
    pip install clean-fid
    ```

- Compute FID between two image folders

    ```python
    from cleanfid import fid

    score = fid.compare_folders(fdir1, fdir2, num_workers=0,
                                batch_size=8, device=torch.device("cuda"),
                                use_legacy_pytorch=False,
                                use_legacy_tensorflow=False)
    ```

- Compute FID of a folder of generated images

    ```python
    from cleanfid import fid

    score = fid.fid_folder(fdir, dataset_name="FFHQ", dataset_res=1024,
                           model=None, use_legacy_pytorch=False,
                           use_legacy_tensorflow=False, num_workers=12,
                           batch_size=128, device=torch.device("cuda"))
    ```

- Compute FID inline

    ```python
    from cleanfid import fid

    # function that accepts a latent and returns an image in range [0, 255]
    gen = lambda z: GAN(latent=z, ... , <other_flags>)

    fid_score = fid.fid_model(gen, dataset_name="FFHQ", dataset_res=1024,
                              model=None, z_dim=512, num_fid=50_000,
                              use_legacy_pytorch=False,
                              use_legacy_tensorflow=False, num_workers=0,
                              batch_size=128, device=torch.device("cuda"))
    ```
Make Custom Dataset Statistics
- dataset_path: folder where the dataset images are stored
- Generate and save the inception statistics
    ```python
    import numpy as np
    from cleanfid import fid

    dataset_path = ...
    feats = fid.get_folder_features(dataset_path, num=50_000)
    mu = np.mean(feats, axis=0)
    sigma = np.cov(feats, rowvar=False)
    np.savez_compressed("stats.npz", mu=mu, sigma=sigma)
    ```
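The saved `mu`/`sigma` define a Gaussian over Inception features, and FID between two sets of statistics is the Fréchet distance between the corresponding Gaussians. A minimal pure-NumPy sketch of that formula (our own helper for illustration, not the library's API; clean-fid computes this internally):

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):

    d^2 = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrtm(sigma1 @ sigma2))
    """
    diff = mu1 - mu2
    # For PSD covariances the eigenvalues of sigma1 @ sigma2 are real and
    # non-negative, so Tr(sqrtm(sigma1 @ sigma2)) is the sum of their roots.
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    tr_covmean = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * tr_covmean)

# Identical statistics give a distance of 0 (up to float error).
mu, sigma = np.zeros(3), np.eye(3)
print(frechet_distance(mu, sigma, mu, sigma))
```

Given two saved `stats.npz` files, the score is simply `frechet_distance(a["mu"], a["sigma"], b["mu"], b["sigma"])` after loading each with `np.load`.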
Backwards Compatibility
We provide two flags to reproduce the legacy FID score.
- `use_legacy_pytorch`

    This flag is equivalent to using the popular PyTorch FID implementation provided here. The difference between CleanFID with the `use_legacy_pytorch` flag and that code is ~1.9e-06. See the documentation for how the methods are compared.

- `use_legacy_tensorflow`

    This flag is equivalent to using the official implementation of FID released by the authors. To use this flag, you need to additionally install TensorFlow. The TensorFlow version may cause issues with the PyTorch code. We have tested this with tensorflow-cpu 2.2 (`pip install tensorflow-cpu==2.2`).
CleanFID Leaderboard for common tasks
FFHQ @ 1024x1024
| Model | Legacy-FID | Clean-FID |
|---|---|---|
| StyleGAN2 | 2.85 ± 0.05 | 3.08 ± 0.05 |
| StyleGAN | 4.44 ± 0.04 | 4.82 ± 0.04 |
| MSG-GAN | 6.09 ± 0.04 | 6.58 ± 0.06 |
Image-to-Image (horse → zebra @ 256x256), computed using test images
| Model | Legacy-FID | Clean-FID |
|---|---|---|
| CycleGAN | 77.20 | 75.17 |
| CUT | 45.51 | 43.71 |
Building from source
```
python setup.py bdist_wheel
pip install dist/*
```
Citation
If you find this repository useful for your research, please cite the following work.
@article{parmar2021cleanfid,
title={On Buggy Resizing Libraries and Surprising Subtleties in FID Calculation},
author={Parmar, Gaurav and Zhang, Richard and Zhu, Jun-Yan},
journal={arXiv preprint arXiv:2104.11222},
year={2021}
}
Credits