PyTorch Toolbox for Image Quality Assessment

An IQA toolbox implemented in pure Python and PyTorch. Please refer to Awesome-Image-Quality-Assessment for a comprehensive survey of IQA methods and download links for IQA datasets.

:open_book: Introduction

This is a comprehensive image quality assessment (IQA) toolbox built with pure Python and PyTorch. We provide reimplementations of many mainstream full-reference (FR) and no-reference (NR) metrics, with results calibrated against the official MATLAB scripts where they exist. With GPU acceleration, most of our implementations are much faster than their MATLAB counterparts. Please refer to the documentation for details.


:triangular_flag_on_post: Updates/Changelog

  • 🎨Oct, 2024. Add the perceptual color difference metric msswd proposed in MS-SWD (ECCV2024). Thanks for their work! 🤗
  • Sep, 2024. Add an efficiency benchmark. With $1080\times800$ images as input, all metrics complete in under 1 second on the GPU (NVIDIA V100), and most of them, except for qalign and qalign_8bit, require less than 6GB of GPU memory.
  • Aug, 2024. Add qalign_4bit and qalign_8bit with much lower memory requirements and similar performance.
  • Aug, 2024. Add the piqe metric, plus niqe_matlab and brisque_matlab with default MATLAB parameters (results calibrated against MATLAB R2021b).
  • 💥Aug, 2024. Add lpips+ and lpips-vgg+ proposed in our paper TOPIQ.
  • 🔥June, 2024. Add arniqa and its variants trained on different datasets; refer to the official repo here. Thanks for the contribution from Lorenzo Agnolucci 🤗.
  • Apr 24, 2024. Add inception_score and a console entry point via the pyiqa command.
  • Mar 11, 2024. Add unique; refer to the official repo here. Thanks for the contribution from Weixia Zhang 🤗.
  • More

:zap: Quick Start

Installation

# Install with pip
pip install pyiqa

# Install latest github version
pip uninstall pyiqa  # uninstall first if an older version is already installed
pip install git+https://github.com/chaofengc/IQA-PyTorch.git

# Install with git clone
git clone https://github.com/chaofengc/IQA-PyTorch.git
cd IQA-PyTorch
pip install -r requirements.txt
python setup.py develop

Basic Usage

You can simply use the package through its command-line interface.

# list all available metrics
pyiqa -ls

# test with default settings
pyiqa [metric_name(s)] --target [image_path or dir] --ref [image_path or dir]
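
For example, the command below is a concrete instance of the pattern above: it scores every image in a distorted directory against a reference directory with two FR metrics (the paths come from the repository's calibration folders).

# any metrics listed by `pyiqa -ls` can be substituted here
pyiqa psnr ssim --target ./ResultsCalibra/dist_dir --ref ./ResultsCalibra/ref_dir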

Advanced Usage with Code

Test metrics

import pyiqa
import torch

# list all available metrics
print(pyiqa.list_models())

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

# create metric with default setting
iqa_metric = pyiqa.create_metric('lpips', device=device)

# check if lower better or higher better
print(iqa_metric.lower_better)

# example for iqa score inference
# Tensor inputs, img_tensor_x/y: (N, 3, H, W), RGB, 0 ~ 1
score_fr = iqa_metric(img_tensor_x, img_tensor_y)

# image paths as inputs
score_fr = iqa_metric('./ResultsCalibra/dist_dir/I03.bmp', './ResultsCalibra/ref_dir/I03.bmp')

# For FID metric, use directory or precomputed statistics as inputs
# refer to clean-fid for more details: https://github.com/GaParmar/clean-fid
fid_metric = pyiqa.create_metric('fid')
score = fid_metric('./ResultsCalibra/dist_dir/', './ResultsCalibra/ref_dir')
score = fid_metric('./ResultsCalibra/dist_dir/', dataset_name="FFHQ", dataset_res=1024, dataset_split="trainval70k")

Use as loss functions

Note that gradient propagation is disabled by default. Set as_loss=True to enable it when using a metric as a loss function. Not all metrics support backpropagation; please refer to the Model Cards, and make sure you use the score in a lower-is-better direction (see the sketch after the snippet below).

lpips_loss = pyiqa.create_metric('lpips', device=device, as_loss=True)

ssim_loss = pyiqa.create_metric('ssimc', device=device, as_loss=True)
loss = 1 - ssim_loss(img_tensor_x, img_tensor_y)   # ssim is higher-better, so invert it
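
As an illustration, here is a minimal optimization sketch that backpropagates through a metric created with as_loss=True. The random tensors are placeholders for real images:

import torch
import pyiqa

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
lpips_loss = pyiqa.create_metric('lpips', device=device, as_loss=True)

# target image and learnable prediction, both (N, 3, H, W), RGB, 0 ~ 1
target = torch.rand(1, 3, 256, 256, device=device)
pred = torch.rand(1, 3, 256, 256, device=device, requires_grad=True)

optimizer = torch.optim.Adam([pred], lr=1e-2)
for step in range(100):
    optimizer.zero_grad()
    loss = lpips_loss(pred, target)  # lpips is lower-better, minimize directly
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        pred.clamp_(0, 1)  # keep the optimized image in the valid range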

Use custom settings and weights

We also provide a flexible way to use custom settings and weights in case you want to retrain or fine-tune the models.

iqa_metric = pyiqa.create_metric('topiq_nr', device=device, **custom_opts)

# If you trained the model with this package, the weights are saved in weight_dict['params'].
# Otherwise, set weight_keys=None.
iqa_metric.load_weights('path/to/weights.pth', weight_keys='params')
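
For weights trained outside this package (i.e., a checkpoint storing a plain state_dict instead of {'params': ...}), the same call should work with weight_keys=None:

# assumes 'path/to/weights.pth' holds a raw state_dict
iqa_metric.load_weights('path/to/weights.pth', weight_keys=None)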

Example Test script

Example test script with input directory/images and reference directory/images.

# example for FR metric with dirs
python inference_iqa.py -m LPIPS[or lpips] -i ./ResultsCalibra/dist_dir[dist_img] -r ./ResultsCalibra/ref_dir[ref_img]

# example for NR metric with single image
python inference_iqa.py -m brisque -i ./ResultsCalibra/dist_dir/I03.bmp

:1st_place_medal: Benchmark Performances and Model Zoo

Results Calibration

Please refer to the results calibration to verify the correctness of the Python implementations against the official MATLAB or Python scripts.

⏬ Download Benchmark Datasets

For convenience, we have uploaded all related datasets to Hugging Face (IQA-Toolbox-Dataset) and the corresponding meta-information files to IQA-Toolbox-Dataset-metainfo. Here is example code to download them from Hugging Face:

[!CAUTION] We collect these datasets for academic, research, and educational purposes only. It is important for users to adhere to the usage guidelines, licensing terms, and conditions set forth by the original creators or owners of each dataset.

import os
from huggingface_hub import snapshot_download

save_dir = './datasets'
os.makedirs(save_dir, exist_ok=True)

filename = "koniq10k.tgz"
snapshot_download("chaofengc/IQA-Toolbox-Datasets", repo_type="dataset", local_dir=save_dir, allow_patterns=filename, local_dir_use_symlinks=False)

os.system(f"tar -xzvf {save_dir}/{filename} -C {save_dir}")

Download the meta information from Hugging Face with git clone, or update it with git pull:

cd ./datasets
git clone https://huggingface.co/datasets/chaofengc/IQA-Toolbox-Datasets-metainfo meta_info

cd ./datasets/meta_info
git pull

Example options for specific datasets can be found in ./options/default_dataset_opt.yml. Details of the dataloader interface and meta-information files can be found in Dataset Preparation.

Performance Evaluation Protocol

We use official models for evaluation if available. Otherwise, we use the following settings to train and evaluate different models for simplicity and consistency:

| Metric Type | Train | Test | Results |
| --- | --- | --- | --- |
| FR | KADID-10k | CSIQ, LIVE, TID2008, TID2013 | FR benchmark results |
| NR | KonIQ-10k | LIVEC, KonIQ-10k (official split), TID2013, SPAQ | NR benchmark results |
| Aesthetic IQA | AVA | AVA (official split) | IAA benchmark results |
| Efficiency | CPU/GPU Time, GPU Memory | Average on $1080\times800$ image inputs | Efficiency benchmark |

Results are calculated with:

  • PLCC without any correction. Although test-time value correction is common in IQA papers, we use the original values in our benchmark (see the sketch after this list).
  • Full image single input. We do not use multi-patch testing unless necessary.
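
For reference, here is a minimal sketch of how PLCC (and the commonly reported SRCC) can be computed from predicted scores and MOS labels with scipy.stats. The arrays are hypothetical, and this is illustrative rather than the toolbox's exact evaluation code:

import numpy as np
from scipy import stats

# hypothetical predicted scores and ground-truth MOS labels
pred_scores = np.array([0.62, 0.81, 0.33, 0.90, 0.47])
mos_labels = np.array([3.1, 4.0, 2.2, 4.5, 2.9])

plcc = stats.pearsonr(pred_scores, mos_labels)[0]   # linear correlation, no value correction
srcc = stats.spearmanr(pred_scores, mos_labels)[0]  # rank correlation
print(f'PLCC: {plcc:.4f}, SRCC: {srcc:.4f}')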

Basically, we use the largest existing datasets for training and report cross-dataset evaluation performance for fair comparison. The following models do not provide official weights and are retrained with our scripts:

| Metric Type | Reproduced Models |
| --- | --- |
| FR | wadiqam_fr |
| NR | cnniqa, dbcnn, hyperiqa, wadiqam_nr |
| Aesthetic IQA | nima, nima-vgg16-ava |

[!NOTE]

  • Due to the optimized training process, the performance of some retrained approaches may differ from the original papers.
  • Results of all models retrained by us are normalized to [0, 1] and converted to higher-is-better for convenience.
  • Results on KonIQ-10k and AVA are both tested with the official splits.
  • NIMA is currently only applicable to the AVA dataset. We use inception_resnet_v2 for the default nima.
  • MUSIQ is not included in the IAA benchmark because the train/test split of the official model is not available.

Benchmark Performance with Provided Script

Here is an example script to benchmark performance on different datasets:

# NOTE: this script will test ALL specified metrics on ALL specified datasets
# Test default metrics on default datasets
python benchmark_results.py -m psnr ssim -d csiq tid2013 tid2008

# Test with your own options
python benchmark_results.py -m psnr --data_opt options/example_benchmark_data_opts.yml

python benchmark_results.py --metric_opt options/example_benchmark_metric_opts.yml -d tid2013 tid2008

python benchmark_results.py --metric_opt options/example_benchmark_metric_opts.yml --data_opt options/example_benchmark_data_opts.yml

:hammer_and_wrench: Train

Example Train Script

An example of training DBCNN on the LIVEChallenge dataset:

# train for single experiment
python pyiqa/train.py -opt options/train/DBCNN/train_DBCNN.yml

# train N splits for small datasets
python pyiqa/train_nsplits.py -opt options/train/DBCNN/train_DBCNN.yml

Example for distributed training

torchrun --nproc_per_node=2 --master_port=4321 pyiqa/train.py -opt options/train/CLIPIQA/train_CLIPIQA_koniq10k.yml --launcher pytorch

:beers: Contribution

Any contributions to this repository are greatly appreciated. Please follow the contribution instructions for guidance.

:scroll: License

This work is licensed under the NTU S-Lab License and the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


:bookmark_tabs: Citation

If you find our code helpful to your research, please consider citing it as follows:

@misc{pyiqa,
  title={{IQA-PyTorch}: PyTorch Toolbox for Image Quality Assessment},
  author={Chaofeng Chen and Jiadi Mo},
  year={2022},
  howpublished = "[Online]. Available: \url{https://github.com/chaofengc/IQA-PyTorch}"
}

Please also consider citing our work on image quality assessment if it is useful to you:

@article{chen2024topiq,
  author={Chen, Chaofeng and Mo, Jiadi and Hou, Jingwen and Wu, Haoning and Liao, Liang and Sun, Wenxiu and Yan, Qiong and Lin, Weisi},
  title={TOPIQ: A Top-Down Approach From Semantics to Distortions for Image Quality Assessment}, 
  journal={IEEE Transactions on Image Processing}, 
  year={2024},
  volume={33},
  pages={2404-2418},
  doi={10.1109/TIP.2024.3378466}
}
@inproceedings{wu2024qalign,
  title={Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels},
  author={Wu, Haoning and Zhang, Zicheng and Zhang, Weixia and Chen, Chaofeng and Li, Chunyi and Liao, Liang and Wang, Annan and Zhang, Erli and Sun, Wenxiu and Yan, Qiong and Min, Xiongkuo and Zhai, Guangtao and Lin, Weisi},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2024},
  institution={Nanyang Technological University and Shanghai Jiao Tong University and Sensetime Research},
  note={Equal Contribution by Wu, Haoning and Zhang, Zicheng. Project Lead by Wu, Haoning. Corresponding Authors: Zhai, Guangtao and Lin, Weisi.}
}

:heart: Acknowledgement

The code architecture is borrowed from BasicSR. Several implementations are taken from IQA-optimization, Image-Quality-Assessment-Toolbox, piq, piqa, and clean-fid.

We also thank the following public repositories: MUSIQ, DBCNN, NIMA, HyperIQA, CNNIQA, WaDIQaM, PieAPP, paq2piq, MANIQA

:e-mail: Contact

If you have any questions, please email chaofenghust@gmail.com.
