PyTorch Toolbox for Image Quality Assessment

An IQA toolbox written in pure Python and PyTorch. Please refer to Awesome-Image-Quality-Assessment for a comprehensive survey of IQA methods, as well as download links for IQA datasets.


:open_book: Introduction

This is an image quality assessment toolbox written in pure Python and PyTorch. We provide reimplementations of many mainstream full-reference (FR) and no-reference (NR) metrics (results are calibrated against the official MATLAB scripts where they exist). With GPU acceleration, most of our implementations are much faster than MATLAB. Details of the supported methods and datasets are given below.

Supported methods and datasets:

| FR Method | Backward |
| --------- | -------- |
| AHIQ | :white_check_mark: |
| PieAPP | :white_check_mark: |
| LPIPS | :white_check_mark: |
| DISTS | :white_check_mark: |
| WaDIQaM | :white_check_mark: |
| CKDN [1] | :white_check_mark: |
| FSIM | :white_check_mark: |
| SSIM | :white_check_mark: |
| MS-SSIM | :white_check_mark: |
| CW-SSIM | :white_check_mark: |
| PSNR | :white_check_mark: |
| VIF | :white_check_mark: |
| GMSD | :white_check_mark: |
| NLPD | :white_check_mark: |
| VSI | :white_check_mark: |
| MAD | :white_check_mark: |

| NR Method | Backward |
| --------- | -------- |
| FID | :heavy_multiplication_x: |
| MANIQA | :white_check_mark: |
| MUSIQ | :white_check_mark: |
| DBCNN | :white_check_mark: |
| PaQ-2-PiQ | :white_check_mark: |
| HyperIQA | :white_check_mark: |
| NIMA | :white_check_mark: |
| WaDIQaM | :white_check_mark: |
| CNNIQA | :white_check_mark: |
| NRQM (Ma) [2] | :heavy_multiplication_x: |
| PI (Perceptual Index) | :heavy_multiplication_x: |
| BRISQUE | :white_check_mark: |
| ILNIQE | :white_check_mark: |
| NIQE | :white_check_mark: |

| Dataset | Type |
| ------- | ---- |
| FLIVE (PaQ-2-PiQ) | NR |
| SPAQ | NR / mobile |
| AVA | NR / aesthetic |
| PIPAL | FR |
| BAPPS | FR |
| PieAPP | FR |
| KADID-10k | FR |
| KonIQ-10k(++) | NR |
| LIVEChallenge | NR |
| LIVEM | FR |
| LIVE | FR |
| TID2013 | FR |
| TID2008 | FR |
| CSIQ | FR |

[1] This method uses the distorted image as reference. Please refer to the paper for details.
[2] Currently, only a naive random forest regression is implemented, which does not support backward propagation.


:triangular_flag_on_post: Updates/Changelog

  • Sep 1, 2022. 1) Add pretrained models for MANIQA and AHIQ. 2) Add dataset interfaces for PieAPP and PIPAL.
  • June 3, 2022. Add FID metric. See clean-fid for more details.
  • March 11, 2022. Add pretrained DBCNN and NIMA, and the official models of PieAPP and paq2piq.
  • More

:hourglass_flowing_sand: TODO List

  • :white_large_square: Add pretrained models on different datasets.

:zap: Quick Start

Dependencies and Installation

  • Ubuntu >= 18.04
  • Python >= 3.8
  • PyTorch >= 1.10
  • CUDA >= 10.2 (if using GPU)
```bash
# Install with pip
pip install pyiqa

# Install the latest GitHub version
pip uninstall pyiqa  # uninstall first if an older version is installed
pip install git+https://github.com/chaofengc/IQA-PyTorch.git

# Install via git clone
git clone https://github.com/chaofengc/IQA-PyTorch.git
cd IQA-PyTorch
pip install -r requirements.txt
python setup.py develop
```
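
To verify that the environment satisfies the requirements above, a minimal sanity-check sketch may help (the version parsing here is deliberately simple and assumes standard PyTorch version strings):

```python
# Minimal environment sanity check for the dependencies listed above.
import sys
import torch

assert sys.version_info >= (3, 8), 'Python >= 3.8 is required'

# strip any local build suffix such as "+cu113" before parsing
torch_version = tuple(int(x) for x in torch.__version__.split('+')[0].split('.')[:2])
assert torch_version >= (1, 10), 'PyTorch >= 1.10 is required'

print('CUDA available:', torch.cuda.is_available())  # GPU is optional but recommended
```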

Basic Usage

```python
import pyiqa
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# list all available metrics
print(pyiqa.list_models())

# create metric with default setting
iqa_metric = pyiqa.create_metric('lpips', device=device)
# Note that gradient propagation is disabled by default. Set as_loss=True to enable it as a loss function.
iqa_loss = pyiqa.create_metric('lpips', device=device, as_loss=True)

# create metric with custom setting
iqa_metric = pyiqa.create_metric('psnr', test_y_channel=True, color_space='ycbcr').to(device)

# check whether lower is better or higher is better
print(iqa_metric.lower_better)

# example of IQA score inference
# tensor inputs, img_tensor_x/y: (N, 3, H, W), RGB, 0 ~ 1
score_fr = iqa_metric(img_tensor_x, img_tensor_y)
score_nr = iqa_metric(img_tensor_x)

# image paths as inputs
score_fr = iqa_metric('./ResultsCalibra/dist_dir/I03.bmp', './ResultsCalibra/ref_dir/I03.bmp')

# For the FID metric, use directories or precomputed statistics as inputs.
# Refer to clean-fid for more details: https://github.com/GaParmar/clean-fid
fid_metric = pyiqa.create_metric('fid')
score = fid_metric('./ResultsCalibra/dist_dir/', './ResultsCalibra/ref_dir')
score = fid_metric('./ResultsCalibra/dist_dir/', dataset_name="FFHQ", dataset_res=1024, dataset_split="trainval70k")
```
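
Since metrics created with as_loss=True retain gradients, they can be used directly as optimization objectives. Below is a minimal, self-contained sketch that optimizes a random image toward a reference under LPIPS (the tensors are placeholders; LPIPS is lower-better, so we minimize it):

```python
import torch
import pyiqa

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
lpips_loss = pyiqa.create_metric('lpips', device=device, as_loss=True)

ref = torch.rand(1, 3, 256, 256, device=device)                       # placeholder reference, RGB, 0 ~ 1
pred = torch.rand(1, 3, 256, 256, device=device, requires_grad=True)  # image being optimized

optimizer = torch.optim.Adam([pred], lr=1e-2)
for step in range(100):
    optimizer.zero_grad()
    loss = lpips_loss(pred, ref)  # differentiable thanks to as_loss=True
    loss.backward()
    optimizer.step()
    pred.data.clamp_(0, 1)  # keep pixel values in the valid 0 ~ 1 range
```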

Example Test script

Example test script with input directory/images and reference directory/images.

```bash
# example for FR metric with dirs
python inference_iqa.py -m LPIPS[or lpips] -i ./ResultsCalibra/dist_dir[dist_img] -r ./ResultsCalibra/ref_dir[ref_img]

# example for NR metric with single image
python inference_iqa.py -m brisque -i ./ResultsCalibra/dist_dir/I03.bmp
```
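
The same batch evaluation can also be done from Python without the script. Here is a minimal sketch with an NR metric, assuming each call returns a single-element tensor (paths follow the examples above):

```python
from pathlib import Path

import pyiqa

metric = pyiqa.create_metric('brisque')
img_paths = sorted(Path('./ResultsCalibra/dist_dir').glob('*.bmp'))
scores = {p.name: metric(str(p)).item() for p in img_paths}

for name, score in scores.items():
    print(f'{name}: {score:.4f}')

# BRISQUE is lower-better; metric.lower_better confirms the direction
print('lower_better:', metric.lower_better)
print('mean score:', sum(scores.values()) / len(scores))
```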

:hammer_and_wrench: Train

Dataset Preparation

  • You only need to unzip the downloaded datasets from the official websites without any extra operation, and then make soft links of these dataset folders under the datasets/ folder. Download links are provided in Awesome-Image-Quality-Assessment.
  • We provide a common interface to load these datasets with prepared meta-information files and train/val/test split files, which can be downloaded from download_link and extracted to the datasets/ folder.

You may also use the following commands:

```bash
mkdir datasets && cd datasets

# make soft links of your dataset
ln -sf your/dataset/path datasetname

# download meta info files and train split files
wget https://github.com/chaofengc/IQA-PyTorch/releases/download/v0.1-weights/data_info_files.tgz
tar -xvf data_info_files.tgz
```

Examples of options for specific datasets can be found in ./options/default_dataset_opt.yml. Details of the dataloader interface and the meta-information files can be found in Dataset Preparation.

Example Train Script

Example commands to train DBCNN on the LIVEChallenge dataset:

```bash
# train for single experiment
python pyiqa/train.py -opt options/train/DBCNN/train_DBCNN.yml

# train N splits for small datasets
python pyiqa/train_nsplits.py -opt options/train/DBCNN/train_DBCNN.yml
```

:1st_place_medal: Benchmark Performances and Model Zoo

Results Calibration

Please refer to the results calibration to verify the correctness of our Python implementations against the official scripts in MATLAB or Python.

Performance Evaluation Protocol

We use official models for evaluation if available. Otherwise, we use the following settings to train and evaluate different models for simplicity and consistency:

| Metric Type | Train | Test | Results |
| ----------- | ----- | ---- | ------- |
| FR | KADID-10k | CSIQ, LIVE, TID2008, TID2013 | FR benchmark results |
| NR | KonIQ-10k | LIVEC, KonIQ-10k (official split), TID2013 | NR benchmark results |
| Aesthetic IQA | AVA | AVA (official split) | IAA benchmark results |

Basically, we use the largest existing datasets for training and report cross-dataset evaluation performance for fair comparison. The following models do not provide official weights and are retrained with our scripts:

| Metric Type | Model Names |
| ----------- | ----------- |
| FR | |
| NR | dbcnn |
| Aesthetic IQA | nima, nima-vgg16-ava |

Notes:

  • Due to the optimized training process, the performance of some retrained approaches may be higher than reported in the original papers.
  • Results on KonIQ-10k and AVA are both tested with the official splits.
  • NIMA is only applicable to the AVA dataset for now. We use inception_resnet_v2 for the default nima model.
  • MUSIQ is not included in the IAA benchmark because the train/split information of the official model is not available.

Benchmark Performance with Provided Script

Here is an example script to get performance benchmark on different datasets:

```bash
# NOTE: this script will test ALL specified metrics on ALL specified datasets
# Test default metrics on default datasets
python benchmark_results.py -m psnr ssim -d csiq tid2013 tid2008

# Test with your own options
python benchmark_results.py -m psnr --data_opt options/example_benchmark_data_opts.yml

python benchmark_results.py --metric_opt options/example_benchmark_metric_opts.yml -d tid2013 tid2008

python benchmark_results.py --metric_opt options/example_benchmark_metric_opts.yml --data_opt options/example_benchmark_data_opts.yml
```

:beers: Contribution

Any contributions to this repository are greatly appreciated. Please refer to the contribution instructions for guidance.

:scroll: License

This work is licensed under the NTU S-Lab License and the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


:bookmark_tabs: Citation

If you find our code helpful to your research, please consider citing:

```bibtex
@misc{pyiqa,
  title={{IQA-PyTorch}: PyTorch Toolbox for Image Quality Assessment},
  author={Chaofeng Chen and Jiadi Mo},
  year={2022},
  howpublished = "[Online]. Available: \url{https://github.com/chaofengc/IQA-PyTorch}"
}
```

:heart: Acknowledgement

The code architecture is borrowed from BasicSR. Several implementations are taken from IQA-optimization, Image-Quality-Assessment-Toolbox, piq, piqa, and clean-fid.

We also thank the following public repositories: MUSIQ, DBCNN, NIMA, HyperIQA, CNNIQA, WaDIQaM, PieAPP, paq2piq, and MANIQA.

:e-mail: Contact

If you have any questions, please email chaofenghust@gmail.com.
