
Measures and metrics for image2image tasks. PyTorch.


https://raw.githubusercontent.com/photosynthesis-team/piq/master/docs/source/_static/piq_logo_main.png

PyTorch Image Quality (PIQ) is not endorsed by Facebook, Inc.;

PyTorch, the PyTorch logo and any related marks are trademarks of Facebook, Inc.


PyTorch Image Quality (PIQ) is a collection of measures and metrics for image quality assessment. PIQ helps you concentrate on your experiments without boilerplate code. The library contains a set of measures and metrics that is continually being extended. For measures/metrics that can be used as loss functions, corresponding PyTorch modules are implemented.

We provide:

  • A unified interface that is easy to use and extend.

  • Written in pure PyTorch with a bare minimum of additional dependencies.

  • Extensive user input validation: your code will not crash in the middle of training.

  • Fast (GPU computations available) and reliable.

  • Most metrics can be backpropagated for model optimization.

  • Supports Python 3.6-3.8.

PIQ was initially named PhotoSynthesis.Metrics.

Installation

PyTorch Image Quality (PIQ) can be installed using pip, conda or git.

If you use pip, you can install it with:

$ pip install piq

If you use conda, you can install it with:

$ conda install piq -c photosynthesis-team -c conda-forge -c PyTorch

If you want to use the latest features straight from the master branch, clone the PIQ repo:

git clone https://github.com/photosynthesis-team/piq.git
cd piq
python setup.py install

Documentation

The full documentation is available at https://piq.readthedocs.io.

Usage Examples

Image-based metrics

This group of metrics (such as PSNR, SSIM, BRISQUE) takes images as input. We provide a functional interface, which returns a metric value, and a class interface, which allows any metric to be used as a loss function.

import torch
from piq import ssim, SSIMLoss

x = torch.rand(4, 3, 256, 256, requires_grad=True)
y = torch.rand(4, 3, 256, 256)

# Functional interface: returns the metric value as a tensor.
ssim_index: torch.Tensor = ssim(x, y, data_range=1.)

# Class interface: the same metric wrapped as a loss, so it can be backpropagated.
loss = SSIMLoss(data_range=1.)
output: torch.Tensor = loss(x, y)
output.backward()
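
No-reference metrics from this group, such as BRISQUE, score a single batch of images without a reference. A minimal sketch, assuming piq.brisque and BRISQUELoss follow the same functional/class pattern and data_range convention as above:

import torch
from piq import brisque, BRISQUELoss

x = torch.rand(4, 3, 256, 256, requires_grad=True)

# Functional interface: lower BRISQUE values indicate better perceptual quality.
brisque_index: torch.Tensor = brisque(x, data_range=1.)

# Class interface: usable as a loss and backpropagated, like SSIMLoss above.
loss = BRISQUELoss(data_range=1.)
loss(x).backward()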

For a full list of examples, see image metrics examples.

Feature-based metrics

This group of metrics (such as IS, FID, KID) takes lists of image features as input. Image features can be extracted with a separate feature extractor network or by using the compute_feats method of a metric class.

Note:

compute_feats consumes a data loader of a predefined format.

import torch
from torch.utils.data import DataLoader
from piq import FID

# first_dataset and second_dataset are placeholders for datasets in the
# expected format (see the sketch below).
first_dl, second_dl = DataLoader(first_dataset), DataLoader(second_dataset)
fid_metric = FID()
first_feats = fid_metric.compute_feats(first_dl)
second_feats = fid_metric.compute_feats(second_dl)
fid: torch.Tensor = fid_metric(first_feats, second_feats)
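
The exact loader format is defined by PIQ; as a rough sketch, assuming each batch is a dict with an 'images' key holding a float image tensor (check the PIQ documentation for the authoritative format), a compatible dataset might look like this:

import torch
from torch.utils.data import Dataset, DataLoader

class ImagesDataset(Dataset):
    """Hypothetical wrapper that yields items in the assumed format:
    a dict with an 'images' key holding a (C, H, W) float tensor in [0, 1]."""

    def __init__(self, images: torch.Tensor):
        self.images = images  # shape (N, C, H, W)

    def __len__(self) -> int:
        return self.images.shape[0]

    def __getitem__(self, index: int) -> dict:
        return {'images': self.images[index]}

first_dl = DataLoader(ImagesDataset(torch.rand(64, 3, 96, 96)), batch_size=16)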

If you already have image features, use the class interface for score computation:

import torch
from piq import MSID

x_feats = torch.rand(10000, 1024)
y_feats = torch.rand(10000, 1024)

# Any feature-based metric (here MSID) can be applied directly to precomputed features.
msid_metric = MSID()
msid: torch.Tensor = msid_metric(x_feats, y_feats)

For a full list of examples, see feature metrics examples.

List of metrics

Full Reference

Acronym  | Year | Metric
---------|------|---------------------------------------------------------------
PSNR     | -    | Peak Signal-to-Noise Ratio
SSIM     | 2003 | Structural Similarity
MS-SSIM  | 2004 | Multi-Scale Structural Similarity
VIFp     | 2004 | Visual Information Fidelity
FSIM     | 2011 | Feature Similarity Index Measure
IW-PSNR  | 2011 | Information Weighted PSNR
IW-SSIM  | 2011 | Information Weighted SSIM
SR-SIM   | 2012 | Spectral Residual Based Similarity
GMSD     | 2013 | Gradient Magnitude Similarity Deviation
VSI      | 2014 | Visual Saliency-induced Index
-        | 2016 | Content Score
-        | 2016 | Style Score
HaarPSI  | 2016 | Haar Perceptual Similarity Index
MDSI     | 2016 | Mean Deviation Similarity Index
MS-GMSD  | 2017 | Multi-Scale Gradient Magnitude Similarity Deviation
LPIPS    | 2018 | Learned Perceptual Image Patch Similarity
PieAPP   | 2018 | Perceptual Image-Error Assessment through Pairwise Preference
DISTS    | 2020 | Deep Image Structure and Texture Similarity

No Reference

Acronym | Year | Metric
--------|------|----------------------------------------------------
TV      | 1937 | Total Variation
BRISQUE | 2012 | Blind/Referenceless Image Spatial Quality Evaluator

Feature based

Acronym | Year | Metric
--------|------|-------------------------------
IS      | 2016 | Inception Score
FID     | 2017 | Frechet Inception Distance
GS      | 2018 | Geometry Score
KID     | 2018 | Kernel Inception Distance
MSID    | 2019 | Multi-Scale Intrinsic Distance
PR      | 2019 | Improved Precision and Recall

Benchmark

As part of our library we provide code to benchmark all metrics on a set of common Mean Opinion Score (MOS) databases. Currently, only TID2013 and KADID10k are supported. You need to download them separately and provide the path to the images as an argument to the script.

Here is an example of how to evaluate the SSIM and MS-SSIM metrics on the TID2013 dataset:

python3 tests/results_benchmark.py --dataset tid2013 --metrics SSIM MS-SSIM --path ~/datasets/tid2013 --batch_size 16

We report Spearman's rank correlation coefficient (SRCC) and Kendall rank correlation coefficient (KRCC). We do not report the Pearson linear correlation coefficient (PLCC) because it is highly dependent on the fitting method and is biased towards simple examples.

For metrics that accept either greyscale or colour images, the c suffix denotes the chromatic (colour) version.
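
For reference, the two reported correlations can be computed from predicted metric scores and subjective MOS values; a minimal sketch using scipy.stats (which is an assumption here, not part of PIQ):

import numpy as np
from scipy import stats

# Hypothetical arrays: metric scores predicted by PIQ and subjective MOS labels.
predicted_scores = np.array([0.91, 0.85, 0.40, 0.77, 0.62])
mos = np.array([6.8, 6.1, 2.3, 5.5, 4.9])

srcc = stats.spearmanr(predicted_scores, mos).correlation
krcc = stats.kendalltau(predicted_scores, mos).correlation
print(f"SRCC: {srcc:.4f}, KRCC: {krcc:.4f}")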

Values marked (PIQ) are computed with this library; the other columns give values reported in the literature, with the source shown in parentheses.

Acronym   | TID2013: SRCC / KRCC (PIQ) | TID2013: SRCC / KRCC       | KADID10k: SRCC / KRCC (PIQ) | KADID10k: SRCC / KRCC
----------|----------------------------|----------------------------|-----------------------------|--------------------------
PSNR      | 0.6869 / 0.4958            | 0.687 / 0.496 (TID2013)    | 0.6757 / 0.4876             | - / -
SSIM      | 0.7201 / 0.5271            | 0.637 / 0.464 (TID2013)    | 0.7242 / 0.5370             | 0.718 / 0.532 (KADID10k)
MS-SSIM   | 0.7983 / 0.5965            | 0.787 / 0.608 (TID2013)    | 0.8020 / 0.6088             | 0.802 / 0.609 (KADID10k)
VIFp      | 0.6102 / 0.4579            | 0.610 / 0.457 (TID2013)    | 0.6500 / 0.4770             | 0.650 / 0.477 (KADID10k)
FSIM      | 0.8015 / 0.6289            | 0.801 / 0.630 (TID2013)    | 0.8294 / 0.6390             | 0.829 / 0.639 (KADID10k)
FSIMc     | 0.8509 / 0.6665            | 0.851 / 0.667 (TID2013)    | 0.8537 / 0.6650             | 0.854 / 0.665 (KADID10k)
IW-PSNR   | - / -                      | 0.6913 / - (Eval2019)      | - / -                       | - / -
IW-SSIM   | - / -                      | 0.7779 / 0.5977 (Eval2019) | - / -                       | - / -
SR-SIM    | - / -                      | 0.8076 / 0.6406 (Eval2019) | - / -                       | 0.839 / 0.652 (KADID10k)
SR-SIMc   | - / -                      | - / -                      | - / -                       | - / -
GMSD      | 0.8038 / 0.6334            | 0.8030 / 0.6352 (MS-GMSD)  | 0.8474 / 0.6640             | 0.847 / 0.664 (KADID10k)
VSI       | 0.8949 / 0.7159            | 0.8965 / 0.7183 (Eval2019) | 0.8780 / 0.6899             | 0.861 / 0.678 (KADID10k)
Content   | 0.7049 / 0.5173            | - / -                      | 0.7237 / 0.5326             | - / -
Style     | 0.5384 / 0.3720            | - / -                      | 0.6470 / 0.4646             | - / -
HaarPSI   | 0.8732 / 0.6923            | 0.8732 / 0.6923 (HaarPSI)  | 0.8849 / 0.6995             | 0.885 / 0.699 (KADID10k)
MDSI      | 0.8899 / 0.7123            | 0.8899 / 0.7123 (MDSI)     | 0.8853 / 0.7023             | 0.885 / 0.702 (KADID10k)
MS-GMSD   | 0.8121 / 0.6455            | 0.8139 / 0.6467 (MS-GMSD)  | 0.8523 / 0.6692             | - / -
MS-GMSDc  | 0.8875 / 0.7105            | 0.687 / 0.496 (MS-GMSD)    | 0.8697 / 0.6831             | - / -
LPIPS-VGG | 0.6696 / 0.4970            | 0.670 / 0.497 (DISTS)      | 0.7201 / 0.5313             | - / -
PieAPP    | 0.8355 / 0.6495            | 0.875 / 0.710 (DISTS)      | 0.8655 / 0.6758             | - / -
DISTS     | 0.8051 / 0.6133            | 0.830 / 0.639 (DISTS)      | 0.8749 / 0.6947             | - / -

Assertions

In PIQ we use assertions to raise meaningful messages when a component does not receive an input of the expected type. This makes prototyping and debugging easier, but it may hurt performance. To disable all checks, use the Python -O flag: python -O your_script.py
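
As a rough illustration (the check below is hypothetical, not PIQ's actual validation code), such an assert raises a readable message in normal runs and is stripped entirely under python -O:

import torch

def check_input(x: torch.Tensor) -> None:
    # Hypothetical example of the style of checks PIQ performs;
    # with `python -O`, asserts are removed and this becomes a no-op.
    assert isinstance(x, torch.Tensor), f"Expected torch.Tensor, got {type(x)}"
    assert x.dim() == 4, f"Expected 4D tensor (N, C, H, W), got {x.dim()}D"

check_input(torch.rand(4, 3, 256, 256))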

Roadmap

See the open issues for a list of proposed features and known issues.

Contributing

If you would like to help develop this library, you’ll find more information in our contribution guide.

Citation

If you use PIQ in your project, please cite it as follows:

@misc{piq,
  title={{PyTorch Image Quality}: Metrics and Measure for Image Quality Assessment},
  url={https://github.com/photosynthesis-team/piq},
  note={Open-source software available at https://github.com/photosynthesis-team/piq},
  author={Sergey Kastryulin and Dzhamil Zakirov and Denis Prokopenko},
  year={2019},
}

Contacts

Sergey Kastryulin - @snk4tr - snk4tr@gmail.com

Djamil Zakirov - @zakajd - djamilzak@gmail.com

Denis Prokopenko - @denproc - d.prokopenko@outlook.com

