
vIQA — volumetric Image Quality Assessment


Project status: Active – the project has reached a stable, usable state and is being actively developed.


vIQA provides an extensive image quality assessment suite for 2D images and 3D volumes as a Python package. Image Quality Assessment (IQA) is a field of research that aims to quantify the quality of an image. This is usually done by comparing the image to a reference image (full-reference metrics), but it can also be done by evaluating the image on its own (no-reference metrics). The reference is usually the original image, but it can also be another image that is considered to be of high quality. The comparison is done by calculating a metric that quantifies the difference between the two images, or by calculating a metric on the image itself. Such quality metrics are used in various fields, such as medical imaging, computer vision, and image processing. For example, the efficiency of an image compression algorithm can be evaluated by comparing the compressed image to the original.

This package implements several full-reference metrics to compare two images or volumes. In addition, some no-reference metrics are implemented that evaluate a single image or volume.
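For a first impression of the full-reference workflow, the following minimal sketch compares two synthetic volumes with PSNR; the random NumPy arrays are placeholders for real image data, not part of the package:

import numpy as np
import viqa

# synthetic 8-bit volumes standing in for a reference and a modified image
img_r = np.random.randint(0, 256, size=(64, 64, 64), dtype=np.uint8)
img_m = np.clip(img_r.astype(int) + np.random.randint(-5, 6, size=img_r.shape), 0, 255).astype(np.uint8)

# full-reference metric: needs both the reference and the modified image
psnr = viqa.PSNR(data_range=255)
score = psnr.score(img_r, img_m)
psnr.print_score(decimals=2)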

The metrics used are:

  • Peak Signal to Noise Ratio (PSNR)
  • Root Mean Square Error (RMSE)
  • Universal Quality Index (UQI) [^1]
  • Structured Similarity (SSIM) [^2]
  • Multi-Scale Structural Similarity (MS-SSIM) [^3]
  • Feature Similarity Index (FSIM) [^4]
  • Visual Information Fidelity in pixel domain (VIFp) [^5]

[!CAUTION] The calculated values for VIFp are probably not correct in this implementation. Those values should be treated with caution as further testing is required.

  • Visual Saliency Index (VSI) [^6]

[!WARNING] The original metric supports RGB images only. This implementation can work with grayscale images by copying the luminance channel 3 times.

  • Most Apparent Distortion (MAD) [^7]
  • Gradient Similarity Measure (GSM) [^8]

[!CAUTION] This metric is not yet tested. It should only be used for experimental purposes.

  • Contrast to Noise Ratio (CNR) [^9]
  • Signal to Noise Ratio (SNR)
  • Q-Measure [^10]

Overview

| Metric | Name | Type | Dimensional behaviour | Colour behaviour | Range (different/worst - identical/best) | Tested | Validated | Reference |
|---|---|---|---|---|---|---|---|---|
| PSNR | Peak Signal to Noise Ratio | FR | 3D native | :heavy_check_mark: | $[0, \infty)$ | :heavy_check_mark: | :heavy_check_mark: | |
| RMSE | Root Mean Square Error | FR | 3D native | :heavy_check_mark: | $(\infty, 0]$ | :heavy_check_mark: | :heavy_check_mark: | |
| UQI [^a] | Universal Quality Index | FR | 3D native | (:heavy_check_mark:) [^b] | $[-1, 1]$ | :x: | (:heavy_check_mark:) [^c] | [^1] |
| SSIM | Structured Similarity | FR | 3D native | (:heavy_check_mark:) [^b] | $[-1, 1]$ [^d] | :heavy_check_mark: | :heavy_check_mark: | [^2] |
| MS-SSIM | Multi-Scale Structural Similarity | FR | 3D slicing | :question: | $[0, 1]$ | :x: | :heavy_check_mark: | [^3] |
| FSIM | Feature Similarity Index | FR | 3D slicing | :heavy_check_mark: | $[0, 1]$ | :heavy_check_mark: | :heavy_check_mark: | [^4] |
| VIFp | Visual Information Fidelity in pixel domain | FR | 3D slicing | :question: | $[0, \infty)$ [^e] | :x: | :x: | [^5] |
| VSI | Visual Saliency Index | FR | 3D slicing | :heavy_check_mark: [^f] | $[0, 1]$ | :x: | :x: | [^6] |
| MAD | Most Apparent Distortion | FR | 3D slicing | | $[0, \infty)$ | :heavy_check_mark: | :x: | [^7] |
| GSM | Gradient Similarity | FR | 3D native or slicing | | $[0, 1]$ | :x: | :x: | [^8] |
| CNR | Contrast to Noise Ratio | NR | 3D native | | $[0, \infty)$ | :heavy_check_mark: | :x: | [^9] |
| SNR | Signal to Noise Ratio | NR | 3D native | :heavy_check_mark: | $[0, \infty)$ | :heavy_check_mark: | :x: | |
| Q-Measure | Q-Measure | NR | 3D only [^g] | :x: | $[0, \infty)$ | :x: | :x: | [^10] |

[^a]: UQI is a special case of SSIM. Also see [^2].
[^b]: The metric is calculated channel-wise for color images. The values are then averaged after weighting.
[^c]: As UQI is a special case of SSIM, the validation of SSIM is also valid for UQI.
[^d]: The range for SSIM is given as $[-1, 1]$, but is usually $[0, 1]$ in practice.
[^e]: Normally $[0, 1]$, but can be higher than 1 for modified images with higher contrast than reference images.
[^f]: The original metric supports RGB images only. This implementation can work with grayscale images by copying the luminance channel 3 times.
[^g]: The Q-Measure is a special metric designed for CT images. Therefore it only works with 3D volumes.
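The difference between "3D native" and "3D slicing" shows up in how a metric is called on volumes: native metrics operate on the whole volume at once, while slicing metrics are evaluated slice-wise along one axis. A short sketch, assuming the dim argument (taken from the MAD example under Usage) selects the slicing axis:

import numpy as np
import viqa

# placeholder volumes; in practice load them with viqa.load_data (see Usage)
img_r = np.random.randint(0, 256, size=(32, 32, 32), dtype=np.uint8)
img_m = np.random.randint(0, 256, size=(32, 32, 32), dtype=np.uint8)

rmse = viqa.RMSE()              # 3D native: scored on the volume as a whole
score_rmse = rmse.score(img_r, img_m)

mad = viqa.MAD(data_range=255)  # 3D slicing: scored slice-wise
score_mad = mad.score(img_r, img_m, dim=2)  # dim chooses the slicing axis (assumption based on the MAD example)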

Documentation

The API documentation can be found here.

Requirements

The following packages have to be installed:

  • matplotlib
  • nibabel
  • numpy
  • piq
  • pytorch
  • scikit-image
  • scipy
  • tqdm
  • (jupyter) if you want to use the provided notebook

Installation

Use either pip

pip install viqa

or conda

conda install -c conda-forge viqa

[!IMPORTANT] The package is currently in development and not yet available on conda-forge.

Usage

Workflow

Images are first loaded from .raw files, or from .mhd files and their corresponding .raw file, normalized to the chosen data range (if the parameter normalize=True is set), and then compared. The scores are calculated and can be printed. If paths are passed instead of image arrays, the file names must denote the bit depth as a suffix (e.g. _8bit.raw, _16bit.mhd) and include the image dimensions (e.g. 512x512x512). The images are assumed to be grayscale; support for color images is planned for later versions.

By default, the metrics calculate scores for an 8-bit data range (0-255). For some metrics the resulting score differs between data ranges, so when calculating several metrics for the same image, the same data range should be used for all of them. The data range can be changed by setting the parameter data_range for each metric. This parameter primarily affects the loading behaviour of the class instances when the viqa.utils.load_data function is not used directly (as described further below), but for some metrics (e.g. PSNR) setting the data range is also necessary to calculate the score.
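A condensed sketch of that loading step, using a placeholder path that follows the naming convention; data_range=255 is chosen here to match the 8-bit default used by the metrics:

from viqa import load_data

# file name encodes the dimensions ("512x512x512") and the bit depth ("_16bit")
file_path = 'path/to/reference_image_512x512x512_16bit.raw'

# load the volume and normalize it to the default 8-bit range (0-255)
img = load_data(file_path, data_range=255, normalize=True)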

Examples

Better:

import viqa
from viqa import load_data, normalize_data

# load images
file_path_img_r = 'path/to/reference_image_8bit_512x512x512.raw'
file_path_img_m = 'path/to/modified_image_8bit_512x512x512.raw'
img_r = load_data(
  file_path_img_r,
  data_range=1,
  normalize=False,
)  # data_range ignored due to normalize=False
img_m = load_data(file_path_img_m)  # per default: normalize=False
# --> both images are loaded as 8-bit images

# calculate and print RMSE score
rmse = viqa.RMSE()
score_rmse = rmse.score(img_r, img_m)  # RMSE does not need any parameters
rmse.print_score(decimals=2)

# normalize to 16-bit
img_r = normalize_data(img_r, data_range_output=(0, 65535))
img_m = load_data(img_m, data_range=65535, normalize=True)
# --> both functions have the same effect

# calculate and print PSNR score
psnr = viqa.PSNR(data_range=65535)  # PSNR needs data_range to calculate the score
score_psnr = psnr.score(img_r, img_m)
psnr.print_score(decimals=2)

# set optional parameters for MAD as dict
calc_parameters = {
    'block_size': 16,
    'block_overlap': 0.75,
    'beta_1': 0.467,
    'beta_2': 0.130,
    'luminance_function': {'b': 0, 'k': 0.02874, 'gamma': 2.2},
    'orientations_num': 4,
    'scales_num': 5,
    'weights': [0.5, 0.75, 1, 5, 6]
}

# calculate and print MAD score
mad = viqa.MAD(data_range=65535)  # MAD needs data_range to calculate the score
score_mad = mad.score(img_r, img_m, dim=2, **calc_parameters)
mad.print_score(decimals=2)

Possible, but worse (recommended only if you want to calculate a single metric):

import viqa

file_path_img_r = 'path/to/reference_image_512x512x512_16bit.raw'
file_path_img_m = 'path/to/modified_image_512x512x512_16bit.raw'

load_parameters = {'data_range': 1, 'normalize': True}
# data_range is set to 1 to normalize the images
# to 0-1 and for calculation, if not set 255 would
# be used as default for loading and calculating
# the score

psnr = viqa.PSNR(**load_parameters)  # load_parameters necessary due to direct loading by class
# also PSNR needs data_range to calculate the score
# if images would not be normalized, data_range should be
# 65535 for 16-bit images for correct calculation
score = psnr.score(file_path_img_r, file_path_img_m)
# --> images are loaded as 16-bit images and normalized to 0-1 via the `load_data` function
#     called by the score method
psnr.print_score(decimals=2)

[!TIP] It is recommended to load the images with the viqa.utils.load_data function first and then pass the image arrays to the metric classes. You can also pass the image paths directly, in which case the images will be loaded with the given parameters. The latter workflow is only recommended if you want to calculate a single metric.

[!IMPORTANT] The current recommended usage files are: Image_Comparison.ipynb and Image_comparison_batch.ipynb.

For more examples, see the provided Jupyter notebooks and the documentation under API Reference.

TODO

  • Add metrics
    • Add SFF/IFS
    • Add Ma
    • Add PI
    • Add NIQE
  • Add tests
    • Add tests for RMSE
    • Add tests for PSNR
    • Add tests for SSIM
    • Add tests for MSSSIM
    • Add tests for FSIM
    • Add tests for VSI
    • Add tests for VIF
    • Add tests for MAD
    • Add tests for GSM
    • Add tests for CNR
    • Add tests for SNR
    • Add tests for Q-Measure
  • Add support for different data ranges
  • Validate metrics
  • Add color image support
  • Add support for printing values
    • Add support for .txt files
    • Add support for .csv files
  • Add support for fusions
    • Add support for linear combination
    • Add support for decision fusion

Contributing

See CONTRIBUTING.md for information on how to contribute to the project, and the development guide for further information.

License

BSD 3-Clause

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Contacts

Lukas Behammer, lukas.behammer@fh-wels.at

References

[^1]: Wang, Z., & Bovik, A. C. (2002). A universal image quality index. IEEE Signal Processing Letters, 9(3). https://doi.org/10.1109/97.995823
[^2]: Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612. https://doi.org/10.1109/TIP.2003.819861
[^3]: Wang, Z., Simoncelli, E. P., & Bovik, A. C. (2003). Multi-scale structural similarity for image quality assessment. The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 1298–1402. https://doi.org/10.1109/ACSSC.2003.1292216
[^4]: Zhang, L., Zhang, L., Mou, X., & Zhang, D. (2011). FSIM: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 20(8). https://doi.org/10.1109/TIP.2011.2109730
[^5]: Sheikh, H. R., & Bovik, A. C. (2006). Image information and visual quality. IEEE Transactions on Image Processing, 15(2), 430–444. https://doi.org/10.1109/TIP.2005.859378
[^6]: Zhang, L., Shen, Y., & Li, H. (2014). VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Transactions on Image Processing, 23(10), 4270–4281. https://doi.org/10.1109/TIP.2014.2346028
[^7]: Larson, E. C., & Chandler, D. M. (2010). Most apparent distortion: Full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging, 19(1), 011006. https://doi.org/10.1117/1.3267105
[^8]: Liu, A., Lin, W., & Narwaria, M. (2012). Image quality assessment based on gradient similarity. IEEE Transactions on Image Processing, 21(4), 1500–1512. https://doi.org/10.1109/TIP.2011.2175935
[^9]: Desai, N., Singh, A., & Valentino, D. J. (2010). Practical evaluation of image quality in computed radiographic (CR) imaging systems. Medical Imaging 2010: Physics of Medical Imaging, 7622, 76224Q. https://doi.org/10.1117/12.844640
[^10]: Reiter, M., Weiß, D., Gusenbauer, C., Erler, M., Kuhn, C., Kasperl, S., & Kastner, J. (2014). Evaluation of a histogram-based image quality measure for X-ray computed tomography. 5th Conference on Industrial Computed Tomography (iCT) 2014, 25-28 February 2014, Wels, Austria. e-Journal of Nondestructive Testing, 19(6). https://www.ndt.net/?id=15715
