A Python package for analyzing image quality.

Image Quality Checker

This package provides a comprehensive image quality assessment tool that analyzes various aspects of an image to generate a final quality score. It leverages OpenCV, NumPy, SciPy, and scikit-learn for image processing and analysis.

Features

  • Blur Detection: Calculates Laplacian variance to estimate image blurriness. Adapts the blur threshold based on whether the image contains a face (portrait mode).
  • Resolution Analysis: Determines the image resolution (height * width).
  • Brightness and Contrast: Measures the mean (brightness) and standard deviation (contrast) of the grayscale image.
  • Motion Blur Estimation: Analyzes the frequency domain using the Discrete Fourier Transform (DFT) to estimate motion blur.
  • Edge Density: Calculates the density of edges in the image using the Canny edge detector.
  • Color Standard Deviation: Evaluates the standard deviation of color channels in the HSV color space.
  • Entropy: Computes the entropy of the grayscale image histogram, providing a measure of image complexity.
  • Dominant Color Check: Uses K-means clustering to determine if a single color dominates the image (more than 93% of pixels). This helps identify images that are mostly uniform (and likely uninformative).
  • Foreground Cropping: Attempts to crop the image to the main foreground object, improving analysis accuracy by focusing on the relevant region.
  • Small Image Bypass: Images smaller than 3 KB automatically pass, since processing such small images is often unnecessary or unreliable.
  • Weighted Final Score: Combines all individual metrics into a single, weighted quality score, normalized to a range between 0 and 1. Higher scores indicate better quality.
  • Error Handling: Includes a try-except block to gracefully handle potential errors during image processing.
  • Face Detection: Uses Haar cascades to detect faces in the image, which switches the blur check into portrait mode.
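The Laplacian-variance measure behind the blur check can be sketched without OpenCV. This is an illustrative NumPy-only version (the package itself uses OpenCV), and laplacian_variance is a hypothetical name, not the package's API:

```python
import numpy as np

# 3x3 Laplacian kernel, the same operator cv2.Laplacian applies by default
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian response; higher values mean sharper images."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    # Correlate the kernel with the image (valid region only, no padding)
    for i in range(3):
        for j in range(3):
            out += LAPLACIAN[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())
```

A perfectly flat image scores 0.0, while high-frequency detail drives the variance up; the package compares this value against blur_threshold (30 or 60).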

Detailed Blur Analysis

  • Blur Detection: Uses the Laplacian variance method; higher variance indicates sharper regions.

  • Blur Ratio: Calculates the percentage of blurry blocks relative to the total number of blocks in the image.

  • Blur Classification: Applies thresholds on the blur ratio, together with connected-component analysis, for "Portrait-like" classification.

  • Blur Map: Generates a grayscale map in which blurry regions are white and sharp regions are black.

  • Overall Blur Assessment: Determines whether the image is blurry overall based on the blur ratio.
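Putting those steps together, a block-wise sketch might look like the following. This is a NumPy-only approximation; the block size and variance threshold here are illustrative, not the package's actual values:

```python
import numpy as np

def laplacian_var(gray):
    """Variance of a 3x3 Laplacian response over one block."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return out.var()

def blur_map_and_ratio(gray, block=32, threshold=100.0):
    """Mark each block blurry (white, 255) or sharp (black, 0) and return
    the blur map plus the percentage of blurry blocks."""
    h, w = gray.shape
    blur_map = np.zeros((h, w), dtype=np.uint8)
    blurry = total = 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            total += 1
            if laplacian_var(gray[y:y + block, x:x + block]) < threshold:
                blur_map[y:y + block, x:x + block] = 255
                blurry += 1
    blur_ratio = 100.0 * blurry / max(total, 1)
    return blur_map, blur_ratio
```

Thresholding the returned blur_ratio then gives the overall blur verdict.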

Installation

pip install image-quality-analysis

These dependencies are typically installed automatically with the pip command above.

Usage: Quality Analysis

from image_quality_checker import image_quality_score

# Load an image and read it as bytes
image_path = "path/to/your/image.jpg"  # Replace with the actual path
with open(image_path, "rb") as img_file:
    image_bytes = img_file.read()

# Check image quality
result = image_quality_score(image_bytes)
print("result", result)

# Handle the three possible return types before comparing numerically
if isinstance(result, Exception):
    print(f"An error occurred: {result}")
elif result is True:
    print("Image is predominantly a single color, very small, or cannot be decoded.")
elif result < 0.5:  # You might choose a different threshold
    print("Image quality is low.")
else:
    print("Image quality is acceptable.")

Usage: Blur Analysis

from image_quality_checker import analyze_blur

image_path = "path/to/your/image.jpg"  # Replace with the actual path

result = analyze_blur(image_path)

print(f"Blur Ratio: {result['blur_ratio']:.2f}%")
print(f"Category: {result['category']}")
print(f"Overall Blur: {result['overall_blur']}")

Explanation:

  1. Import the function: image_quality_score comes from the installed image_quality_checker package.

  2. Load the image: Reads the image file as bytes. This is important because the function is designed to work with byte data (e.g., from a network request or database).

  3. Call image_quality_score: Passes the image bytes to the function, which performs all the quality checks.

  4. Interpret the result:

    • If a numerical score is returned, it represents the quality. You'll need to determine a suitable threshold (e.g., 0.5) based on your application's requirements.
    • If True is returned, the image is either predominantly a single color, very small (<3KB), or the image data is invalid (cannot be decoded).
    • If an Exception is returned, an error occurred during processing.
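These three cases can be folded into one place. The interpret_score wrapper below is a hypothetical convenience helper, not part of the package's API:

```python
def interpret_score(result, threshold=0.5):
    """Map image_quality_score's mixed return value (float, True, or an
    Exception instance) to a uniform (is_ok, reason) pair."""
    if isinstance(result, Exception):
        return False, f"error: {result}"
    if result is True:
        return False, "degenerate image (single color, <3KB, or undecodable)"
    return result >= threshold, f"score={result:.2f}"
```

For example, interpret_score(0.8) returns (True, "score=0.80"), while both True and an Exception map to (False, reason).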

Function: image_quality_score(image_bytes)

  • Input: image_bytes (bytes): The image data as a byte string.
  • Output:
    • float: A normalized quality score between 0 and 1 (inclusive) if the image can be processed successfully.
    • True: If the image is predominantly a single color, too small (<3KB), or the image data is invalid (cannot be decoded).
    • Exception: If any Exception is raised.

Function: crop_foreground(img)

  • Input: img (NumPy array): The input image as a NumPy array (OpenCV image format).
  • Output:
    • NumPy array: The cropped image, focusing on the largest detected foreground object. If no foreground object is found, returns the original image.
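The real crop_foreground relies on edge detection and contour analysis; the simplified sketch below approximates the same idea with a plain intensity mask and a bounding box (illustrative only, not the package's implementation):

```python
import numpy as np

def crop_foreground_sketch(img, thresh=10):
    """Crop to the bounding box of pixels brighter than thresh; return the
    original image unchanged when no such pixels exist."""
    gray = img.mean(axis=2) if img.ndim == 3 else img
    mask = gray > thresh
    if not mask.any():
        return img  # no foreground found: fall back to the full image
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0:r1 + 1, c0:c1 + 1]
```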

Weights and Max Values

The final score is calculated using weighted contributions from individual metrics:

Metric          Weight   Max Value (for Normalization)
blur_score      0.1      blur_threshold (30 or 60)
resolution      0.1      1000000
brightness      0.1      255
contrast        0.1      255
motion_blur     0.1      100
edge_density    0.25     0.05
color_std_dev   0.15     100
entropy         0.1      8.0

Each metric is normalized by dividing its value by the corresponding max_value (and clamped to 1.0).
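The normalization and weighting can be reproduced directly from the table. The sketch below assumes blur_threshold = 60 (the table allows 30 or 60, depending on face detection), and weighted_quality_score is an illustrative name:

```python
# Weights and normalization caps taken from the table above
WEIGHTS = {"blur_score": 0.1, "resolution": 0.1, "brightness": 0.1,
           "contrast": 0.1, "motion_blur": 0.1, "edge_density": 0.25,
           "color_std_dev": 0.15, "entropy": 0.1}
MAX_VALUES = {"blur_score": 60, "resolution": 1_000_000, "brightness": 255,
              "contrast": 255, "motion_blur": 100, "edge_density": 0.05,
              "color_std_dev": 100, "entropy": 8.0}

def weighted_quality_score(metrics):
    """Normalize each metric by its max value, clamp at 1.0, and sum the
    weighted contributions into a final score between 0 and 1."""
    score = 0.0
    for name, weight in WEIGHTS.items():
        normalized = min(metrics[name] / MAX_VALUES[name], 1.0)
        score += weight * normalized
    return score
```

Because the weights sum to 1.0, an image at or above every cap scores 1.0 and an image at zero on every metric scores 0.0.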

Notes

  • The Haar cascade classifier (haarcascade_frontalface_default.xml) is used for face detection and must be available (it's usually included with OpenCV).
  • The choice of thresholds (e.g., blur_threshold, the cut-off value for the final score) may need adjustment for your application's requirements and the types of images you process. Experimentation is key.
  • The crop_foreground function uses basic edge detection and contour analysis. For more robust foreground segmentation, consider using more advanced techniques (e.g., GrabCut, deep learning-based segmentation).
  • Consider logging the individual metric values for more detailed analysis and debugging.
  • The package also checks file size: anything below 3 KB automatically passes rather than going through the full analysis.
  • Images are resized to 256x256 for consistency in processing.
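The size bypass from the notes reduces to a single length check (assuming 3 KB means 3 × 1024 bytes; the exact constant is not documented here):

```python
def small_image_bypass(image_bytes: bytes, min_size: int = 3 * 1024) -> bool:
    """Return True when the <3 KB bypass applies: such images skip the
    full analysis and are treated as passing."""
    return len(image_bytes) < min_size
```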

License

This project is licensed under the Creative Commons Attribution-NoDerivatives (CC BY-ND) License.
