
Visual Generation Quality Evaluation SDK


Evalytic

Evals for visual AI. Automated quality evaluation for AI-generated images and video.


Know if your AI-generated visuals are good before your users tell you they're not.

pip install evalytic

# Score any image you already have
evaly eval --image output.png --prompt "A sunset over mountains" --yes

# Compare models side by side
evaly bench -m flux-schnell -m flux-dev -m flux-pro \
  -p "A product photo on marble countertop" --yes

What It Does

Evalytic scores AI-generated images using two complementary approaches:

  • VLM Judges (Gemini, GPT, Claude, Ollama) evaluate semantic dimensions like prompt adherence, text rendering, and identity preservation
  • Local Metrics (sharpness, CLIP, LPIPS, ArcFace) run on your machine, for free, with no API key needed

Use both together or either one alone. Evalytic works with any image, whether it comes from a hosted provider or your own pipeline.
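The local-metrics side can be pictured with a minimal sketch: variance of the Laplacian is a common no-reference sharpness proxy of the kind such metrics use. This is an illustration under that assumption, not Evalytic's actual implementation.

```python
# Sketch of a local sharpness metric: variance of the Laplacian response.
# Higher variance = more high-frequency detail = sharper image.
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Apply a 4-neighbor Laplacian via array shifts and return its variance."""
    lap = (
        -4 * gray
        + np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
        + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1)
    )
    return float(lap.var())

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))       # high-frequency noise stands in for detail
blurry = np.full((64, 64), 0.5)    # a flat image has no detail at all
print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

A real pipeline would convert the generated image to a grayscale float array first; the comparison above just shows why the score separates detailed images from flat ones.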

Use Cases

  • Model Selection - Compare models with real prompts, pick the best one for your use case
  • Prompt Optimization - Measure how well models follow your prompts across dimensions
  • Regression Detection - Catch quality drops when models or prompts update
  • CI/CD Quality Gate - Block deploys when image quality falls below threshold

Features

  • 7 Semantic Dimensions - visual_quality, prompt_adherence, text_rendering, input_fidelity, transformation_quality, artifact_detection, identity_preservation
  • Consensus Judging - Multi-judge scoring with automatic agreement analysis

Quickstart

1. Install

pip install evalytic

2. See Real Examples (no API key needed)

evaly demo              # Opens showcase with 4 real benchmark case studies
evaly demo face         # Face identity preservation comparison
evaly demo flagship     # Flux Schnell vs Dev vs Pro cost/quality

3. Score an Existing Image

# Local metrics only (free, no API key)
evaly eval --image output.png --prompt "A sunset over mountains" --no-judge

# With VLM judge
export GEMINI_API_KEY=your_gemini_key
evaly eval --image output.png --prompt "A sunset over mountains" --yes

4. Benchmark Models

export FAL_KEY=your_fal_key

# Text-to-image
evaly bench -m flux-schnell -m flux-dev -m flux-pro \
  -p "A cat sitting on a windowsill" --yes

# Image-to-image
evaly bench -m flux-kontext -m seedream-edit -m reve-edit \
  --inputs product.jpg -p "Place on a marble countertop" --yes

# Metrics only, no VLM judge
evaly bench -m flux-schnell -m flux-dev -p "A cat" --no-judge

5. Interactive Setup

evaly init   # Guided setup: use case, API keys, config file

CLI Commands

Command       Description
evaly init    Interactive setup wizard
evaly demo    Browse real benchmark showcases (no API key needed)
evaly bench   Generate, score, and report in one command
evaly eval    Score a single image without generation
evaly gate    CI/CD quality gate with pass/fail exit codes
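The gate's contract is its exit code: 0 means quality passed, non-zero means it did not, which is what lets CI block a deploy. The sketch below shows that pattern with a stand-in command in place of a real `evaly gate` invocation, so it runs anywhere; any specific gate flags would be assumptions here.

```python
# Sketch of wiring a quality gate into CI via exit codes.
import subprocess
import sys

def run_gate(cmd: list[str]) -> bool:
    """Return True when the gate command exits 0 (quality passed)."""
    return subprocess.run(cmd).returncode == 0

# Stand-ins for an `evaly gate ...` call, so the sketch is self-contained:
passed = run_gate([sys.executable, "-c", "raise SystemExit(0)"])
failed = run_gate([sys.executable, "-c", "raise SystemExit(1)"])
print(passed, failed)  # True False
```

In a CI config you would simply run the gate command as a step; most CI systems fail the job automatically on a non-zero exit code, no wrapper needed.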

Judges

Any VLM that can analyze images works as a judge:

evaly bench -m flux-schnell -p "A cat" -j gemini-2.5-flash            # Default
evaly bench -m flux-schnell -p "A cat" -j openai/gpt-5.2              # OpenAI
evaly bench -m flux-schnell -p "A cat" -j anthropic/claude-sonnet-4-6 # Anthropic
evaly bench -m flux-schnell -p "A cat" -j fal/gemini-2.5-flash        # Via fal.ai (single key)
evaly bench -m flux-schnell -p "A cat" -j ollama/qwen2.5-vl:7b        # Local
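The `-j` values above follow a `provider/model` convention, with a bare model name defaulting to Gemini. A small sketch of how such a spec could be parsed (an illustration of the convention, not Evalytic's internals):

```python
# Parse a judge spec: "provider/model", or a bare model name with a default provider.
def parse_judge(spec: str, default_provider: str = "gemini") -> tuple[str, str]:
    provider, sep, model = spec.partition("/")
    if not sep:                       # no slash: bare model name
        return default_provider, spec
    return provider, model

print(parse_judge("gemini-2.5-flash"))      # ('gemini', 'gemini-2.5-flash')
print(parse_judge("openai/gpt-5.2"))        # ('openai', 'gpt-5.2')
print(parse_judge("ollama/qwen2.5-vl:7b"))  # ('ollama', 'qwen2.5-vl:7b')
```

Note that only the first slash separates provider from model, so model names containing `:` tags (as with Ollama) pass through unchanged.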

Consensus Mode

Use multiple judges for more reliable scores:

evaly bench -m flux-schnell -p "A cat" \
  --judges "gemini-2.5-flash,openai/gpt-5.2"

Two judges score in parallel. If they disagree, a third breaks the tie.
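The consensus pattern can be sketched in a few lines: average the two scores when they roughly agree, and let a third judge's score break the tie via the median when they do not. The function name, the disagreement margin, and the use of a median are assumptions for illustration, not Evalytic's exact algorithm.

```python
# Sketch of two-judge consensus with a third judge as tie-breaker.
from statistics import median

def consensus(score_a: float, score_b: float, tiebreaker, margin: float = 1.0) -> float:
    """Average two judge scores; call a third judge only on disagreement."""
    if abs(score_a - score_b) <= margin:
        return (score_a + score_b) / 2
    return median([score_a, score_b, tiebreaker()])

# Agreement within the margin: simple average, tie-breaker never runs.
print(consensus(8.0, 8.5, lambda: 0.0))   # 8.25
# Disagreement: the third judge's 7.0 is the median of (9.0, 5.0, 7.0).
print(consensus(9.0, 5.0, lambda: 7.0))   # 7.0
```

Taking the median of three scores discards the outlier, which is what makes a third opinion useful when the first two diverge.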

Optional Extras

pip install "evalytic[metrics]"  # CLIP Score + LPIPS + ArcFace (~2GB)
pip install "evalytic[all]"      # Everything

Configuration

Create evalytic.toml in your project root:

[keys]
fal = "your_fal_key"
gemini = "your_gemini_key"

[bench]
judge = "gemini-2.5-flash"
dimensions = ["visual_quality", "prompt_adherence"]
concurrency = 4

[bench.dimension_weights]
input_fidelity = 0.5
visual_quality = 0.1

Documentation

Full docs at docs.evalytic.ai

License

MIT
