
Evalytic

Evals for visual AI. Automated quality evaluation for AI-generated images and video.


Know if your AI-generated visuals are good before your users tell you they're not.

pip install evalytic

# Score any image you already have
evaly eval --image output.png --prompt "A sunset over mountains" --yes

# Compare models side by side
evaly bench -m flux-schnell -m flux-dev -m flux-pro \
  -p "A product photo on marble countertop" --yes

What It Does

Evalytic scores AI-generated images using two complementary approaches:

  • VLM Judges (Gemini, GPT, Claude, Ollama) evaluate semantic dimensions like prompt adherence, text rendering, and identity preservation
  • Local Metrics (sharpness, CLIP, LPIPS, ArcFace) run on your machine for free, with no API key needed

Use both together or either one alone. Evalytic works with any image, from any provider or from your own pipeline.
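As an illustration of how the two signals might be combined, here is a minimal sketch (not the SDK's actual scoring code; the function name, default weight, and 0-10 scale are assumptions):

```python
# Toy sketch: blend a semantic VLM judge score with a local metric score.
# The 70/30 split and the 0-10 scale are illustrative assumptions.
def blend_scores(judge_score: float, metric_score: float,
                 judge_weight: float = 0.7) -> float:
    """Weighted average of a VLM judge score and a local metric score."""
    return judge_weight * judge_score + (1 - judge_weight) * metric_score
```

In practice you might weight the judge more heavily for semantic dimensions and the local metrics more heavily for low-level quality checks.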

Use Cases

  • Model Selection - Compare models on real prompts and pick the best one for your use case
  • Prompt Optimization - Measure how well models follow your prompts across dimensions
  • Regression Detection - Catch quality drops when models or prompts change
  • CI/CD Quality Gate - Block deploys when image quality falls below a threshold

Features

  • 7 Semantic Dimensions - visual_quality, prompt_adherence, text_rendering, input_fidelity, transformation_quality, artifact_detection, identity_preservation
  • Consensus Judging - Multi-judge scoring with automatic agreement analysis
Quickstart

1. Install

pip install evalytic

2. See Real Examples (no API key needed)

evaly demo              # Opens showcase with 4 real benchmark case studies
evaly demo face         # Face identity preservation comparison
evaly demo flagship     # Flux Schnell vs Dev vs Pro cost/quality

3. Score an Existing Image

# Local metrics only (free, no API key)
evaly eval --image output.png --prompt "A sunset over mountains" --no-judge

# With VLM judge
export GEMINI_API_KEY=your_gemini_key
evaly eval --image output.png --prompt "A sunset over mountains" --yes

4. Benchmark Models

export FAL_KEY=your_fal_key

# Text-to-image
evaly bench -m flux-schnell -m flux-dev -m flux-pro \
  -p "A cat sitting on a windowsill" --yes

# Image-to-image
evaly bench -m flux-kontext -m seedream-edit -m reve-edit \
  --inputs product.jpg -p "Place on a marble countertop" --yes

# Metrics only, no VLM judge
evaly bench -m flux-schnell -m flux-dev -p "A cat" --no-judge

5. Interactive Setup

evaly init   # Guided setup: use case, API keys, config file

CLI Commands

Command      Description
-----------  ---------------------------------------------------
evaly init   Interactive setup wizard
evaly demo   Browse real benchmark showcases (no API key needed)
evaly bench  Generate, score, and report in one command
evaly eval   Score a single image without generation
evaly gate   CI/CD quality gate with pass/fail exit codes
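The pass/fail behavior of a quality gate can be sketched like this (a toy illustration only, assuming a 0-10 scale and an averaged overall score; the threshold and dictionary shape are assumptions, not the SDK's internals):

```python
# Toy sketch of a CI/CD quality gate: return 0 (pass) when the mean
# dimension score clears the threshold, 1 (fail) otherwise -- mirroring
# how a gate command would map quality to a process exit code.
def gate(scores: dict[str, float], min_score: float = 7.0) -> int:
    overall = sum(scores.values()) / len(scores)
    return 0 if overall >= min_score else 1
```

In a CI pipeline, a nonzero exit code from the gate step is what blocks the deploy.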

Judges

Any VLM that can analyze images works as a judge:

evaly bench -m flux-schnell -p "A cat" -j gemini-2.5-flash            # Default
evaly bench -m flux-schnell -p "A cat" -j openai/gpt-5.2              # OpenAI
evaly bench -m flux-schnell -p "A cat" -j anthropic/claude-sonnet-4-6 # Anthropic
evaly bench -m flux-schnell -p "A cat" -j fal/gemini-2.5-flash        # Via fal.ai (single key)
evaly bench -m flux-schnell -p "A cat" -j ollama/qwen2.5-vl:7b        # Local

Consensus Mode

Use multiple judges for more reliable scores:

evaly bench -m flux-schnell -p "A cat" \
  --judges "gemini-2.5-flash,openai/gpt-5.2"

Two judges score in parallel. If they disagree, a third breaks the tie.
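The tie-break flow above can be sketched roughly as follows (illustrative only; the disagreement threshold and median-based resolution are assumptions, not the SDK's actual consensus logic):

```python
import statistics

# Toy consensus sketch: average two judge scores when they roughly agree;
# otherwise consult a third judge and take the median of all three.
def consensus(score_a: float, score_b: float, tie_breaker=None,
              disagreement: float = 2.0) -> float:
    if abs(score_a - score_b) <= disagreement:
        return (score_a + score_b) / 2
    third = tie_breaker()  # e.g. a third judge scoring the same image
    return statistics.median([score_a, score_b, third])
```

The median keeps a single outlier judge from dominating the final score.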

Optional Extras

pip install "evalytic[metrics]"  # CLIP Score + LPIPS + ArcFace (~2GB)
pip install "evalytic[all]"      # Everything

Configuration

Create evalytic.toml in your project root:

[keys]
fal = "your_fal_key"
gemini = "your_gemini_key"

[bench]
judge = "gemini-2.5-flash"
dimensions = ["visual_quality", "prompt_adherence"]
concurrency = 4

[bench.dimension_weights]
input_fidelity = 0.5
visual_quality = 0.1

Documentation

Full docs at docs.evalytic.ai

License

MIT
