
veritail


An LLM evals framework tailored for ecommerce search.

veritail scores every query-result pair, computes IR metrics from those scores, and runs deterministic quality checks — all in a single command. Run it on every release to track search quality, or compare two configurations side by side to measure the impact of a change before it ships.

Five evaluation layers:

  • LLM-as-a-Judge scoring — every query-result pair scored 0-3 with structured reasoning, using any cloud or local model
  • IR metrics — NDCG, MRR, MAP, Precision, and attribute match computed from LLM scores
  • Deterministic quality checks — low result counts, near-duplicate results, out-of-stock ranking issues, price outliers, and more
  • Autocorrect evaluation — catches intent-altering or unnecessary query corrections
  • Autocomplete evaluation — deterministic checks and LLM-based semantic evaluation for type-ahead suggestions

Includes 14 built-in ecommerce verticals for domain-aware judging, with support for custom vertical context. Optional Langfuse integration for full observability — every judgment, score, and LLM call traced and grouped by evaluation run.

Why veritail

Ecommerce search is one of the hardest systems to evaluate. A "wedding guest dress" query that surfaces white dresses fails a constraint so obvious it goes unstated in human review — but not in veritail's fashion vertical. A "wire for 20-amp circuit" query returning 14 AWG wire isn't a budget option — it's a code violation and a fire hazard. Generic relevance frameworks don't know any of this. Your engineers shouldn't have to encode it from scratch every time.

The typical alternative is human annotation: pull a sample of queries, have someone manually grade the results, average the scores. It's slow, expensive, inconsistent across annotators, and gone the moment your search config changes. You can't run it on every PR. You can't run it at 2am before a deployment.

veritail replaces that workflow. It wires directly into your search API through a thin adapter, sends every query-result pair to an LLM judge that already understands your vertical's domain rules, computes NDCG/MRR/MAP/Precision from those scores, and runs a battery of deterministic checks — all in a single command. You get a full evaluation report in the time it used to take just to assign annotators.

The result: search quality becomes something you can measure on every release, compare across configurations, and track over time — the same way you track latency or error rates. Not a quarterly audit. A continuous signal.

Search relevance evaluation demo

LLM-as-a-Judge scores every query-result pair, computes NDCG/MRR/MAP/Precision, runs deterministic checks, and evaluates autocorrect behavior.
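For intuition, here is how NDCG and MRR fall out of graded 0-3 judgments. This is a standard formulation (exponential gain, log2 discount, and a relevance threshold of 2 for MRR are assumptions here, not necessarily veritail's exact choices):

```python
import math


def ndcg(scores, k=10):
    """NDCG@k over graded relevance scores (0-3), using 2^rel - 1 gain."""
    def dcg(rels):
        return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(scores, reverse=True))
    return dcg(scores) / ideal if ideal > 0 else 0.0


def mrr(scores, relevant_threshold=2):
    """Reciprocal rank of the first result judged relevant."""
    for i, r in enumerate(scores):
        if r >= relevant_threshold:
            return 1.0 / (i + 1)
    return 0.0
```

A perfectly ordered list like `[3, 2, 1, 0]` yields NDCG of 1.0; any inversion pushes it below 1.0, which is what makes the metric sensitive to ranking changes between configurations.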

Quick Start

1. Install

pip install veritail                   # OpenAI + local models (default)
pip install veritail[anthropic]        # + Claude support
pip install veritail[gemini]           # + Gemini support
pip install veritail[cloud]            # all three cloud providers
pip install veritail[cloud,langfuse]   # everything

The base install includes the OpenAI SDK because it doubles as the client for OpenAI-compatible local servers (Ollama, vLLM, LM Studio, etc.) — so pip install veritail works with both cloud and local models out of the box.

2. Bootstrap starter files (recommended)

veritail init

This generates:

  • adapter.py with a real HTTP request skeleton for both search() and suggest() (endpoint, auth header, timeout, JSON parsing)
  • queries.csv with example search queries (query types are automatically classified by the LLM during evaluation)
  • prefixes.csv with example prefixes (prefix types are automatically inferred from character count)

By default, existing files are not overwritten. Use --force to overwrite.

3. Create a query set (manual option)

query
red running shoes
wireless earbuds
nike air max 90

Optional columns: type (navigational, broad, long_tail, attribute) and category. When omitted, type is automatically classified by the LLM judge before evaluation.
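With the optional columns filled in, the same file might look like this (the category values here are illustrative):

```csv
query,type,category
red running shoes,attribute,footwear
wireless earbuds,broad,electronics
nike air max 90,navigational,footwear
```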

4. Generate queries with an LLM (alternative)

If you don't have query logs yet, let an LLM generate a starter set:

# From a built-in vertical
veritail generate-queries --vertical electronics --llm-model gpt-4o

# From custom instructions
veritail generate-queries --instructions "B2B industrial fastener distributor" --llm-model gpt-4o

# Both vertical and instructions, custom count and output path
veritail generate-queries \
  --vertical foodservice \
  --instructions "BBQ restaurant equipment supplier" \
  --count 50 \
  --output my_queries.csv \
  --llm-model gpt-4o

This writes a CSV with a single query column. Query types are automatically classified by the LLM during evaluation. Review and edit the generated queries before running an evaluation — the file is designed for human-in-the-loop review.

Cost note: Query generation makes a single LLM call (a fraction of a cent with most cloud models).

5. Create an adapter (manual option)

# my_adapter.py
from veritail import SearchResponse, SearchResult


def search(query: str) -> SearchResponse:
    results = my_search_api.query(query)  # your own search client
    items = [
        SearchResult(
            product_id=r["id"],
            title=r["title"],
            description=r["description"],
            category=r["category"],
            price=r["price"],
            position=i,
            in_stock=r.get("in_stock", True),
            attributes=r.get("attributes", {}),
        )
        for i, r in enumerate(results)
    ]
    # To report autocorrect / "did you mean" corrections, use:
    #   return SearchResponse(results=items, corrected_query="corrected text")
    return SearchResponse(results=items)

Adapters can return either SearchResponse or a bare list[SearchResult] (backward compatible). Use SearchResponse when your search engine returns autocorrect information.

6. Run evaluation

export OPENAI_API_KEY=sk-...

veritail run \
  --queries queries.csv \
  --adapter my_adapter.py \
  --llm-model gpt-4o \
  --top-k 10 \
  --open

For a detailed breakdown of API call volume and cost control options, see LLM Usage & Cost.

Outputs are written under:

eval-results/<generated-or-custom-config-name>/

7. Compare two search configurations

veritail run \
  --queries queries.csv \
  --adapter bm25_search_adapter.py --config-name bm25-baseline \
  --adapter semantic_search_adapter.py --config-name semantic-v2 \
  --llm-model gpt-4o

The comparison report shows metric deltas, overlap, rank correlation, and position shifts.
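For intuition, the overlap and rank-correlation parts of that report can be sketched with standard formulas (Jaccard overlap and Kendall's tau over the shared result IDs; veritail's exact implementation may differ):

```python
def overlap(a, b):
    """Jaccard overlap between two lists of result IDs."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0


def kendall_tau(a, b):
    """Kendall's tau between two orderings, computed over their shared items."""
    shared = [x for x in a if x in set(b)]
    rank_b = {item: i for i, item in enumerate(b)}
    n = len(shared)
    if n < 2:
        return 0.0
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            # A pair is concordant if config B ranks it in the same order as config A.
            if rank_b[shared[i]] < rank_b[shared[j]]:
                concordant += 1
            else:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Identical orderings give tau of 1.0, fully reversed orderings give -1.0, so a tau near zero with high overlap means the two configurations retrieve the same products but rank them very differently.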

Vertical Guidance

--vertical injects domain-specific scoring guidance into the judge prompt. Each vertical teaches the LLM judge what matters most in a particular ecommerce domain — the hard constraints, industry jargon, certification requirements, and category-specific nuances that generic relevance scoring would miss.

Scoring guidance is layered — the universal rubric applies to every evaluation, the vertical adds domain rules, and the overlay injects category-specific rules selected per query:

Rubric — universal ecommerce search relevance rules applied to every evaluation
  → Vertical — domain-specific rules for each major retail business domain (e.g. foodservice, home improvement, fashion)
    → Overlay — category-specific rules selected based on the user query (e.g. tabletop supplies, commercial kitchen equipment)

Choose the vertical that best matches the ecommerce site you are evaluating.

| Vertical | Description | Example retailers |
| --- | --- | --- |
| automotive | Aftermarket, OEM, and remanufactured parts for cars, trucks, and light vehicles | RockAuto, AutoZone, FCP Euro |
| beauty | Skincare, cosmetics, haircare, fragrance, and body care | Sephora, Ulta Beauty, Dermstore |
| electronics | Consumer electronics and computer components | Best Buy, Newegg, B&H Photo |
| fashion | Clothing, shoes, and accessories | Nordstrom, ASOS, Zappos |
| foodservice | Commercial kitchen equipment and supplies for restaurants, cafeterias, and catering | WebstaurantStore, Katom, TigerChef |
| furniture | Furniture and home furnishings for residential, commercial, and contract use | Wayfair, Pottery Barn, IKEA |
| groceries | Online grocery retail covering food, beverages, and household essentials | Instacart, Amazon Fresh, FreshDirect |
| home-improvement | Building materials, hardware, plumbing, electrical, and tools for contractors and DIY | Home Depot, Lowe's, Menards |
| industrial | Industrial supply and MRO (Maintenance, Repair, and Operations) | Grainger, McMaster-Carr, Fastenal |
| marketplace | Multi-seller marketplace platforms | Amazon, eBay, Etsy |
| medical | Medical and surgical supplies for hospitals, clinics, and home health | Henry Schein, Medline, McKesson |
| office-supplies | Office products, ink/toner, paper, and workspace equipment | Staples, Office Depot, W.B. Mason |
| pet-supplies | Pet food, treats, toys, health products, and habitat equipment across all species | Chewy, PetSmart, Petco |
| sporting-goods | Athletic equipment, apparel, and accessories across all sports and outdoor activities | Dick's Sporting Goods, REI, Academy Sports |

You can also provide a custom vertical as a plain text file with --vertical ./my_vertical.txt. Use the built-in verticals in src/veritail/verticals/ as templates.
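A custom vertical file is plain text injected into the judge prompt. A minimal hypothetical sketch (the domain and rules below are invented for illustration):

```text
Vertical: specialty coffee equipment retailer.
Hard constraints: grinder queries must match burr type (flat vs conical)
when specified; espresso machine queries must respect voltage (110V vs 220V).
Jargon: "pour over" includes drippers, kettles, and filters; "prosumer"
implies semi-commercial build quality.
```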

Use --instructions to layer enterprise-specific rules on top of a vertical — things like brand priorities, certification requirements, or domain jargon unique to your store. See Enterprise Instructions for details.

Examples:

# Built-in vertical
veritail run \
  --queries queries.csv \
  --adapter my_adapter.py \
  --vertical foodservice

# Custom vertical text file
veritail run \
  --queries queries.csv \
  --adapter my_adapter.py \
  --vertical ./my_vertical.txt

# Vertical + enterprise-specific rules
veritail run \
  --queries queries.csv \
  --adapter my_adapter.py \
  --vertical home-improvement \
  --instructions "Pro contractor supplier. Queries for lumber should always prioritize pressure-treated options."

# Vertical + detailed instructions from a file
veritail run \
  --queries queries.csv \
  --adapter my_adapter.py \
  --vertical home-improvement \
  --instructions instructions.txt

More Reports

Evaluate autocomplete suggestions


Autocomplete evaluation demo

Deterministic checks (duplicates, prefix coherence, encoding) and LLM-based semantic scoring for suggestion relevance and diversity.
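To make the deterministic side concrete, here is an illustrative sketch of two such checks (not veritail's actual implementation): flagging duplicate suggestions after normalization, and suggestions that do not contain the typed prefix. Real prefix-coherence logic is typically more forgiving than this substring test.

```python
def check_suggestions(prefix, suggestions):
    """Return (index, issue) flags for a type-ahead suggestion list."""
    flags = []
    seen = set()
    for i, s in enumerate(suggestions):
        key = " ".join(s.lower().split())  # normalize case and whitespace
        if key in seen:
            flags.append((i, "duplicate"))
        seen.add(key)
        if prefix.lower().strip() not in key:
            flags.append((i, "prefix_incoherent"))
    return flags
```

For example, `check_suggestions("run", ["running shoes", "Running Shoes", "treadmill"])` flags index 1 as a duplicate and index 2 as prefix-incoherent.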


Side-by-side comparison


Side-by-side comparison demo

Two search configurations compared head-to-head: per-query NDCG deltas, win/loss/tie analysis, rank correlation, and result overlap.


Langfuse observability


Langfuse observability demo

Every judgment, score, and LLM call traced and grouped by evaluation run — with full prompt/response visibility.

Documentation

| Guide | Description |
| --- | --- |
| Evaluation Model | LLM judgment scoring, deterministic checks, and IR metrics |
| Supported LLM Providers | Cloud providers, local model servers, and model quality guidance |
| LLM Usage & Cost | API call volume breakdown and cost control strategies |
| Batch Mode & Resume | 50% cost reduction via batch APIs and resuming interrupted runs |
| Autocorrect Evaluation | Evaluating query correction quality |
| Autocomplete Evaluation | Type-ahead suggestion evaluation with checks and LLM scoring |
| Enterprise Instructions | Business-specific evaluation rules |
| Custom Checks | Adding domain-specific deterministic checks |
| CLI Reference | Complete flag reference for all commands |
| Backends | File and Langfuse storage backends |
| Development | Local development setup and running tests |
| Contributing | Contribution workflow and pull request checklist |

Disclaimer

veritail uses large language models to generate relevance judgments. LLM outputs can be inaccurate, inconsistent, or misleading. All scores, reasoning, and reports produced by this tool should be reviewed by a qualified human before informing production decisions. veritail is an evaluation aid, not a substitute for human judgment. The authors are not liable for any decisions made based on its output or for any API costs incurred by running evaluations. Users are responsible for complying with the terms of service of any LLM provider they use with this tool.

  • Evaluation data is sent to the configured LLM provider for scoring — use a local model if data must stay on-premise.
  • Adapter modules and custom check modules are loaded and executed as Python code at runtime — only run files you trust.
  • Evaluation results, including product catalog data, are written to disk in plaintext under the output directory (eval-results/ by default) — ensure this directory is excluded from version control and not stored in shared or publicly accessible locations.

License

MIT
