A framework for testing LLM performance using PydanticAI

LLM Tester

A Python framework for benchmarking and comparing LLM providers on structured data extraction tasks, and for optimizing the prompts you use with them. It also serves as a bridge for integrating LLM-powered structured data extraction into your applications using Pydantic models.

Purpose

LLM Tester helps you:

  1. Evaluate LLMs: Objectively measure how accurately different LLMs extract structured data.
  2. Optimize Prompts: Refine prompts to improve extraction accuracy.
  3. Analyze Costs: Track token usage and costs across providers.
  4. Integrate LLMs: Easily add structured data extraction capabilities to your Python applications.

The framework provides a consistent way to interact with various LLM providers and evaluate their performance on your specific data extraction needs.

Architecture

LLM Tester features a flexible, pluggable architecture for integrating with LLM providers. It supports native API integrations (including OpenAI, Anthropic, Mistral, Google, and OpenRouter), PydanticAI integration, and mock implementations for testing.

For more details on the architecture, see the documentation.

Features

  • Benchmark and compare multiple LLM providers.
  • Validate responses against Pydantic models.
  • Calculate extraction accuracy.
  • Optimize prompts for better results.
  • Generate detailed test reports.
  • Manage configuration centrally.
  • Use mock providers for testing without API keys.
  • Track token usage and costs.
  • Easily integrate structured data extraction into your applications.
  • Query model prices: Compare pricing across providers and models to make cost-effective decisions.
  • Configure models: Add, edit, and manage LLM models for each provider.
  • Update cost information: Keep pricing data current with OpenRouter API integration.
  • File Upload Support: Process and test LLMs with file inputs (e.g., images) for multimodal analysis. See the guide on Using Files for more details.

A word about the word "model"

Unfortunately, things can get a little confusing with the word "model", so I've opted to use py_models and llm_models as distinct terms (a short example follows the definitions below):

  • py_models: Refers to the Pydantic models used for structured data extraction.
  • llm_models: Refers to the LLM models provided by various providers (e.g., OpenAI, Anthropic).
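
To make the distinction concrete, here is a minimal sketch; the class and the model string are purely illustrative:

from pydantic import BaseModel


# py_model: a Pydantic schema describing the structure you want extracted
class Greeting(BaseModel):
    greeting: str


# llm_model: a provider-qualified model identifier, e.g. as passed to --llm_models
llm_model = "openai:gpt-4"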

Example Pydantic Models

LLM Tester includes example models for common extraction tasks:

  1. Job Advertisements: Extract structured job information.
  2. Product Descriptions: Extract product details.

You can easily add your own custom models for specific tasks. See the documentation for details.
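
For orientation, a job advertisement py_model might look roughly like the sketch below. The bundled job_ads model defines its own schema, so treat these field names as assumptions for illustration only.

from typing import List, Optional

from pydantic import BaseModel


class JobAd(BaseModel):
    # Illustrative fields; the real model's schema may differ
    title: str
    company: str
    location: Optional[str] = None
    required_skills: List[str] = []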

Installation

You can install llm-tester from PyPI or by cloning the repository.

Installing from PyPI

pip install pydantic-llm-tester

Installing from Source

# Clone the repository
# git clone https://github.com/yourusername/llm-tester.git # Replace with actual repo URL
cd llm-tester

# Create and activate virtual environment (recommended)
python3 -m venv venv
source venv/bin/activate

# Install in editable mode
pip install -e .

Configuration

After installation, configure your API keys:

# Make sure your virtual environment is activated
source venv/bin/activate

# Configure API Keys (Interactive)
llm-tester configure keys
# This will prompt for missing keys found in provider configs and offer to save them to the default .env path (e.g., src/pydantic_llm_tester/.env or project root).

Make sure your API keys are set in the .env file (at the default location or specified via --env) or as environment variables. The llm-tester configure keys command helps manage this.
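
For reference, a .env file typically holds one key per provider. The exact variable names are defined by each provider's config, so treat the names below as assumptions and prefer llm-tester configure keys to set them interactively:

# Example .env contents (variable names are illustrative; check your provider configs)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=...
MISTRAL_API_KEY=...
GOOGLE_API_KEY=...
OPENROUTER_API_KEY=...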

Usage

LLM Tester can be used via the command-line interface (CLI) or as a Python library in your applications.

CLI Usage

The primary way to use the tool is via the llm-tester command after activating your virtual environment.

# Make sure the virtual environment is activated
source venv/bin/activate

# Show help and available commands
llm-tester --help

For detailed CLI command references, see the documentation.

Key CLI commands (a combined example follows this list):

  • Scaffolding: Quickly set up new providers and models.

    llm-tester scaffold --help
    

    It is recommended to start by scaffolding a new model or provider.

  • Model Prices: Query and display pricing information for LLM models.

    llm-tester prices --help
    

    See Prices Documentation for details.

  • Model Configuration: Manage LLM models for providers (add, edit, remove, list).

    llm-tester models --help
    

    See Models Documentation for details.

  • Cost Management: Update and manage model costs from OpenRouter API.

    llm-tester costs --help
    

    See Costs Documentation for details.

  • Running Tests: Execute tests against configured providers.

    llm-tester run --help
    
  • Example: Run a specific test that uses a file input, with full debugging output:

    llm-tester -vv run -p openai -m job_ads -f job_ad_from_image --llm_models openai:gpt-4
    
  • Configuration: Manage API keys and provider settings.

    llm-tester configure --help
    
  • Listing: List available models, providers, and test cases.

    llm-tester list --help
    
  • Providers: Enable/disable providers and manage their models.

    llm-tester providers --help
    
  • Schemas: List available extraction schemas.

    llm-tester schemas --help
    
  • Recommendations: Get LLM-assisted model recommendations.

    llm-tester recommend-model --help
    
  • Interactive Mode: Launch a menu-driven session.

    llm-tester interactive
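
Putting these commands together, a typical first session might look like the sketch below; the flags mirror the examples above, so adapt providers, modules, and llm_models to your own setup:

# 1. Configure API keys interactively
llm-tester configure keys

# 2. Run the bundled job_ads tests against OpenAI with verbose output
llm-tester -vv run -p openai -m job_ads --llm_models openai:gpt-4

# Or explore everything from a menu instead
llm-tester interactive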
    

Python API Usage

You can integrate LLM Tester into your Python applications. See the documentation for detailed API usage.

from pydantic_llm_tester import LLMTester # For using the installed package
# Or, if running from source for development:
# from pydantic_llm_tester.llm_tester import LLMTester

# Example: Using LLM Tester as a bridge for structured data extraction
# Initialize tester with providers and your custom py_models directory
tester = LLMTester(providers=["openai"], test_dir="/path/to/your/custom/py_models")

# Assuming you have a model named 'my_task' in your custom py_models directory
# and a test case named 'example' with source and prompt files.

# You can directly run a specific test case by name
# This requires knowing the test case ID (module_name/test_case_name)
# In this example, let's assume a test case 'my_task/example' exists.
# You would typically discover test cases first:
# test_cases = tester.discover_test_cases(modules=["my_task"])
# example_test_case = next((tc for tc in test_cases if tc['name'] == 'example'), None)

# For a simple "Hello World" style example using an external model:
# 1. Scaffold a new model: llm-tester scaffold model --interactive (e.g., name it 'hello_world')
# 2. Update the generated model.py to define a simple schema (e.g., just a 'greeting' field).
# 3. Update the generated tests/sources/example.txt and tests/prompts/example.txt
#    Source: "Hello, world!"
#    Prompt: "Extract the greeting from the text."
#    Expected: {"greeting": "Hello, world!"}
# 4. Run the test using the CLI: llm-tester run --test-dir /path/to/your/custom/py_models --providers mock --llm_models mock:mock-model --filter hello_world/example
#    (Using mock provider for simplicity, replace with real provider if configured)

# Programmatic "Hello World" example (assuming the 'hello_world' model is set up as above)
# Define a simple model class directly for demonstration (or import from your external model file)
from pydantic import BaseModel


class HelloWorldModel(BaseModel):
    greeting: str


# Define a simple test case structure
hello_world_test_case = {
    'module': 'hello_world',
    'name': 'example',
    'model_class': HelloWorldModel,
    'source_path': '/path/to/your/custom/py_models/hello_world/tests/sources/example.txt',  # Replace with actual path
    'prompt_path': '/path/to/your/custom/py_models/hello_world/tests/prompts/example.txt',  # Replace with actual path
    'expected_path': '/path/to/your/custom/py_models/hello_world/tests/expected/example.json'  # Replace with actual path
}

# Initialize tester with a provider (e.g., mock for this example) and the directory containing your model
tester = LLMTester(providers=["mock"], test_dir="/path/to/your/custom/py_models")  # Replace path and provider as needed

# Run the specific test case
results = tester.run_test(hello_world_test_case)

# Process and print the result
print("\nHello World Test Result:")
for provider, result in results.items():
    print(f"Provider: {provider}")
    if "error" in result:
        print(f"  Error: {result['error']}")
    else:
        print(f"  Success: {result.get('validation', {}).get('success')}")
        print(f"  Extracted Data: {result.get('extracted_data')}")
        print(f"  Accuracy: {result.get('validation', {}).get('accuracy'):.2f}%")

Testing

LLM Tester includes a test suite using pytest to ensure the framework's functionality and stability.

To run the tests:

# Make sure your virtual environment is activated
source venv/bin/activate

# Run all tests
pytest

# Run tests for a specific module (e.g., CLI commands)
pytest tests/cli/

For more details on testing, see the documentation. (Note: A dedicated testing guide is planned).

Provider System

LLM Tester uses a pluggable provider system. See the documentation for architectural details and the guide for adding new providers.

Adding New Models

You can easily add new extraction models using the llm-tester scaffold model command or by following the manual steps. See the documentation for details.
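
Based on the paths used in the Python example above, a custom py_model directory looks roughly like the sketch below; llm-tester scaffold model generates the authoritative layout:

my_task/
├── model.py                      # Pydantic model (py_model) definition
└── tests/
    ├── sources/example.txt       # source text to extract from
    ├── prompts/example.txt       # prompt sent to the LLM
    └── expected/example.json     # expected structured output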

General implementation notes

This package was initially written using Claude Code, with only minimal manual intervention and editing. Further improvements have been made with Cline, using Gemini 2.5 and other models. All LLM-generated code is reviewed and tested by the author, and all architectural decisions are mine.

License

MIT


© 2025 Timo Railo
