Valtron Core
A Python framework for evaluating and optimizing LLM calls across any provider.
Proudly built and backed by InferLink.
Website | Documentation | Quick Start | Examples | Contributing
Valtron lets you run the same task across multiple LLMs simultaneously, then compare them on accuracy, cost, and speed. Define a prompt, a labeled dataset, and the models you want to test — Valtron evaluates each one, scores the results, and produces an interactive HTML/PDF report with an AI recommendation.
Features
- Multi-model comparison — run GPT-4o, Claude, Gemini, Llama, and any LiteLLM-compatible model side-by-side on the same dataset
- Detailed evaluation — label classification (string match) and structured extraction (nested JSON with field-level precision/recall/F1)
- Prompt optimization — seven built-in strategies (few-shot generation, chain-of-thought, decomposition, hallucination filtering, and more) applied per model
- Cost and latency tracking — per-document and aggregate cost, response time, and accuracy metrics
- HTML and PDF reports — interactive charts, per-document breakdowns, and an AI-generated recommendation
- Transformer training — train a local DistilBERT classifier from your evaluation data for zero-cost inference
- Self-hosted model support — Ollama, vLLM, LM Studio, and more for free, private inference
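To make the field-level metrics concrete: for structured extraction, precision counts how many extracted field values are correct, while recall counts how many expected values were actually extracted. A standalone sketch of the idea, not Valtron's internal implementation:

```python
def field_scores(predicted: dict, expected: dict) -> dict:
    """Per-document field-level precision/recall/F1 for flat key/value extraction."""
    pred_items = set(predicted.items())
    true_items = set(expected.items())
    tp = len(pred_items & true_items)  # fields extracted with the correct value
    precision = tp / len(pred_items) if pred_items else 0.0
    recall = tp / len(true_items) if true_items else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

scores = field_scores(
    predicted={"name": "Ada Lovelace", "affiliation": "Analytical Engine Project"},
    expected={"name": "Ada Lovelace", "affiliation": "University of London"},
)
print(scores)  # one of two fields matches: precision = recall = f1 = 0.5
```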
Quick Start
Prerequisites
- Python 3.12+
- Poetry for dependency management
- Docker (optional)
Install

```bash
pip install valtron-core
```

Or with Poetry:

```bash
poetry install
```
Copy `.env.example` to `.env` and add at least one provider key:

```
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
```
Run an evaluation
```python
from valtron_core.recipes import ModelEval

data = [
    {"id": "1", "content": "Fast shipping, exactly what I ordered.", "label": "positive"},
    {"id": "2", "content": "Wrong item sent. Refund process was painful.", "label": "negative"},
    {"id": "3", "content": "Average experience, nothing special.", "label": "neutral"},
]

config = {
    "models": [
        {"name": "gpt-4o-mini"},
        {"name": "claude-haiku-4-5-20251001"},
    ],
    "prompt": "Classify the sentiment of the following review as positive, negative, or neutral.\n\nReview: {document}\n\nSentiment:",
    "output_dir": "./results",
}

experiment = ModelEval(config=config, data=data)
report_path = experiment.run()
print(f"Report: {report_path}")
```
Open `./results/evaluation_report.html` to see accuracy, cost, latency, and a recommendation.
Configuration
The config accepts a dict, a JSON file path, or a typed `ModelEvalConfig` object. Key fields:

| Field | Required | Description |
|---|---|---|
| `models` | Yes | Models to evaluate; each has a `name` and optional `prompt_manipulation` list |
| `prompt` | Yes | Prompt template containing `{document}` |
| `output_dir` | No | Where to write results (can also be passed to `run()`) |
| `few_shot` | No | Settings for few-shot example generation |
| `output_formats` | No | `["html"]` by default; add `"pdf"` for a PDF report |
Prompt manipulation options (set per model via `"prompt_manipulation": [...]`):

- `"explanation"` — add chain-of-thought reasoning before the answer
- `"few_shot"` — inject generated few-shot examples into the prompt
- `"decompose"` — split the prompt into sequential sub-calls (structured extraction only)
- `"hallucination_filter"` — drop extracted values not grounded in the source text (structured extraction only)
- `"multi_pass"` — call the model twice and merge results (structured extraction only)
See Config Format for the full reference, including field-level metrics, structured extraction, and few-shot settings.
Prefer a guided setup? The Configuration Wizard is a browser-based UI that builds your config file step by step:
```bash
poetry run python -m valtron_core.utilities.config_wizard

# or with Docker
docker compose run --rm -p 5000:5000 server poetry run python -m valtron_core.utilities.config_wizard
```
Open http://localhost:5000 in your browser.
Examples
| Script | What it demonstrates |
|---|---|
| `examples/sentiment_classification.py` | Label classification across two models |
| `examples/affiliation_extraction.py` | Structured extraction with field-level metrics |
| `examples/transformer_comparison.py` | Train a local transformer and compare against cloud LLMs |
| `examples/multimodal_molecules.py` | Multimodal classification with image attachments |
| `examples/incremental_evaluation.py` | Load a prior run and add new models without re-evaluating |
```bash
# Run any example
poetry run python examples/sentiment_classification.py

# With Docker
docker compose run --rm server poetry run python examples/sentiment_classification.py
```
See Examples for detailed walkthroughs.
Docker
```bash
# Build the image (first run is slow; installs system dependencies including LaTeX)
docker compose build server

# Run an example
docker compose run --rm server poetry run python examples/sentiment_classification.py

# Interactive shell
docker compose run --rm --service-ports server bash

# Run the test suite
docker compose run --rm pipeline-test
```
Using Ollama with Docker? Set `OLLAMA_API_BASE=http://host.docker.internal:11434` in `.env` so the container can reach Ollama running on your host.
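With that set, a local model can sit alongside hosted ones in the same config. A minimal sketch, assuming model names are passed through to LiteLLM, which uses provider-prefixed names such as `ollama/<model>`; the `llama3.1` tag is a placeholder for whatever model you have pulled locally:

```python
config = {
    "models": [
        {"name": "gpt-4o-mini"},      # hosted model, billed per token
        {"name": "ollama/llama3.1"},  # local model served by Ollama, zero API cost
    ],
    "prompt": "Classify the sentiment of the review.\n\nReview: {document}\n\nSentiment:",
}
```

The cost column in the report then makes the cloud-vs-local trade-off explicit.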
Development
```bash
# Install dependencies
poetry install

# Run tests
poetry run pytest

# Run with coverage
poetry run pytest --cov=src/valtron_core --cov-report=html

# Format and lint
poetry run black src/ tests/
poetry run ruff check src/ tests/
poetry run mypy src/
```
The test suite covers the evaluation pipeline, report generation, prompt optimizers, few-shot generation, transformer training, and the `ModelEval` recipe. The Docker Compose `pipeline-test` service runs the full suite with coverage and JUnit XML output.
Architecture
Valtron is built on LiteLLM, which provides a unified interface to 100+ LLM providers. On top of that, Valtron adds the evaluation pipeline, prompt optimization strategies, report generation, and transformer training:
```
Input data (documents + expected labels)
        ↓
Config (models, prompt, optimizations)
        ↓
ModelEval.run()
 ├── Generate few-shot examples (optional)
 ├── Prepare per-model prompts (apply manipulations)
 ├── Evaluate all models concurrently
 ├── Compute metrics (accuracy, cost, time, field scores)
 └── Generate report (HTML + optional PDF)
        ↓
evaluation_report.html · models/*.json · metadata.json
```
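The concurrent-evaluation step can be pictured with a plain thread pool: each model's evaluation is independent, and LLM calls are I/O-bound, so they parallelize well. This is a simplified sketch, not Valtron's actual implementation, and `evaluate_model` is a hypothetical stand-in for the per-model evaluation call:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_model(model_name: str, documents: list[str]) -> dict:
    # Stand-in: in reality this would call the LLM once per document
    # and score the responses against the expected labels.
    return {"model": model_name, "n_docs": len(documents)}

documents = ["doc one", "doc two", "doc three"]
models = ["gpt-4o-mini", "claude-haiku", "ollama/llama3.1"]

# One worker per model; results come back in the same order as `models`.
with ThreadPoolExecutor(max_workers=len(models)) as pool:
    results = list(pool.map(lambda m: evaluate_model(m, documents), models))

print([r["model"] for r in results])
```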
Full documentation lives in `docs/valtron/`. To run the docs site locally:

```bash
cd docs/valtron && docker compose up
```
Then open http://localhost:3000.
Contributing
- Fork the repository
- Create a feature branch
- Make your changes and run the test suite
- Submit a pull request
License
Apache 2.0