# Kalibr

Adaptive routing for AI agents. Kalibr learns which models work best for your tasks and routes automatically.
## Installation

```bash
pip install kalibr
```
## Quick Start

```python
from kalibr import Router

router = Router(
    goal="extract_company",
    paths=["gpt-4o", "claude-sonnet-4-20250514"]
)

response = router.completion(
    messages=[{"role": "user", "content": "Extract the company: Hi, I'm Sarah from Stripe."}]
)

router.report(success=True)
```

Kalibr picks the best model, makes the call, and learns from the outcome.
## How It Works

1. **You define paths**: models (and optionally tools/params) that can handle your task
2. **Kalibr picks**: uses Thompson Sampling to balance exploration vs. exploitation
3. **You report outcomes**: tell Kalibr whether it worked
4. **Kalibr learns**: routes more traffic to what works
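The Thompson Sampling step can be sketched as a Beta-Bernoulli bandit. This is an illustrative model of the idea, not Kalibr's actual internals; the simulated success rates below are made up:

```python
import random

random.seed(0)  # deterministic for the demo

class ThompsonRouter:
    """Beta-Bernoulli Thompson Sampling over a set of paths."""

    def __init__(self, paths):
        # Every path starts with a uniform Beta(1, 1) prior on its success rate.
        self.stats = {p: {"alpha": 1, "beta": 1} for p in paths}

    def pick(self):
        # Sample a plausible success rate for each path, pick the highest.
        draws = {p: random.betavariate(s["alpha"], s["beta"])
                 for p, s in self.stats.items()}
        return max(draws, key=draws.get)

    def report(self, path, success):
        # A success sharpens the posterior toward high rates; a failure, toward low.
        key = "alpha" if success else "beta"
        self.stats[path][key] += 1

router = ThompsonRouter(["gpt-4o", "claude-sonnet-4-20250514"])
for _ in range(200):
    path = router.pick()
    # Pretend one path succeeds 90% of the time and the other 40%.
    success = random.random() < (0.9 if path == "gpt-4o" else 0.4)
    router.report(path, success)
```

Because a path's posterior tightens as evidence accumulates, most of the 200 picks end up going to the higher-success path while the other still gets occasional exploratory traffic.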
## Paths

A path is a model plus optional tools and optional params:

```python
# Just models
paths = ["gpt-4o", "claude-sonnet-4-20250514", "gpt-4o-mini"]

# With tools
paths = [
    {"model": "gpt-4o", "tools": ["web_search"]},
    {"model": "claude-sonnet-4-20250514", "tools": ["web_search", "browser"]},
]

# With params
paths = [
    {"model": "gpt-4o", "params": {"temperature": 0.7}},
    {"model": "gpt-4o", "params": {"temperature": 0.2}},
]
```
## Advanced Path Configuration

### Routing Between Parameters

Kalibr can route between different parameter configurations of the same model:

```python
from kalibr import Router

router = Router(
    goal="creative_writing",
    paths=[
        {"model": "gpt-4o", "params": {"temperature": 0.3}},
        {"model": "gpt-4o", "params": {"temperature": 0.9}},
        {"model": "claude-sonnet-4-20250514", "params": {"temperature": 0.7}}
    ]
)

response = router.completion(messages=[...])
router.report(success=True)
```

Each unique (model, params) combination is tracked separately, so Kalibr learns which configuration works best for your specific goal.
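One way to picture "tracked separately" is that each (model, params, tools) combination gets its own stable key, so two configurations of the same model are distinct arms. A minimal sketch of such a keying scheme; `path_key` is illustrative, not Kalibr's real internal identifier:

```python
import json

def path_key(path):
    """Derive a stable identifier for a path (hypothetical helper)."""
    if isinstance(path, str):
        return path  # bare model name is its own key
    # Sort params so {"a": 1, "b": 2} and {"b": 2, "a": 1} map to the same arm.
    params = json.dumps(path.get("params", {}), sort_keys=True)
    tools = ",".join(sorted(path.get("tools", [])))
    return f'{path["model"]}|params={params}|tools={tools}'

k_low = path_key({"model": "gpt-4o", "params": {"temperature": 0.3}})
k_high = path_key({"model": "gpt-4o", "params": {"temperature": 0.9}})
```

Here `k_low` and `k_high` differ even though the model is the same, which is exactly what lets the router learn a different success rate for each temperature.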
### Routing Between Tools

```python
router = Router(
    goal="research_task",
    paths=[
        {"model": "gpt-4o", "tools": ["web_search"]},
        {"model": "gpt-4o", "tools": ["code_interpreter"]},
        {"model": "claude-sonnet-4-20250514"}
    ]
)
```
## When to Use get_policy() Instead of Router

For most use cases, use Router: it handles provider dispatch and response conversion automatically. Use get_policy() for advanced scenarios:

- Integrating with frameworks like LangChain that wrap LLM calls
- Custom retry logic or provider-specific features
- Building tools that need fine-grained control

```python
from openai import OpenAI

from kalibr import get_policy, report_outcome

policy = get_policy(goal="summarize")
model = policy["recommended_model"]

# You call the provider yourself
if model.startswith("gpt"):
    client = OpenAI()
    response = client.chat.completions.create(model=model, messages=[...])

# trace_id comes from your own tracing context
report_outcome(trace_id=trace_id, goal="summarize", success=True)
```
## Outcome Reporting

### Automatic (with success_when)

```python
router = Router(
    goal="summarize",
    paths=["gpt-4o", "claude-sonnet-4-20250514"],
    success_when=lambda output: len(output) > 100
)

response = router.completion(messages=[...])
# Outcome reported automatically based on success_when
```
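The automatic flow boils down to: run the call, evaluate the predicate on the output, report the boolean. A minimal sketch of that pattern; all names here are illustrative, not Kalibr's internals:

```python
def auto_report(call_model, success_when, report):
    """Run a model call, evaluate the predicate, report the outcome."""
    output = call_model()
    try:
        ok = bool(success_when(output))
    except Exception:
        # A predicate that raises counts as a failure rather than crashing the app.
        ok = False
    report(success=ok)
    return output

outcomes = []
auto_report(
    call_model=lambda: "x" * 150,                      # fake 150-char completion
    success_when=lambda output: len(output) > 100,     # same predicate as above
    report=lambda success: outcomes.append(success),
)
```

Guarding the predicate matters: `success_when` runs on arbitrary model output, so a `None` or unexpected shape should record a failure instead of raising mid-request.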
### Manual

```python
router = Router(goal="book_meeting", paths=["gpt-4o", "claude-sonnet-4-20250514"])

response = router.completion(messages=[...])
meeting_created = check_calendar_api()
router.report(success=meeting_created)
```
## LangChain Integration

```bash
pip install kalibr[langchain]
```

```python
from kalibr import Router

router = Router(goal="summarize", paths=["gpt-4o", "claude-sonnet-4-20250514"])
llm = router.as_langchain()

chain = prompt | llm | parser
```
## Auto-Instrumentation

Kalibr auto-instruments the OpenAI, Anthropic, and Google SDKs on import:

```python
import kalibr  # Must be first import

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(model="gpt-4o", messages=[...])
# Traced automatically
```

Disable with `KALIBR_AUTO_INSTRUMENT=false`.
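Auto-instrumentation of this kind is typically done by wrapping SDK methods at import time. A generic sketch of that monkey-patching pattern against a stand-in client (this is the general technique, not Kalibr's actual implementation):

```python
import time

class FakeClient:
    """Stand-in for a provider SDK client (illustrative only)."""
    def create(self, model, messages):
        return {"model": model, "content": "ok"}

traces = []

def instrument(cls, method_name):
    """Replace a method with a wrapper that records each call."""
    original = getattr(cls, method_name)

    def wrapper(self, *args, **kwargs):
        start = time.perf_counter()
        result = original(self, *args, **kwargs)  # delegate to the real method
        traces.append({
            "method": method_name,
            "model": kwargs.get("model"),
            "latency_s": time.perf_counter() - start,
        })
        return result

    setattr(cls, method_name, wrapper)

instrument(FakeClient, "create")
response = FakeClient().create(model="gpt-4o", messages=[])
```

Because the wrapper delegates and returns the original result, calling code is unaffected, which is why the instrumenting library must be imported before the SDK's classes are used.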
## Low-Level API

For advanced use cases, you can use the intelligence API directly:

```python
from kalibr import register_path, decide, report_outcome

# Register paths
register_path(goal="book_meeting", model_id="gpt-4o")
register_path(goal="book_meeting", model_id="claude-sonnet-4-20250514")

# Get routing decision
decision = decide(goal="book_meeting")
model = decision["model_id"]

# Make your own LLM call, then report
report_outcome(trace_id="...", goal="book_meeting", success=True)
```
## Other Integrations

```bash
pip install kalibr[crewai]           # CrewAI
pip install kalibr[openai-agents]    # OpenAI Agents SDK
pip install kalibr[langchain-all]    # LangChain with all providers
```
## Configuration

| Variable | Description | Default |
|---|---|---|
| `KALIBR_API_KEY` | API key from dashboard | Required |
| `KALIBR_TENANT_ID` | Tenant ID from dashboard | Required |
| `KALIBR_AUTO_INSTRUMENT` | Auto-instrument LLM SDKs | `true` |
| `KALIBR_INTELLIGENCE_URL` | Intelligence service URL | `https://kalibr-intelligence.fly.dev` |
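A typical setup exports the two required variables before starting your app. The values below are placeholders; use the key and tenant ID from your own dashboard:

```shell
# Placeholder credentials -- substitute your real values.
export KALIBR_API_KEY="kb_your_api_key"
export KALIBR_TENANT_ID="your_tenant_id"

# Optional: turn off SDK auto-instrumentation.
export KALIBR_AUTO_INSTRUMENT=false
```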
## Development

```bash
git clone https://github.com/kalibr-ai/kalibr-sdk-python.git
cd kalibr-sdk-python
pip install -e ".[dev]"
pytest
```
## Contributing

See CONTRIBUTING.md.

## License

Apache-2.0