kantan-agents
kantan-agents is a thin, opinionated wrapper around the OpenAI Agents SDK that makes observability and evaluation "just happen" by default.
What it does
- 🚀 Broad model support: switch providers/models by changing a single name, without rewriting Agent code.
- 🔍 Automatic trace metadata: observability and search are ready out of the box.
- 🧪 Prompt version tracking: keep prompt versions and metadata attached to every run.
- 📦 Context-first outputs: store structured output and history for easy reuse.
- 🤝 Tools + multi-agent handoffs: control tool usage with tool_rules and delegate safely.
Kantan Stack (direction)
Kantan Stack aims to turn “build → run → observe/evaluate → improve” into a single, simple path.
OpenAI Agents SDK powers execution under the hood, but the recommended surface area is just
kantan-llm + kantan-agents.
- kantan-agents: runtime wrapper with standardized trace metadata (this repo).
- kantan-llm: model resolution + tracing backbone (dependency).
- kantan-tools (planned): installable tool packs with clear schemas/permissions.
- kantan-lab (planned): trace/prompt analysis, evals, and regression detection.
Recommended path (Kantan-first)
- Start with Agent + Prompt for versioned instructions.
- Switch models by name without changing Agent code.
- Enable tracing early (SQLite or your tracer of choice).
- Add tools via entry points and control them with tool_rules.
- Use structured output (and RUBRIC) to evaluate and iterate.
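The tool_rules idea boils down to an allow/deny policy over tool names. The sketch below is illustrative only: kantan-agents' actual tool_rules schema is not documented here, so the dict shape (`allow`/`deny` glob lists) and the helper name are assumptions meant to show the pattern, not the library's API.

```python
# Illustrative sketch only: kantan-agents' real tool_rules schema may differ.
# This models the common allow/deny pattern such rules express.
from fnmatch import fnmatch

def is_tool_allowed(tool_name: str, rules: dict) -> bool:
    """Return True if a tool may be called under the given rules.

    Deny patterns win over allow patterns; an absent allow list
    means "allow everything not explicitly denied".
    """
    for pattern in rules.get("deny", []):
        if fnmatch(tool_name, pattern):
            return False
    allow = rules.get("allow")
    if allow is None:
        return True
    return any(fnmatch(tool_name, pattern) for pattern in allow)

rules = {"allow": ["search_*", "calculator"], "deny": ["search_private_*"]}
print(is_tool_allowed("search_web", rules))         # True
print(is_tool_allowed("search_private_db", rules))  # False
print(is_tool_allowed("shell", rules))              # False
```

Deny-before-allow keeps the policy predictable: a broad allow glob can never accidentally re-enable a tool you explicitly blocked.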
Escape hatches (when you must)
- Using the Agents SDK directly is an escape hatch; prefer kantan-llm + kantan-agents.
- Async usage is an escape hatch for ASGI; use it only when you must avoid blocking an event loop.
- If you use the Agents SDK directly, keep prompt versions and trace metadata consistent.
- Swap tracing processors to route data to your preferred backend.
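Swapping processors usually means providing an object that receives trace records and forwards them wherever you like. The processor interface below (a `process()` method taking a dict) is a hypothetical stand-in — consult kantan_llm's tracing docs for the real contract — but it shows the shape of routing traces to a custom backend:

```python
# Hypothetical sketch of a custom trace processor. The interface (an object
# exposing process()) is an assumption; kantan_llm defines the real contract.
class ListTracer:
    """Collects trace records in memory instead of writing to SQLite."""

    def __init__(self):
        self.records = []

    def process(self, trace: dict) -> None:
        # Route the record anywhere: a queue, an HTTP endpoint, stdout...
        self.records.append(trace)

tracer = ListTracer()
tracer.process({"id": "trace-1", "metadata": {"prompt_version": "v1"}})
print(len(tracer.records))  # 1
```

An in-memory collector like this is also handy in tests, where you want to assert on emitted traces without touching disk.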
Quick Start
```python
from kantan_agents import Agent

agent = Agent(name="basic-agent", instructions="You are a helpful assistant.")
context = agent.run("Hello")
print(context["result"].final_output)
```
Model selection
```python
from kantan_agents import Agent

agent = Agent(name="basic-agent", instructions="You are a helpful assistant.", model="gpt-5-mini")
context = agent.run("Hello")
print(context["result"].final_output)
```
Tracing (SQLite)
```python
from kantan_agents import Agent, set_trace_processors
from kantan_llm.tracing import SQLiteTracer

tracer = SQLiteTracer("traces.sqlite3")
set_trace_processors([tracer])

agent = Agent(name="trace-agent", instructions="Answer briefly.")
context = agent.run("Why does tracing help?")
print(context["result"].final_output)
```
AsyncClientBundle (escape hatch)
```python
from kantan_llm import get_async_llm_client
from kantan_agents import Agent

bundle = get_async_llm_client("gpt-5-mini")
agent = Agent(name="basic-agent", instructions="You are a helpful assistant.", model=bundle)
context = agent.run("Hello")
print(context["result"].final_output)
```
Async usage (escape hatch)
```python
from kantan_agents import Agent

# await requires an async context (e.g. an ASGI handler or asyncio.run)
agent = Agent(name="basic-agent", instructions="You are a helpful assistant.")
context = await agent.run_async("Hello")
print(context["result"].final_output)
```
Mini Tutorial (friendly tour)
Think of context as a backpack your Agent carries. Each run drops a fresh result in
context["result"], and you can stash structured output or history alongside it.
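The backpack pattern can be modeled with a plain dict, no library required. This toy sketch (the `run` helper and `history` key are illustrative, not kantan-agents APIs) shows the shape: each run overwrites `context["result"]` while stashed entries accumulate alongside it.

```python
# Toy model of the "backpack" pattern: context is a dict that each run
# updates in place. Mimics the shape without calling the library.
def run(context: dict, user_input: str) -> dict:
    result = f"echo: {user_input}"           # stand-in for the model's answer
    context["result"] = result               # fresh result each run
    context.setdefault("history", []).append(user_input)  # stashed alongside
    return context

context = {}
run(context, "Hello")
run(context, "What is trace metadata?")
print(context["result"])   # echo: What is trace metadata?
print(context["history"])  # ['Hello', 'What is trace metadata?']
```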
Step 1: Give your Agent a name tag (Prompt + metadata)
```python
from kantan_agents import Agent, Prompt

prompt = Prompt(
    name="qa",
    version="v1",
    text="Answer in one short sentence.",
    meta={"tone": "friendly"},
)
agent = Agent(name="support-agent", instructions=prompt)
context = agent.run("What is trace metadata?")
print(context["result"].final_output)
```
This keeps your prompt version and metadata attached to every trace.
Step 2: Switch models with one line
```python
from kantan_agents import Agent

agent = Agent(name="switcher", instructions="Answer in one sentence.", model="gpt-5-mini")
context = agent.run("Why does model switching matter?")
print(context["result"].final_output)
```
Step 3: Turn on tracing (SQLite)
```python
from kantan_agents import set_trace_processors
from kantan_llm.tracing import SQLiteTracer

tracer = SQLiteTracer("traces.sqlite3")
set_trace_processors([tracer])
```
Now your runs write traces. You can read them with plain SQLite:
```python
import sqlite3

conn = sqlite3.connect("traces.sqlite3")
conn.row_factory = sqlite3.Row
row = conn.execute(
    "SELECT id, metadata_json FROM traces ORDER BY id DESC LIMIT 1"
).fetchone()
print(dict(row))
```
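If the metadata column holds a JSON string (the column name suggests so, but the exact schema is an assumption here), it decodes back to a dict with the stdlib. This self-contained demo builds an in-memory table with that assumed shape and reads it back:

```python
# Assumed schema: traces stores metadata as a JSON string in metadata_json.
# In-memory demo of decoding it back to a dict.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE traces (id INTEGER PRIMARY KEY, metadata_json TEXT)")
conn.execute(
    "INSERT INTO traces (metadata_json) VALUES (?)",
    (json.dumps({"prompt_name": "qa", "prompt_version": "v1"}),),
)
row = conn.execute(
    "SELECT id, metadata_json FROM traces ORDER BY id DESC LIMIT 1"
).fetchone()
metadata = json.loads(row[1])
print(metadata["prompt_version"])  # v1
```

Once decoded, fields like the prompt version become filterable, which is what makes trace search useful.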
Step 4: Ask for structured output (and keep it)
```python
from pydantic import BaseModel
from kantan_agents import Agent

class Summary(BaseModel):
    title: str
    bullets: list[str]

agent = Agent(
    name="summarizer",
    instructions="Summarize in a title and 2 bullets.",
    output_type=Summary,
    output_dest="summary_json",
)
context = agent.run("Explain why tracing helps teams.")
print(context["summary_json"]["title"])
```
Step 5: Async in ASGI (client injection)
Use get_async_llm_client() to inject an AsyncOpenAI client into the Agents SDK:

```python
from kantan_llm import get_async_llm_client
from kantan_agents import Agent

# await requires an async context (e.g. an ASGI handler or asyncio.run)
bundle = get_async_llm_client("gpt-5-mini")
agent = Agent(name="async-agent", instructions="Say hi.", model=bundle)
context = await agent.run_async("Hello")
print(context["result"].final_output)
```
Docs
- docs/concept.md
- docs/spec.md
- docs/architecture.md
- docs/plan.md
- docs/tutorial.md
- docs/usage.md
File details
Details for the file kantan_agents-0.1.3.tar.gz.
File metadata
- Download URL: kantan_agents-0.1.3.tar.gz
- Upload date:
- Size: 17.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `370eb4ee713024028b7242d7020f228e6b0f1532fa80dd0273b40a2d1399e9b9` |
| MD5 | `bf35346e12b8a41c3237464a502d3303` |
| BLAKE2b-256 | `4436f6204f5f32c8814dc6920b6d73fcd8d00713ce9e070ea3008e4eacfe2511` |
File details
Details for the file kantan_agents-0.1.3-py3-none-any.whl.
File metadata
- Download URL: kantan_agents-0.1.3-py3-none-any.whl
- Upload date:
- Size: 12.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `8056606beacb97db52b1b154bf0b7dc6068377177c7aae29af861e3f4ca26f42` |
| MD5 | `ff7f91003847f2d551edd03e99b367d1` |
| BLAKE2b-256 | `2ab3c7b370b48e344150f161b03c71cbfc0b5e0aa2d4aeecf293f0eac9258423` |