A Python library for creating agents.
exagent
A small Python library for building LLM agents. Provides a minimal set of building blocks — tools, an agent loop, streaming, and observability hooks — without pulling in a large framework.
Works with OpenAI and Anthropic models out of the box.
Note: This project is under active development. APIs may change between versions.
Features
- Tool calling with a `@tool` decorator that infers JSON schema from type hints
- Multi-step agent loop that chains tool calls automatically until the model is done
- Streaming or non-streaming — same API, flip a boolean
- Observability hooks for inspecting tool calls and per-iteration responses
- Provider-agnostic — OpenAI and Anthropic supported with a unified interface
- Small surface area: one `Agent` class, one `@tool` decorator, a handful of events
Installation
pip install exagent[openai] # OpenAI only
pip install exagent[anthropic] # Anthropic only
pip install exagent[all] # both providers
Requires Python 3.10+.
Set your API key in the environment (or a .env file in your working directory):
export OPENAI_API_KEY=sk-...
# or
export ANTHROPIC_API_KEY=sk-ant-...
Quick start
from exagent import Agent, tool
@tool
def get_weather(city: str, units: str = "celsius") -> str:
"""Return current weather for a city."""
# Stand-in for a real API call
return f"{city}: 18°C, sunny"
class WeatherAgent(Agent):
def __init__(self):
self.system_description = "You are a helpful weather assistant."
self.set_model("openai", "gpt-4.1-mini")
self.add_tool(get_weather)
super().__init__()
agent = WeatherAgent()
answer = agent.run("What's the weather in Paris?")
print(answer)
That's it. The agent will:
- Send your prompt to the model along with the tool definition
- Receive a tool call request
- Execute the handler
- Feed the result back to the model
- Return the final text response
Defining tools
Any function decorated with @tool becomes a tool the model can call. The JSON schema is inferred from type hints; the description comes from the docstring.
from exagent import tool
@tool
def search_products(query: str, max_results: int = 10) -> list:
"""Search the product catalog.
Returns a list of matching products.
"""
...
Supported type hints: str, int, float, bool, list[T], dict, Optional[T], Union[T, None].
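The inference step can be approximated in plain Python. The sketch below is an illustration of the idea (not exagent's actual implementation), mapping type hints and defaults to a JSON-schema-like dict:

```python
import inspect
import typing

# Rough mapping from Python annotations to JSON-schema type names.
# Illustrative only; exagent's real inference may differ.
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean",
            list: "array", dict: "object"}

def infer_schema(fn):
    """Build a JSON-schema-like dict from a function's signature."""
    hints = typing.get_type_hints(fn)
    hints.pop("return", None)
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, hint in hints.items():
        origin = typing.get_origin(hint) or hint
        # Unwrap Optional[T] / Union[T, None] to the inner type.
        if origin is typing.Union:
            args = [a for a in typing.get_args(hint) if a is not type(None)]
            origin = typing.get_origin(args[0]) or args[0]
        props[name] = {"type": TYPE_MAP.get(origin, "string")}
        # Parameters without a default are required.
        if sig.parameters[name].default is inspect.Parameter.empty:
            required.append(name)
    return {"type": "object", "properties": props, "required": required}

def search_products(query: str, max_results: int = 10) -> list:
    """Search the product catalog."""

schema = infer_schema(search_products)
```

Note how the default on `max_results` keeps it out of `required` while `query` stays mandatory.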
Override the name or description if needed:
@tool(name="lookup_price", description="Get the current price of a product by SKU.")
def price(sku: str) -> float:
...
For tools that need full control over the schema, construct a Tool directly:
from exagent import Tool
my_tool = Tool(
name="custom",
description="...",
parameters={"type": "object", "properties": {...}, "required": [...]},
handler=lambda **kwargs: ...,
)
agent.add_tool(my_tool)
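With manual construction, dispatch is just keyword expansion of the model's arguments dict into the handler. A minimal stand-in class (hypothetical, not exagent's own `Tool`) makes that contract concrete:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class MiniTool:
    """Illustrative stand-in for a Tool: name, description,
    JSON-schema parameters, and a handler called with keyword arguments."""
    name: str
    description: str
    parameters: dict
    handler: Callable[..., Any]

    def run(self, arguments: dict) -> Any:
        # The agent loop would pass the model's JSON arguments here.
        return self.handler(**arguments)

price_tool = MiniTool(
    name="lookup_price",
    description="Get the current price of a product by SKU.",
    parameters={"type": "object",
                "properties": {"sku": {"type": "string"}},
                "required": ["sku"]},
    handler=lambda sku: {"SKU-42": 19.99}.get(sku, 0.0),
)

result = price_tool.run({"sku": "SKU-42"})
```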
The agent loop
agent.run(prompt) drives a loop that handles multi-step tool chains automatically. Each iteration:
- Sends the full conversation history + registered tools to the model
- If the model requests tools, executes them and feeds results back as a user turn
- If the model replies with plain text, returns it as the final answer
The loop stops when the model stops requesting tools, or when max_iterations (default 10) is reached.
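The loop itself is compact. A self-contained sketch with a stubbed model in place of a real provider call (all names here are illustrative, not exagent internals):

```python
# Stub "model": first requests a tool, then answers in plain text
# once it sees a tool result in the history.
def stub_model(history):
    if not any(m["role"] == "tool" for m in history):
        return {"text": None,
                "tool_calls": [{"name": "get_weather",
                                "args": {"city": "Paris"}}]}
    return {"text": "It's 18°C and sunny in Paris.", "tool_calls": []}

TOOLS = {"get_weather": lambda city: f"{city}: 18°C, sunny"}

def run(prompt, max_iterations=10):
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_iterations):
        response = stub_model(history)
        if not response["tool_calls"]:
            # Plain text: this is the final answer.
            return response["text"]
        for call in response["tool_calls"]:
            result = TOOLS[call["name"]](**call["args"])
            # Feed the tool result back into the conversation.
            history.append({"role": "tool", "content": result})
    raise RuntimeError("max_iterations reached")

answer = run("What's the weather in Paris?")
```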
Multi-step chaining
Tools chain naturally because each iteration sees the full history, including previous tool results. Example with three tools that depend on each other:
@tool
def find_user(email: str) -> dict:
"""Look up a user by email."""
...
@tool
def list_orders(user_id: str) -> list:
"""List all orders for a user."""
...
@tool
def get_order_status(order_id: str) -> dict:
"""Get the current status of an order."""
...
agent.add_tools([find_user, list_orders, get_order_status])
result = agent.run("What's the status of alice@example.com's most recent order?")
The agent will call find_user → list_orders (using the user ID from step 1) → get_order_status (using the order ID from step 2) → then reply with a natural-language answer. The model decides the chain; the library just shuttles data between steps.
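That chaining behavior can be simulated end to end with a scripted stand-in for the model that issues one dependent call per turn (everything below is illustrative stub code, not exagent's API):

```python
# Three stub handlers with data dependencies between them.
TOOLS = {
    "find_user": lambda email: {"user_id": "u-7"},
    "list_orders": lambda user_id: [{"order_id": "o-99"}],
    "get_order_status": lambda order_id: {"status": "shipped"},
}

def scripted_model(results):
    """Stand-in 'model' that plans the next call from earlier results."""
    if "find_user" not in results:
        return ("find_user", {"email": "alice@example.com"})
    if "list_orders" not in results:
        return ("list_orders", {"user_id": results["find_user"]["user_id"]})
    if "get_order_status" not in results:
        order = results["list_orders"][0]
        return ("get_order_status", {"order_id": order["order_id"]})
    return None  # done: would now reply in natural language

def run_chain():
    results = {}
    while (step := scripted_model(results)) is not None:
        name, args = step
        results[name] = TOOLS[name](**args)
    return results["get_order_status"]["status"]

status = run_chain()
```

Each step's output feeds the next step's arguments, which is exactly the data shuttling the loop performs.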
Two entry points: run() and stream()
agent.run(prompt) uses the provider's non-streaming API — one blocking request per turn — and returns the final text as a string:
text = agent.run("Summarize these docs")
agent.stream(prompt) uses the provider's streaming API and yields events as they arrive — ideal for showing live progress in a CLI or UI:
for event in agent.stream("Summarize these docs"):
if event["type"] == "text_delta":
print(event["text"], end="", flush=True)
elif event["type"] == "done":
print()
Both methods drive the same tool-calling loop. The only difference is how you consume the output.
Event types (from stream())
| Event | When | Payload |
|---|---|---|
| `text_delta` | As tokens arrive | `text: str` |
| `tool_call` | When the model finalizes a tool call | `tool_call: ToolCall` |
| `tool_result` | After a tool handler runs | `id`, `name`, `content`, `is_error` |
| `done` | Loop complete | `text: str` (final assistant text) |
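A consumer typically switches on `type` and accumulates deltas. A self-contained sketch with a hand-rolled event generator (the events are faked; the shapes match the stream events above):

```python
def fake_stream():
    """Yield events shaped like stream() output (simulated here)."""
    yield {"type": "tool_call", "tool_call": {"name": "get_weather"}}
    yield {"type": "tool_result", "id": "1", "name": "get_weather",
           "content": "Paris: 18°C, sunny", "is_error": False}
    for chunk in ["It's ", "sunny ", "in Paris."]:
        yield {"type": "text_delta", "text": chunk}
    yield {"type": "done", "text": "It's sunny in Paris."}

def consume(events):
    """Accumulate text_delta chunks; return (streamed_text, final_text)."""
    parts, final = [], None
    for event in events:
        if event["type"] == "text_delta":
            parts.append(event["text"])
        elif event["type"] == "done":
            final = event["text"]
    return "".join(parts), final

streamed, final = consume(fake_stream())
```

The concatenated deltas and the `done` payload should agree, which makes `done` a convenient place to grab the full answer after streaming.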
Observability hooks
Both run() and stream() accept two optional callbacks for visibility into the loop:
def log_tool(tc):
print(f"→ {tc.name}({tc.input})")
def log_iteration(i, response):
print(f"[iter {i}] text={response.text[:60]!r} tools={len(response.tool_calls)}")
agent.run(
"Check inventory for SKU-42 and reorder if below 10.",
on_tool_call=log_tool,
on_iteration=log_iteration,
)
- `on_tool_call(tool_call)` fires once per tool call, just before the handler runs
- `on_iteration(iteration, response)` fires at the end of each model turn with the full `ProviderResponse` (text, tool_calls, stop_reason). `iteration` is 1-indexed.
Hooks work in both streaming and non-streaming modes.
Providers
Switch providers via set_model(provider, model_name). Any model string the underlying SDK accepts works:
self.set_model("openai", "gpt-4.1-mini")
self.set_model("openai", "gpt-4.1")
self.set_model("anthropic", "claude-sonnet-4-5")
self.set_model("anthropic", "claude-opus-4-5")
Provider-specific keyword arguments pass through to the SDK client:
self.set_model("anthropic", "claude-sonnet-4-5", max_tokens=4096, api_key="...")
Configuration patterns
Subclassing (for reusable agents)
class SupportAgent(Agent):
def __init__(self):
self.system_description = "You are a customer support assistant."
self.set_model("openai", "gpt-4.1-mini")
self.add_tools([find_user, list_orders, get_order_status])
super().__init__()
agent = SupportAgent()
Inline (for one-off use)
agent = Agent()
agent.system_description = "You are a helpful assistant."
agent.set_model("openai", "gpt-4.1-mini")
agent.add_tool(my_tool)
# Re-run post-init to rebuild the system prompt with tools/skills registered:
Agent.__init__(agent)
Skills (optional)
Skills are markdown files with YAML frontmatter that describe named capabilities:
---
name: writing
description: Use this skill when writing clearly for developers.
---
Additional body content here.
Load them onto an agent:
self.load_system_skill("skills/writing.md")
self.load_system_skills(["skills/writing.md", "skills/debugging.md"])
The skill's name and description are injected into the system prompt as a menu of capabilities the model should apply when relevant.
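For reference, the frontmatter format is the standard `---`-delimited block. A minimal parser sketch (hypothetical; exagent's loader may work differently) that pulls out `name` and `description` without a YAML dependency:

```python
def parse_skill(text):
    """Split a skill file into simple key: value frontmatter and a body."""
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

skill = """---
name: writing
description: Use this skill when writing clearly for developers.
---
Additional body content here."""

meta, body = parse_skill(skill)
```

A real loader would want a proper YAML parser for nested values; simple `key: value` pairs are enough for the fields shown here.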
API reference
Agent
- `agent.set_model(provider, model, **kwargs)` — configure the LLM
- `agent.add_tool(tool)` / `agent.add_tools([...])` — register tools (accepts `Tool` instances or raw functions)
- `agent.load_system_skill(path)` / `agent.load_system_skills([paths])` — register skills from disk
- `agent.run(prompt, max_iterations=10, on_tool_call=None, on_iteration=None) -> str` — drive the loop with the non-streaming provider API and return the final text
- `agent.stream(prompt, max_iterations=10, on_tool_call=None, on_iteration=None) -> Iterator[dict]` — drive the loop with the streaming provider API and yield events
- `agent.chat_history: list` — full conversation history including tool calls and results
@tool decorator
@tool
@tool(name="custom_name", description="custom description")
Tool
- `Tool(name, description, parameters, handler)` — manual construction
- `tool.run(arguments: dict)` — invoke with a dict of arguments
- `tool.to_anthropic()` / `tool.to_openai()` — provider-specific tool definitions
Event shapes
{"type": "text_delta", "text": str}
{"type": "tool_call", "tool_call": ToolCall}
{"type": "tool_result", "id": str, "name": str, "content": str, "is_error": bool}
{"type": "done", "text": str}
License
MIT