Async-first base harness for building OpenAI-powered agents.
Project description
Simple Agent Base
simple-agent-base is a small async-first Python package for building OpenAI Responses API agents.
It gives you the pieces most small agent projects need: a request/tool loop, local Python tools, structured outputs, streaming events, chat history, image and file input, MCP tool bridging, and sync wrappers.
It is intentionally not a full agent framework. It does not include planning, retrieval, memory systems, workflow orchestration, or multi-agent primitives.
Quick Start
Requirements:
- Python 3.12+
- An OpenAI API key
- uv or pip
1. Install
From PyPI:
python -m pip install simple-agent-base
With uv:
uv add simple-agent-base
From GitHub:
python -m pip install "git+https://github.com/dzintt/simple-agent-base.git"
From a local checkout:
uv sync
2. Configure
export OPENAI_API_KEY="your-key"
export OPENAI_MODEL="gpt-5.5"
You can also pass model directly through AgentConfig.
3. Run an agent
import asyncio
from simple_agent_base import Agent, AgentConfig, tool
@tool
async def ping(message: str) -> str:
    """Echo a message back."""
    return f"pong: {message}"

async def main() -> None:
    async with Agent(
        config=AgentConfig(model="gpt-5.5"),
        tools=[ping],
        system_prompt="You are concise.",
    ) as agent:
        result = await agent.run("Call ping with hello and tell me the result.")
        print(result.output_text)

asyncio.run(main())
Agent supports async with (and with for sync code) so cleanup happens automatically. If you prefer explicit lifecycle management, await agent.aclose() and agent.close() still work.
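For example, a minimal sketch of the explicit style, using the aclose() call mentioned above:

import asyncio

from simple_agent_base import Agent, AgentConfig

async def main() -> None:
    # Explicit lifecycle instead of `async with`.
    agent = Agent(config=AgentConfig(model="gpt-5.5"))
    try:
        result = await agent.run("Say hello.")
        print(result.output_text)
    finally:
        # Always clean up, even if run() raises.
        await agent.aclose()

asyncio.run(main())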
Core API
Most projects use these exports:
- Agent
- AgentConfig
- ChatSession
- ChatMessage
- TextPart, ImagePart, FilePart
- ToolRegistry
- tool
- MCPServer
Common calls:
result = await agent.run("Say hello.")
async for event in agent.stream("Explain async IO."):
    ...
chat = agent.chat(system_prompt="Be brief.")
await chat.run("My name is Anson.")
await chat.run("What is my name?")
AgentRunResult includes:
- output_text
- output_data (for structured output)
- tool_results
- mcp_calls
- reasoning_summary
- response_id
- usage and usage_by_response
- raw_responses
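As an illustrative sketch of reading a result (field names are from the list above; the exact shape of the usage object is an assumption to check against your version):

result = await agent.run("Call ping with hello and tell me the result.")

print(result.output_text)          # final assistant text
print(result.response_id)          # id of the last provider response
for call in result.tool_results:   # one entry per executed local tool call
    print(call)
print(result.usage)                # aggregate token usage across the run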
How It Works
- Agent.run(...) or Agent.stream(...) receives a string or message list.
- The input is converted to Responses API items.
- A convenience system_prompt is sent as a developer message.
- The OpenAI provider sends the request.
- If the model returns tool calls, local or MCP tools run and their outputs are appended.
- The loop repeats until the model returns a final response or max_turns is reached.
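The sketch below is a toy, self-contained simulation of that loop, not the library's actual code: a fake model asks for one tool call, the "tool" runs locally, its output is appended, and the next turn produces the final answer.

import asyncio

async def fake_model(items: list[dict]) -> dict:
    # First turn: request a tool call; once a tool output is present, finish.
    if not any(item["type"] == "tool_output" for item in items):
        return {"type": "tool_call", "name": "ping", "arguments": {"message": "hello"}}
    return {"type": "final", "text": "The tool said: pong: hello"}

async def run_loop(user_input: str, max_turns: int = 8) -> str:
    items = [{"type": "user", "text": user_input}]
    for _ in range(max_turns):
        response = await fake_model(items)
        if response["type"] == "final":
            return response["text"]                                 # final answer ends the loop
        output = f"pong: {response['arguments']['message']}"        # run the local tool
        items.append({"type": "tool_output", "text": output})       # feed the result back
    raise RuntimeError("max_turns reached")

print(asyncio.run(run_loop("Call ping with hello.")))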
Tools
Use @tool on async or sync Python functions:
from simple_agent_base import tool
@tool
def lookup_user(user_id: int) -> str:
"""Fetch a user record."""
return '{"id": 1, "name": "Ada"}'
Tool parameters must have type annotations. *args and **kwargs are rejected. The first docstring line becomes the tool description unless you override it:
@tool(name="lookup_user", description="Fetch a user record.")
def get_user(user_id: int) -> str:
    return '{"id": 1, "name": "Ada"}'
Parallel same-turn tool execution is opt-in:
agent = Agent(
    config=AgentConfig(model="gpt-5.5", parallel_tool_calls=True),
    tools=[get_weather, get_news],
)
Only enable it for independent tools.
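For illustration, get_weather and get_news in the example above could be any two independent tools; hypothetical implementations might look like this:

import asyncio

from simple_agent_base import tool

@tool
async def get_weather(city: str) -> str:
    """Return a short weather summary for a city."""
    await asyncio.sleep(0.1)  # stand-in for an independent network call
    return f"Sunny in {city}"

@tool
async def get_news(topic: str) -> str:
    """Return a one-line headline for a topic."""
    await asyncio.sleep(0.1)  # can run concurrently with get_weather
    return f"Nothing new about {topic}"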
Set tool_timeout when each local or MCP tool call should have a maximum runtime:
agent = Agent(
    config=AgentConfig(model="gpt-5.5", tool_timeout=30.0),
    tools=[lookup_user],
)
Timeouts raise ToolExecutionError. For sync tools, the timeout stops waiting for the result, but Python cannot forcibly stop the worker thread.
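A sketch of handling a timeout; the import location of ToolExecutionError (the package's top level) is an assumption to verify:

from simple_agent_base import ToolExecutionError  # assumed export location

try:
    result = await agent.run("Look up user 1.")
except ToolExecutionError as exc:
    # Raised when a tool call exceeds tool_timeout (or otherwise fails).
    print(f"tool failed: {exc}")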
Hosted Tools
Some providers (notably OpenAI) execute tools server-side and return the result directly in the response. These do not have a Python implementation — you just declare them and the provider handles execution.
agent = Agent(
    config=AgentConfig(model="gpt-5.5"),
    hosted_tools=[{"type": "web_search"}],
)
result = await agent.run("What's new in Python 3.13?")
print(result.output_text)
Hosted tool entries are passed through to the provider unchanged. Common types on the OpenAI Responses API include web_search, file_search, code_interpreter, image_generation, and computer_use.
Support depends on the provider. Real OpenAI supports the full set; OpenAI-compatible proxies and self-hosted servers usually support a subset or none. If your provider rejects a tool type, the error surfaces from the provider, not from this library.
Hosted tools do not appear in result.tool_results, but streaming can emit hosted_tool_call_started, hosted_tool_call_updated, and hosted_tool_call_completed events for supported provider-side calls.
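For example, a sketch that watches for hosted tool activity while streaming (only event.type, event.delta, and event.result are shown in this document; inspect the event objects for anything else):

async for event in agent.stream("What's new in Python 3.13?"):
    if event.type == "hosted_tool_call_started":
        print("hosted tool call started")
    elif event.type == "hosted_tool_call_completed":
        print("hosted tool call completed")
    elif event.type == "text_delta":
        print(event.delta, end="")
    elif event.type == "completed":
        print()
        print(event.result.output_text)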
Streaming
async for event in agent.stream("Explain async IO in one sentence."):
    if event.type == "text_delta":
        print(event.delta, end="")
    elif event.type == "completed":
        print(event.result.output_text)
Event types include:
- text_delta
- reasoning_delta
- tool_arguments_delta
- hosted_tool_call_started, hosted_tool_call_updated, hosted_tool_call_completed
- tool_call_started, tool_call_completed
- mcp_approval_requested, mcp_call_started, mcp_call_completed
- completed
Structured Output
Pass a Pydantic model as response_model:
from pydantic import BaseModel
class Person(BaseModel):
    name: str
    age: int
result = await agent.run(
"Extract the person from: Sarah is 29 years old.",
response_model=Person,
)
print(result.output_data)
Structured output works with normal runs, streaming, and tool calls.
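For example, a sketch that combines streaming with a response model, assuming stream() accepts response_model the same way run() does and that output_data is populated on the final result:

async for event in agent.stream(
    "Extract the person from: Sarah is 29 years old.",
    response_model=Person,
):
    if event.type == "completed":
        person = event.result.output_data  # parsed Person instance
        print(person.name, person.age)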
Chat Sessions
ChatSession keeps in-memory history:
chat = agent.chat(system_prompt="You are concise.")
await chat.run("My name is Anson.")
result = await chat.run("What is my name?")
print(result.output_text)
print(chat.history)
Snapshots can be stored and restored:
payload = chat.export()
restored = agent.chat_from_snapshot(payload)
Snapshots include conversation items and the chat-level system_prompt. They do not include model config, tools, or provider settings.
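A sketch of persisting a snapshot to disk and restoring it later, assuming the exported payload is JSON-serializable:

import json
from pathlib import Path

# Save the conversation (items plus the chat-level system_prompt).
Path("chat_snapshot.json").write_text(json.dumps(chat.export()))

# Later, rebuild a session with the same history on a (re)created agent.
payload = json.loads(Path("chat_snapshot.json").read_text())
restored = agent.chat_from_snapshot(payload)
result = await restored.run("Where were we?")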
Images and Files
Use content parts when a message needs more than plain text:
from simple_agent_base import ChatMessage, ImagePart, TextPart
result = await agent.run(
    [
        ChatMessage(
            role="user",
            content=[
                TextPart("Describe this image."),
                ImagePart.from_file("cat.png"),
            ],
        )
    ]
)
Use FilePart.from_file(...) for local documents or from_url(...) for hosted files. Local helpers convert files to Base64 data URLs; they do not use the OpenAI Files API.
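For example, a sketch of a document question using FilePart; the exact from_url(...) signature is an assumption based on the description above:

from simple_agent_base import ChatMessage, FilePart, TextPart

result = await agent.run(
    [
        ChatMessage(
            role="user",
            content=[
                TextPart("Summarize this report in two sentences."),
                FilePart.from_file("report.pdf"),  # sent as a base64 data URL
                # FilePart.from_url("https://example.com/report.pdf"),  # hosted alternative
            ],
        )
    ]
)
print(result.output_text)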
MCP Tools
Client-side MCP servers can be exposed to the model as function tools:
import sys
from pathlib import Path
from simple_agent_base import Agent, AgentConfig, MCPServer
server_path = Path("tests/fixtures/mcp_demo_server.py").resolve()
agent = Agent(
    config=AgentConfig(model="gpt-5.5"),
    mcp_servers=[
        MCPServer.stdio(
            name="demo",
            command=sys.executable,
            args=[str(server_path), "stdio"],
            require_approval=False,
        )
    ],
)
Discovered MCP tools are namespaced as server__tool. Use allowed_tools to expose only specific tools, and set require_approval=True with an approval_handler when calls need local confirmation.
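A sketch of both options. Where allowed_tools and approval_handler are passed (on the server entry here) and the handler signature (an async callable that receives the pending call and returns a bool) are assumptions to check against the package:

async def approve(call) -> bool:
    # Only allow the demo server's echo tool; the shape of `call` is assumed.
    return getattr(call, "name", "") == "demo__echo"

agent = Agent(
    config=AgentConfig(model="gpt-5.5"),
    mcp_servers=[
        MCPServer.stdio(
            name="demo",
            command=sys.executable,
            args=[str(server_path), "stdio"],
            allowed_tools=["echo"],      # expose only this tool
            require_approval=True,
            approval_handler=approve,    # called before each MCP tool run
        )
    ],
)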
Supported transports:
- MCPServer.stdio(...)
- MCPServer.http(...)
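A sketch of the HTTP transport; the parameter names (url, headers) and the endpoint are assumptions:

agent = Agent(
    config=AgentConfig(model="gpt-5.5"),
    mcp_servers=[
        MCPServer.http(
            name="docs",
            url="https://mcp.example.com/mcp",        # hypothetical endpoint
            headers={"Authorization": "Bearer ..."},  # if the server requires auth
        )
    ],
)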
Configuration
AgentConfig(
model="gpt-5.5",
api_key=None,
base_url=None,
max_turns=8,
parallel_tool_calls=False,
reasoning_effort=None,
temperature=None,
timeout=None,
tool_timeout=None,
)
Environment variables:
- OPENAI_API_KEY
- OPENAI_MODEL
- OPENAI_BASE_URL
- OPENAI_REASONING_EFFORT
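A sketch of wiring those variables through AgentConfig explicitly, for cases where you want the values in hand rather than relying on the environment lookups listed above:

import os

from simple_agent_base import Agent, AgentConfig

config = AgentConfig(
    model=os.environ.get("OPENAI_MODEL", "gpt-5.5"),
    api_key=os.environ.get("OPENAI_API_KEY"),
    base_url=os.environ.get("OPENAI_BASE_URL"),  # e.g. an OpenAI-compatible proxy
)
agent = Agent(config=config)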
Sync Usage
The package is async-first, but synchronous programs can use:
with Agent(config=AgentConfig(model="gpt-5.5")) as agent:
    result = agent.run_sync("Say hello.")
    print(result.output_text)
agent.close() is also available if you'd rather manage the lifecycle yourself.
Do not call run_sync() or stream_sync() from inside an existing event loop.
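A sketch of the synchronous streaming variant mentioned above, assuming stream_sync() yields the same events as stream():

with Agent(config=AgentConfig(model="gpt-5.5")) as agent:
    for event in agent.stream_sync("Explain async IO in one sentence."):
        if event.type == "text_delta":
            print(event.delta, end="", flush=True)
        elif event.type == "completed":
            print()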
Examples and Docs
Start with:
- examples/basic_agent.py
- examples/structured_output.py
- examples/streaming.py
- examples/chat_session.py
- examples/mcp_server.py
Development
Install development dependencies:
uv sync --dev
Run tests:
uv run pytest
Run the live provider check:
uv run python scripts/live_e2e_test.py
Unit tests do not require an API key. The live script does.
Project details
Download files
File details
Details for the file simple_agent_base-0.1.0.tar.gz.
File metadata
- Download URL: simple_agent_base-0.1.0.tar.gz
- Upload date:
- Size: 142.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 48bc361611234c9bb4ec6bfa0a77e7713bc70370139d035c303ed663bb2cc6ca |
| MD5 | 20919f48c471d398d6ee4770974b4203 |
| BLAKE2b-256 | e93c17bc6596bde30e5b9cb8b0b9c2d37adaf456bafabbb5b6cb95bf959a64a1 |
Provenance
The following attestation bundles were made for simple_agent_base-0.1.0.tar.gz:
Publisher: release.yml on dzintt/simple-agent-base

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: simple_agent_base-0.1.0.tar.gz
- Subject digest: 48bc361611234c9bb4ec6bfa0a77e7713bc70370139d035c303ed663bb2cc6ca
- Sigstore transparency entry: 1417638154
- Permalink: dzintt/simple-agent-base@5dfe419973c4c999924e88be99ab1cd389fcb967
- Branch / Tag: refs/tags/v0.1.0
- Owner: https://github.com/dzintt
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@5dfe419973c4c999924e88be99ab1cd389fcb967
- Trigger Event: release
File details
Details for the file simple_agent_base-0.1.0-py3-none-any.whl.
File metadata
- Download URL: simple_agent_base-0.1.0-py3-none-any.whl
- Upload date:
- Size: 29.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9653be411460a9f3886a5dd56699a4a46ea3a45ca4efa50cf19c3c58fdc1c5bb |
| MD5 | 44c15af671ad495e177f6825cd753870 |
| BLAKE2b-256 | c45047722e226bf96f5e778aafe446f0b5b7d29d0512ca12664f8fd5650f1fc8 |
Provenance
The following attestation bundles were made for simple_agent_base-0.1.0-py3-none-any.whl:
Publisher: release.yml on dzintt/simple-agent-base

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: simple_agent_base-0.1.0-py3-none-any.whl
- Subject digest: 9653be411460a9f3886a5dd56699a4a46ea3a45ca4efa50cf19c3c58fdc1c5bb
- Sigstore transparency entry: 1417638165
- Permalink: dzintt/simple-agent-base@5dfe419973c4c999924e88be99ab1cd389fcb967
- Branch / Tag: refs/tags/v0.1.0
- Owner: https://github.com/dzintt
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@5dfe419973c4c999924e88be99ab1cd389fcb967
- Trigger Event: release