# all-in-agent
A minimal, universal agent framework for Python. Zero mandatory dependencies.
```bash
pip install all-in-agent
pip install "all-in-agent[openai]"     # OpenAI GPT
pip install "all-in-agent[anthropic]"  # Anthropic Claude
pip install "all-in-agent[all]"        # all optional deps
```
## Why all-in-agent
- 🪶 Zero dependencies — pure stdlib core; adapters are opt-in extras
- 🔌 Pluggable everything — swap LLM adapter, tools, history, or orchestration without touching other parts
- 🔍 Transparent by default — append-only NDJSON event log; every run is replayable
- 🛡️ Safe by default — dangerous tools require explicit approval; budget stops runaway agents
## Quick Start

```bash
pip install "all-in-agent[openai]"  # or [anthropic]
```

```python
from all_in_agent import Agent, OpenAIAdapter, ToolRegistry, BUILTIN_TOOLS

llm = OpenAIAdapter()  # reads OPENAI_API_KEY from env
tools = ToolRegistry()
for t in BUILTIN_TOOLS:  # read_file, write_file, bash
    tools.register(t)

agent = Agent(llm=llm, tools=tools)
result = agent.run_sync("Summarize README.md in three bullet points")
print(result["final_answer"])
```

In a Jupyter notebook or other async context, call `await agent.run(goal)` directly.
## Core Concepts

### Node / Flow
Everything is a node. A flow is a graph of nodes.
```python
from all_in_agent import BaseNode, Flow

class MyNode(BaseNode):
    async def prep(self, shared: dict):
        return shared["input"]

    async def exec(self, prep_result):
        return prep_result.upper()

    async def post(self, shared: dict, exec_result) -> str:
        shared["output"] = exec_result
        return "default"  # action name → next node

node_a = MyNode()
node_b = MyNode()
node_a >> node_b  # default edge
# or: (node_a - "custom_action") >> node_b

flow = Flow()
await flow.run(shared={}, start=node_a)
```
**State contract:** all inter-node state lives in the `shared` dict. Node instance fields hold only configuration.
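The action-name routing described above can be sketched in plain Python. This is a toy runner under stated assumptions, not the library's actual `Flow` implementation; the `successors` dict and the prep/exec/post method names follow the example above:

```python
import asyncio

class UpperNode:
    """Toy node following the prep/exec/post contract from the example."""
    def __init__(self):
        self.successors = {}  # action name -> next node (hypothetical field)

    async def prep(self, shared):
        return shared["input"]

    async def exec(self, prep_result):
        return prep_result.upper()

    async def post(self, shared, exec_result):
        shared["output"] = exec_result
        return "default"

async def run_flow(shared, start):
    """Walk the graph: run each node, then follow the edge named by post()."""
    node = start
    while node is not None:
        prep_result = await node.prep(shared)
        exec_result = await node.exec(prep_result)
        action = await node.post(shared, exec_result)
        node = node.successors.get(action)  # stop when no matching edge
    return shared

shared = {"input": "hello"}
asyncio.run(run_flow(shared, UpperNode()))
print(shared["output"])  # -> HELLO
```

The key design point the sketch illustrates: because edges are looked up by the string returned from `post`, branching is just returning a different action name.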
### Budget & Loop Detection
```python
from all_in_agent import Budget

budget = Budget(
    max_llm_calls=40,
    max_tool_calls=80,
    max_wall_ms=1_800_000,     # 30 min wall-clock limit
    loop_same_action_limit=3,  # raise LoopDetectedError after 3 consecutive identical tool calls
)
agent = Agent(llm=llm, tools=tools, budget=budget)
```
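The enforcement idea behind these limits can be sketched as a simple counter guard. This is a toy illustration, not the real `Budget`/`Run` internals; the class and method names below are hypothetical:

```python
import time

class BudgetExceededError(Exception): pass
class LoopDetectedError(Exception): pass

class TinyBudget:
    """Toy budget: count LLM calls, check wall clock, detect repeated tool calls."""
    def __init__(self, max_llm_calls, max_wall_ms, loop_same_action_limit):
        self.max_llm_calls = max_llm_calls
        self.max_wall_ms = max_wall_ms
        self.loop_limit = loop_same_action_limit
        self.llm_calls = 0
        self.started = time.monotonic()
        self.last_action = None
        self.repeat_count = 0

    def charge_llm_call(self):
        self.llm_calls += 1
        if self.llm_calls > self.max_llm_calls:
            raise BudgetExceededError("max_llm_calls exceeded")
        if (time.monotonic() - self.started) * 1000 > self.max_wall_ms:
            raise BudgetExceededError("wall-clock limit exceeded")

    def observe_tool_call(self, name, args):
        action = (name, repr(sorted(args.items())))  # normalize for comparison
        if action == self.last_action:
            self.repeat_count += 1
        else:
            self.last_action, self.repeat_count = action, 1
        if self.repeat_count >= self.loop_limit:
            raise LoopDetectedError(f"{name} repeated {self.repeat_count} times")

budget = TinyBudget(max_llm_calls=2, max_wall_ms=60_000, loop_same_action_limit=3)
budget.charge_llm_call()
budget.observe_tool_call("read_file", {"path": "a.txt"})
```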
### Tool Registry
```python
from all_in_agent import Tool, ToolRegistry, SideEffectLevel, ToolResponse

async def my_tool(args: dict, run) -> ToolResponse:
    result = do_something(args["input"])
    return ToolResponse(status="success", content=result)

registry = ToolRegistry(
    approval_callback=my_approval_fn  # async (name, args) -> bool
)
registry.register(Tool(
    name="my_tool",
    description="Does something useful",
    input_schema={
        "type": "object",
        "properties": {"input": {"type": "string"}},
        "required": ["input"],
    },
    side_effect_level=SideEffectLevel.READ_ONLY,
    execute=my_tool,
))
```
`DANGEROUS` tools call `approval_callback` before executing. Install `jsonschema` for automatic argument validation.
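An approval callback can be as simple as an allowlist check. The only contract the text states is `async (name, args) -> bool`; everything else here (the allowlist, the `command` argument key) is a hypothetical example:

```python
import asyncio

APPROVED_BINARIES = {"ls", "cat", "grep"}  # hypothetical allowlist

async def my_approval_fn(name: str, args: dict) -> bool:
    """Approve bash calls only when the command starts with an allowlisted binary."""
    if name != "bash":
        return True  # non-bash tools pass through in this sketch
    command = args.get("command", "")
    return bool(command) and command.split()[0] in APPROVED_BINARIES

print(asyncio.run(my_approval_fn("bash", {"command": "ls -la"})))    # True
print(asyncio.run(my_approval_fn("bash", {"command": "rm -rf /"})))  # False
```

In production you might instead prompt a human, log the request, or apply a policy engine; the registry only needs the boolean back.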
### History & Compression
`HistoryManager` compresses conversation history when it exceeds `COMPRESS_THRESHOLD_TOKENS` (14,000 tokens). It keeps the 12 most recent turns and the 3 most recent tool results verbatim, then asks the LLM to summarize everything older into structured JSON (`facts` / `decisions` / `open_threads`).
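The split between "summarize" and "keep verbatim" can be sketched as a pure function over a list of turns. This is a toy model with a crude token estimate; the actual `HistoryManager` internals and token counting may differ:

```python
def split_for_compression(turns, threshold_tokens=14_000, keep_recent=12):
    """Return (to_summarize, to_keep): older turns go to the LLM summarizer,
    the `keep_recent` most recent turns stay verbatim."""
    def estimate_tokens(text):
        return len(text) // 4  # rough heuristic: ~4 characters per token

    total = sum(estimate_tokens(t) for t in turns)
    if total <= threshold_tokens:
        return [], turns  # under the threshold: nothing to compress
    return turns[:-keep_recent], turns[-keep_recent:]

# 20 turns of ~1,000 estimated tokens each -> over the 14,000-token threshold
turns = [f"turn {i}: " + "x" * 4000 for i in range(20)]
older, recent = split_for_compression(turns)
print(len(older), len(recent))  # 8 12
```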
### Event Store
Every run writes an append-only NDJSON log to `./runs/<run_id>/events.ndjson`:

```
{"run_id": "...", "event": "RUN_CREATED", "data": {...}, "ts": "..."}
{"run_id": "...", "event": "ASSISTANT_MESSAGE", "data": {...}, "ts": "..."}
{"run_id": "...", "event": "TOOL_RESULT", "data": {...}, "ts": "..."}
{"run_id": "...", "event": "RUN_STOPPED", "data": {"reason": "goal_met"}, "ts": "..."}
```
## Multi-Agent
```python
from all_in_agent import MessageBus, TaskManager, MessageEnvelope, Task

bus = MessageBus(run_dir="./runs/session_1")
tm = TaskManager(run_dir="./runs/session_1")

# coordinator creates tasks
task = await tm.create_task(goal="Analyze file X")

# worker claims and runs
available = await tm.get_available(agent_id="worker_1")
claimed = await tm.claim_task(available[0].task_id, "worker_1")

# agents communicate
await bus.send(MessageEnvelope(
    msg_id="...", run_id="...",
    from_agent="worker_1", to_agent="coordinator",
    msg_type="TASK_DONE", payload={"result": "..."}, ts="...",
))
```
`TaskManager` uses file-based locking (`fcntl` on Unix, a `.lock` file on Windows) for safe concurrent access. Tasks support dependency chains via `dependencies: list[str]`.
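Dependency chains mean a task becomes claimable only once every task it depends on is done. The selection rule can be sketched like this (a toy model; the `dependencies` field comes from the text above, the `status` values are hypothetical):

```python
def available_tasks(tasks):
    """Tasks claimable now: pending, with every dependency already done."""
    done = {t["task_id"] for t in tasks if t["status"] == "done"}
    return [
        t for t in tasks
        if t["status"] == "pending" and all(d in done for d in t["dependencies"])
    ]

tasks = [
    {"task_id": "t1", "status": "done",    "dependencies": []},
    {"task_id": "t2", "status": "pending", "dependencies": ["t1"]},
    {"task_id": "t3", "status": "pending", "dependencies": ["t2"]},  # blocked on t2
]
print([t["task_id"] for t in available_tasks(tasks)])  # ['t2']
```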
## LLM Adapters

| Adapter | Install extra | Environment variable |
|---|---|---|
| `OpenAIAdapter` | `all-in-agent[openai]` | `OPENAI_API_KEY` |
| `AnthropicAdapter` | `all-in-agent[anthropic]` | `ANTHROPIC_API_KEY` |
Both adapters retry on transient errors with exponential backoff + jitter.
```python
from all_in_agent import OpenAIAdapter, AnthropicAdapter

llm = OpenAIAdapter(model="gpt-4o-mini", max_retries=3)
llm = AnthropicAdapter(model="claude-sonnet-4-6", max_retries=3)
```
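"Exponential backoff + jitter" typically means doubling a base delay on each attempt and randomizing it to avoid synchronized retries. One common shape is "full jitter"; this is a generic sketch, not necessarily the adapters' exact schedule:

```python
import random

def backoff_delay(attempt, base=0.5, cap=30.0):
    """Full-jitter backoff: uniform in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

for attempt in range(4):
    ceiling = min(30.0, 0.5 * 2 ** attempt)
    print(f"attempt {attempt}: sleep up to {ceiling:.1f}s, e.g. {backoff_delay(attempt):.2f}s")
```

The cap keeps late retries bounded, and the randomization spreads out clients that all failed at the same moment.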
## Architecture

### 📁 Directory Structure
```text
all_in_agent/
├── core/
│   ├── node.py       BaseNode · Node · BatchNode
│   ├── flow.py       Flow (graph runner)
│   └── run.py        Run · Budget · BudgetExceededError · LoopDetectedError
├── adapters/
│   ├── base.py       LLMAdapter · LLMResponse · ToolCall · LLMError · ConfigError
│   ├── anthropic.py  AnthropicAdapter (exponential backoff, retry)
│   └── openai.py     OpenAIAdapter
├── tools/
│   ├── registry.py   ToolRegistry (approval callbacks, jsonschema validation)
│   └── builtin.py    read_file · write_file · bash
├── history/
│   ├── manager.py    HistoryManager (LLM-based compression)
│   └── store.py      FileEventStore (append-only NDJSON)
└── agents/
    ├── base.py       Agent · ReActNode · LLMCallNode · ToolDispatchNode
    └── multi.py      MessageBus · TaskManager · MessageEnvelope · Task · TaskStatus
```
## Package Naming

The PyPI package is `all-in-agent`, but the Python import name is `all_in_agent`:

```bash
pip install all-in-agent
```

```python
from all_in_agent import Agent  # import name uses underscores
```

The hyphen in the PyPI name isn't valid in Python identifiers, so the module name uses underscores.
## Design Goals
- Zero mandatory deps — pure stdlib core; adapters opt-in
- Small — ~120 LOC core loop, readable in one sitting
- Composable — every piece (Node, Tool, Adapter, History) is replaceable
- Safe by default — dangerous tools require approval; budget stops runaway agents
## Requirements

- Python 3.10+
- Optional: `anthropic`, `openai`, `jsonschema`

## License

MIT