yuuagents
A minimal Python agent runtime. Write a one-liner or deploy a persistent daemon — same codebase, two paths.
Agent = Persona + Tools + LLM
Quick links · SDK path · Service path · Flow concept · Built-in tools · Config reference
Two Ways to Run
| | SDK Path | Service Path |
|---|---|---|
| When to use | Embed in your code, notebooks, pipelines | Long-running tasks, background work, multi-agent |
| Entry point | `from yuuagents import LocalAgent` | `yagents up` |
| Persistence | No (ephemeral) | Yes (SQLite snapshots) |
| Docker tools | No | Yes (optional) |
| Daemon required | No | Yes |
Install
Requirements: Python 3.14+, and optionally Docker Engine for sandboxed tool execution.
```shell
pip install yuuagents
```

Optional extras:

```shell
pip install 'yuuagents[docker]'  # execute_bash, read_file, edit_file, delete_file
pip install 'yuuagents[web]'     # web_search (requires a Tavily API key)
pip install 'yuuagents[all]'     # everything
```
SDK Path (Local)
Run an agent in-process. No daemon, no Docker, no database.
30-second quickstart
```python
import asyncio

import yuullm
from yuuagents import run_once

async def main():
    llm = yuullm.YLLMClient(
        provider="openai",
        api_key_env="OPENAI_API_KEY",
        default_model="gpt-4o-mini",
    )
    result = await run_once("Summarise the Zen of Python.", llm=llm)
    print(result.output_text)

asyncio.run(main())
```
Stateful agent with streaming
```python
from uuid import uuid4

from yuuagents import AgentConfig, AgentContext, LocalRun, Session
from yuuagents.input import conversation_input_from_text

config = AgentConfig(agent_id="coder", llm=llm, system="You are a concise coding assistant.")
ctx = AgentContext(task_id=uuid4().hex, agent_id="coder", workdir=".")
session = Session(config=config, context=ctx)

agent_input = conversation_input_from_text("List the files in the current directory.")
session.start(agent_input)

run = LocalRun(session=session, input=agent_input)
async for step in run.step_iter():
    print(f"round {step.rounds} tokens={step.tokens}")

result = await run.result()
print(result.output_text)
```
With custom tools
```python
from datetime import datetime, timezone

import yuutools as yt
from yuuagents import run_once

@yt.tool(description="Return the current UTC time.")
async def now() -> str:
    return datetime.now(timezone.utc).isoformat()

result = await run_once("What time is it?", llm=llm, tools=[now])
print(result.output_text)
```
Multi-agent (delegate tool)
```python
from uuid import uuid4

import yuutools as yt
from yuuagents import AgentConfig, AgentContext, LocalAgentPool, LocalRun, Session
from yuuagents.input import conversation_input_from_text
from yuuagents.tools import get as get_builtin_tools

worker_config = AgentConfig(
    agent_id="worker", llm=llm, system="You are a specialist.",
    tools=yt.ToolManager(),
)
pool = LocalAgentPool(agents={"worker": worker_config})

orch_config = AgentConfig(
    agent_id="orchestrator", llm=llm, system="Delegate to worker.",
    tools=yt.ToolManager(get_builtin_tools(["delegate", "wait_background"])),
)
ctx = AgentContext(task_id=uuid4().hex, agent_id="orchestrator", workdir=".", pool=pool)
session = Session(config=orch_config, context=ctx)

agent_input = conversation_input_from_text("Do the task.")
session.start(agent_input)
result = await LocalRun(session=session, input=agent_input).result()
print(result.output_text)
```
Service Path (Daemon)
The daemon manages long-running tasks over a Unix socket, persists snapshots to SQLite, and optionally runs tools inside Docker containers.
Step 1 — Bootstrap
```shell
yagents install
```
Writes ~/.yagents/config.yaml, initialises the task database, and pulls the Docker runtime image (if Docker-backed tools are configured).
Step 2 — Start
```shell
yagents up -d    # background process or systemd user service
```
Step 3 — Run tasks
```shell
# Submit a task
yagents run --agent main --task "Refactor src/util.py to use pathlib"

# Check status
yagents list
yagents status <task_id>

# Read the output
yagents logs <task_id>

# Send a follow-up message to a running task
yagents input <task_id> "Focus on the read_text calls first."

# Cancel
yagents stop <task_id>
```
Full CLI reference
| Command | Description |
|---|---|
| `yagents install` | Bootstrap config, directories, database, Docker image |
| `yagents up [-d]` | Start daemon (`-d` = background / systemd) |
| `yagents down` | Stop daemon |
| `yagents run --agent <id> --task "..."` | Submit a task |
| `yagents list` | Human-readable task list |
| `yagents status <task_id>` | JSON status for one task |
| `yagents logs <task_id>` | Conversation history by role |
| `yagents input <task_id> "..."` | Send a message to a running task |
| `yagents stop <task_id>` | Cancel a running task |
| `yagents config` | Show current resolved config |
| `yagents config --overrides FILE` | Merge overrides and hot-reload |
| `yagents config --config FILE` | Replace config and hot-reload |
| `yagents trace ui` | Open the yuutrace observability UI |
| `yagents uninstall` | Remove all installed runtime state |
`yagents run` also accepts `--persona`, `--tools`, `--model`, `--container`, `--image`.
The Flow Abstraction
Everything that executes inside yuuagents is a Flow — a generic container that is observable, addressable, and cancellable.
```
Flow
├── stem        append-only event log (what happened)
├── mailbox     async message queue (what to do next)
├── children    list[Flow] (spawned sub-flows)
└── cancel()    propagates recursively (stop everything)
```
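The shape above can be sketched in a few lines of plain Python. This is an illustration of the concept only, not yuuagents' actual `Flow` class:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Flow:
    """Toy model of the Flow shape: event log, mailbox, children, cancel()."""
    name: str
    stem: list = field(default_factory=list)       # append-only event log
    mailbox: deque = field(default_factory=deque)  # pending messages
    children: list = field(default_factory=list)   # spawned sub-flows
    cancelled: bool = False

    def spawn(self, name: str) -> "Flow":
        """Create a child flow and record the event in the log."""
        child = Flow(name)
        self.children.append(child)
        self.stem.append(f"spawned {name}")
        return child

    def cancel(self) -> None:
        """Cancellation propagates recursively through every child."""
        self.cancelled = True
        self.stem.append("cancelled")
        for child in self.children:
            child.cancel()

root = Flow("root")
worker = root.spawn("worker")
sub = worker.spawn("sub")
root.cancel()
print(root.cancelled, worker.cancelled, sub.cancelled)  # True True True
```

Cancelling the root flow stops the whole tree, which is what makes `yagents stop` able to tear down an orchestrator and all of its delegated sub-agents in one call.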
An Agent is a specialised Flow that drives the LLM turn loop:
```
Agent (a Flow)
├── AgentConfig   llm + tools + system prompt (frozen, immutable)
├── messages      conversation history
└── steps()       AsyncGenerator[StepResult] (call this to run)
```
At each turn, steps():
- Calls the LLM (streaming)
- Executes any tool calls (optionally in Docker, optionally deferred to background)
- Emits a `StepResult` and loops until the model stops
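That loop can be sketched as follows. This is illustrative plain Python, not the library's implementation; the real `steps()` streams tokens and emits `StepResult` objects rather than dicts:

```python
import asyncio

# Hypothetical sketch of the agent turn loop described above.
async def steps(call_llm, run_tool, messages):
    rounds = 0
    while True:
        reply = await call_llm(messages)           # 1. call the LLM
        messages.append(reply)
        rounds += 1
        for call in reply.get("tool_calls", []):   # 2. execute any tool calls
            output = await run_tool(call)
            messages.append({"role": "tool", "content": output})
        yield {"round": rounds}                    # 3. emit a step result
        if not reply.get("tool_calls"):            # loop until the model stops
            break

async def demo():
    async def fake_llm(msgs):
        # First turn requests a tool, second turn stops.
        return {"tool_calls": [{"name": "now"}]} if not msgs else {}
    async def fake_tool(call):
        return "2024-01-01T00:00:00Z"
    return [step async for step in steps(fake_llm, fake_tool, [])]

print(asyncio.run(demo()))  # [{'round': 1}, {'round': 2}]
```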
Sub-agents spawned via the delegate tool become child Flows of the parent — inheriting cancellation and sharing the observable event tree.
Snapshots
A Flow can be frozen into an AgentState at any point:
```python
state = await session.snapshot()  # messages + usage + rounds
# ... persist to disk, restart daemon, restore ...
session.resume(history=state.messages, conversation_id=state.conversation_id)
```
Snapshot-based recovery is configured under `snapshot:` in `config.yaml`.
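Between the snapshot and resume calls, the state has to survive a restart. The daemon persists snapshots to SQLite; as a minimal illustration of the same round trip, here is the idea with plain JSON (field names are assumptions, not the library's schema):

```python
import json
import tempfile
from pathlib import Path

def save_state(path: Path, messages: list, conversation_id: str) -> None:
    """Persist the snapshot fields to disk as JSON."""
    path.write_text(json.dumps({"messages": messages, "conversation_id": conversation_id}))

def load_state(path: Path) -> dict:
    """Restore the snapshot fields from disk."""
    return json.loads(path.read_text())

with tempfile.TemporaryDirectory() as tmp:
    p = Path(tmp) / "state.json"
    save_state(p, [{"role": "user", "content": "hi"}], "conv-1")
    restored = load_state(p)

print(restored["conversation_id"])  # conv-1
```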
Built-in Tools
| Tool | Requires | Description |
|---|---|---|
| `sleep` | — | Pause execution |
| `view_image` | — | Decode and display an image |
| `execute_bash` | `docker` extra + Docker | Run shell commands in a container |
| `read_file` | `docker` extra + Docker | Read a file from the container workspace |
| `edit_file` | `docker` extra + Docker | Patch a file in the container workspace |
| `delete_file` | `docker` extra + Docker | Delete a file from the container workspace |
| `web_search` | `web` extra + Tavily API key | Search the web |
| `delegate` | Daemon + delegate capability | Spawn a sub-agent |
| `inspect_background` | Daemon | Inspect a deferred background task |
| `cancel_background` | Daemon | Cancel a background task |
| `input_background` | Daemon | Send input to a background task |
| `defer_background` | Daemon | Move a tool call to background |
| `wait_background` | Daemon | Block until a background task finishes |
Configuration
State lives under ~/.yagents/ by default:
```
~/.yagents/
├── config.yaml      active config
├── tasks.sqlite3    task log and snapshots
├── traces.db        LLM traces (yuutrace)
├── yagents.sock     Unix socket
└── dockers/         per-container working directories
```
Key sections in config.yaml:
```yaml
snapshot:
  enabled: false           # write AgentState snapshots after each turn
  restore_on_start: false  # auto-resume incomplete tasks on daemon startup

daemon:
  socket: ~/.yagents/yagents.sock
  log_level: info

docker:
  image: yuuagents-runtime:latest

providers:
  openai-default:
    api_type: openai-chat-completion
    api_key_env: OPENAI_API_KEY
    default_model: gpt-4o

agents:
  main:
    description: Default general-purpose agent.
    provider: openai-default
    model: gpt-4o
    persona: "You are a careful, concise assistant."
    tools:
      - sleep
      - view_image
```
Copy `config.example.yaml` and `config.overrides.example.yaml` from the source repo for the full annotated reference.
Config resolution order (highest wins):

- Bundled package default template
- `config.overrides.yaml` in the current working directory
- `config.overrides.yaml` from `--project-dir`
- `--overrides FILE` flag
- `--config FILE` flag (replaces the default template entirely)
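Layered resolution like this is usually implemented as a recursive dict merge where later layers win. A minimal sketch of that idea (not the library's actual code):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Merge two config layers: keys in `override` win; nested sections recurse."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into sections
        else:
            merged[key] = value                           # scalar: override wins
    return merged

default = {"daemon": {"log_level": "info"}, "snapshot": {"enabled": False}}
overrides = {"daemon": {"log_level": "debug"}}
config = deep_merge(default, overrides)
print(config)  # {'daemon': {'log_level': 'debug'}, 'snapshot': {'enabled': False}}
```

Note that only `--config FILE` bypasses the merge entirely, replacing the base template instead of layering on top of it.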
Architecture Overview
```
SDK path ──────▶ Session / run_once()
                 LocalAgentPool (multi-agent)
                           │
             ┌─────────────▼───────────────────┐
             │          core/flow.py           │
             │        Flow ◀──── Agent         │
             │   (observable · addressable ·   │
             │  interruptible execution unit)  │
             └─────────────┬───────────────────┘
                           │
Service  ┌────────────────────┐   │   ┌──────────────────────┐
path ───▶│  CLI (yagents)     │───┼───│  Daemon (Starlette)  │
         │  click commands    │   │   │  AgentManager        │
         │  HTTP/Unix socket  │   │   │  DockerManager       │
         └────────────────────┘   └───┴──────────────────────┘
```
Package dependencies: yuuagents → {yuullm, yuutools, yuutrace}
Development
```shell
uv sync
uv run pytest
uv run ruff check src/ tests/
uv run mypy src/
uv build
```
Tests marked `@pytest.mark.live` require real external services and are skipped by default.
File details
Details for the file yuuagents-0.1.2.tar.gz.
File metadata
- Download URL: yuuagents-0.1.2.tar.gz
- Upload date:
- Size: 65.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
a7b7b4bbcbde984968cb56e6a1256921b466e871fe253128d21b30df9d696ffe
|
|
| MD5 |
0982f4e37e05bfe084ffafcdddd12bfd
|
|
| BLAKE2b-256 |
7ce7c1aee7abaebe9964bbd789e07ea27aaee890cc44824890c00e893508aeb0
|
Provenance

The following attestation bundles were made for yuuagents-0.1.2.tar.gz:

Publisher: publish-pypi.yml on yuulabs/yuuagents

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: yuuagents-0.1.2.tar.gz
- Subject digest: a7b7b4bbcbde984968cb56e6a1256921b466e871fe253128d21b30df9d696ffe
- Sigstore transparency entry: 1203313080
- Sigstore integration time:
- Permalink: yuulabs/yuuagents@6c44c9e842c896976c8cfc2c1cd722b9f893712e
- Branch / Tag: refs/tags/v0.1.2
- Owner: https://github.com/yuulabs
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@6c44c9e842c896976c8cfc2c1cd722b9f893712e
- Trigger Event: push
File details
Details for the file yuuagents-0.1.2-py3-none-any.whl.
File metadata
- Download URL: yuuagents-0.1.2-py3-none-any.whl
- Upload date:
- Size: 79.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `ad69716c736165d8feb76512be8868e70c37bf86bc70119df0a5e7ec15a20153` |
| MD5 | `d3f1c51dc0d4959bfbbad4e1245fa2f9` |
| BLAKE2b-256 | `66a94644850583c6038ac377d72b7459769c7e5c95df4d22b0692fde22f26a06` |
Provenance

The following attestation bundles were made for yuuagents-0.1.2-py3-none-any.whl:

Publisher: publish-pypi.yml on yuulabs/yuuagents

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: yuuagents-0.1.2-py3-none-any.whl
- Subject digest: ad69716c736165d8feb76512be8868e70c37bf86bc70119df0a5e7ec15a20153
- Sigstore transparency entry: 1203313083
- Sigstore integration time:
- Permalink: yuulabs/yuuagents@6c44c9e842c896976c8cfc2c1cd722b9f893712e
- Branch / Tag: refs/tags/v0.1.2
- Owner: https://github.com/yuulabs
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@6c44c9e842c896976c8cfc2c1cd722b9f893712e
- Trigger Event: push