# AlbusOS

A framework for building multi-agent systems with pathway-based execution: agentic workflows composed as state graphs in Python.

## Quick Start

Requires Python 3.13+.

```bash
pip install albusos
export OPENROUTER_API_KEY="..."  # or OPENAI_API_KEY, or run Ollama locally
```
### Simple agent (LLM + tools loop)

```python
import asyncio

from albusos import agent, run

researcher = agent(
    "researcher",
    instructions="Research topics and provide concise summaries.",
    tools=["web.*", "memory.*"],
)

async def main():
    result = await run(researcher, "What is quantum computing?")
    print(result.response)

asyncio.run(main())
```

`agent()` auto-loads tools and LLM providers; `run()` wires the engine internally. For most single-agent use cases, this is all you need.
### Multi-turn conversations

```python
import asyncio

from albusos import agent, Session

researcher = agent("researcher", instructions="Research topics.", tools=["web.*"])

async def main():
    session = Session(researcher)
    r1 = await session.run("What is quantum computing?")
    r2 = await session.run("Tell me more about qubits specifically")
    print(r2.response)  # Answer uses the full conversation context

asyncio.run(main())
```
### Custom pathways (where the real power is)

When you need explicit multi-step workflows -- branching, chaining tools, routing between agents -- compose them as executable graphs with `PathwayBuilder`:

```python
import asyncio

from albusos import PathwayBuilder, AgentBuilder, run

# A triage workflow: lookup → classify → branch → act
triage = (
    PathwayBuilder("triage", pathway_id="triage")
    .tool("lookup", "servicem8.search_customer", args={"query": "{{input.goal}}"})
    .llm("classify", "Classify urgency based on: {{lookup.output}}", model="fast")
    .conditional("check", "{{classify.output.urgency}} == 'high'", "escalate", "standard")
    .llm("escalate", "Create urgent job: {{input.goal}}", tools=["servicem8.*"])
    .llm("standard", "Create standard job: {{input.goal}}", tools=["servicem8.*"])
    .connect("input", "lookup")
    .connect("lookup", "classify")
    .connect("classify", "check")
    .connect("check", "escalate")
    .connect("check", "standard")
    .connect("escalate", "output")
    .connect("standard", "output")
    .build()
)

agent_def = AgentBuilder().id("dispatch").pathway("triage").tool("servicem8.*").build()

async def main():
    result = await run(agent_def, "Toilet overflow at 42 Smith St", pathway=triage)
    print(result.response)

asyncio.run(main())
```

The pathway gets parallel execution, timeouts, execution budgets, observability, and the ability to nest inside other pathways -- for free. You declare the workflow; the VM handles the execution.
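Execution budgets cap how much work a pathway may consume before the VM stops it. The real API lives in `ExecutionBudget` (exported from `albusos`); as intuition only, here is a framework-agnostic sketch of the idea, with all names below illustrative rather than AlbusOS's:

```python
import time

class Budget:
    """Illustrative cap on steps and wall-clock time (not the AlbusOS API)."""

    def __init__(self, max_steps: int, max_seconds: float):
        self.max_steps = max_steps
        self.deadline = time.monotonic() + max_seconds
        self.steps = 0

    def charge(self) -> None:
        """Consume one step; raise if the budget is exhausted."""
        self.steps += 1
        if self.steps > self.max_steps:
            raise RuntimeError("step budget exhausted")
        if time.monotonic() > self.deadline:
            raise RuntimeError("time budget exhausted")

def run_nodes(nodes, budget: Budget) -> list:
    """Execute nodes in order, drawing from the budget on every step."""
    results = []
    for node in nodes:
        budget.charge()
        results.append(node())
    return results

out = run_nodes([lambda: 1, lambda: 2], Budget(max_steps=5, max_seconds=10.0))
print(out)  # → [1, 2]
```

The point is that the budget is enforced by the executor, not by each node, so every node type gets it uniformly.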
### Loading custom tools

```python
from albusos import load_tools, load_skill

# Load a directory of tool scripts (each .py with an async def run())
load_tools("skills/servicem8/tools", namespace="servicem8")

# Or load a full skill (SKILL.md + tools/ + auto-registration)
load_skill("skills/servicem8")
```
## What is AlbusOS?

AlbusOS gives you three things:

- **Simple agents** -- `agent()` + `run()` for LLM-with-tools. The on-ramp.
- **Composable workflows** -- `PathwayBuilder` for multi-step agentic state graphs. The main event.
- **Multi-agent orchestration** -- `agent.turn` and `agent.list` for routing between specialized agents.

```text
albusos (the framework)                  Your repo (the product)
├── core/           Pathway VM, nodes    ├── skills/    SKILL.md + tools/
├── stdlib/         LLM routing, tools   ├── agents.py  Agent definitions
└── infrastructure/ Sandbox, tools       └── app.py     Your transport (FastAPI, etc.)
```

AlbusOS handles the execution engine, LLM routing, tool registry, built-in tools, observability, state management, and pathway composition. Your repo handles domain tools, agent configs, workflows, and transport.
## Writing Tools

Each tool is a single Python file with an `async def run()` function:

```python
"""Search for ServiceM8 jobs by status."""

from albusos import ToolOutput

# Assumes a servicem8_api client module available to this skill.
async def run(status: str = "open", limit: int = 20) -> ToolOutput:
    """
    Args:
        status: Job status filter (open, completed, all)
        limit: Maximum results to return
    """
    jobs = await servicem8_api.list_jobs(status=status, limit=limit)
    return ToolOutput(success=True, data={"jobs": jobs})
```

Place tools inside a skill directory:

```text
skills/
└── servicem8/
    ├── SKILL.md              # Instructions for the agent
    └── tools/
        ├── list_jobs.py      # → servicem8.list_jobs
        ├── create_job.py     # → servicem8.create_job
        └── update_status.py  # → servicem8.update_status
```

Tools are auto-discovered and named `{skill}.{file}`. No decorators, no registration, no class hierarchies.
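The `{skill}.{file}` convention can be sketched in a few lines of plain Python; this illustrates the naming rule, not AlbusOS's actual loader:

```python
from pathlib import PurePosixPath

def tool_name(skill_dir: str, tool_file: str) -> str:
    """Derive a registry name from a skill directory and a tool script path."""
    skill = PurePosixPath(skill_dir).name   # e.g. "servicem8"
    stem = PurePosixPath(tool_file).stem    # e.g. "list_jobs"
    return f"{skill}.{stem}"

print(tool_name("skills/servicem8", "skills/servicem8/tools/list_jobs.py"))
# → servicem8.list_jobs
```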
## Pathways

Pathways are composable state graphs. `agent()` uses the built-in tool-calling loop by default; `PathwayBuilder` lets you compose custom workflows when you need explicit control.
### Node types

| Type | Builder method | What it does |
|---|---|---|
| `input` | `.input()` | Declare pathway inputs with schema |
| `output` | `.output()` | Map pathway outputs from upstream nodes |
| `llm` | `.llm()` | LLM call with optional tool-calling loop |
| `tool` | `.tool()` | Call any registered tool |
| `conditional` | `.conditional()` | Branch on a condition (if/else routing) |
| `transform` | `.transform()` | Evaluate a safe expression |
| `pathway` | `.sub_pathway()` | Nest a sub-pathway (composition) |
| `code_execute` | `.code_execute()` | Run sandboxed Python code |
| `loop` | `.loop_node()` | Iterate body nodes until a condition is met |
| `stage` | `.stage()` | Stateful workflow stage with transitions |
| `checkpoint` | `.checkpoint()` | Pause for human approval / persistence |
### Execution modes

| Mode | Behavior | Use when |
|---|---|---|
| `dag` (default) | Parallel, no cycles | Pipelines, fan-out/fan-in |
| `stateful` | Sequential, cycles OK | Conversations, human-in-the-loop |
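In `dag` mode, every node whose dependencies are all satisfied can run concurrently. A minimal pure-Python sketch of that scheduling idea (illustrative only, not the AlbusOS scheduler):

```python
def parallel_levels(deps: dict[str, set[str]]) -> list[set[str]]:
    """Group nodes into levels; every node within a level can run concurrently."""
    remaining = {node: set(d) for node, d in deps.items()}
    levels = []
    while remaining:
        ready = {node for node, d in remaining.items() if not d}
        if not ready:
            raise ValueError("cycle detected: use stateful mode instead")
        levels.append(ready)
        for node in ready:
            del remaining[node]
        for d in remaining.values():
            d -= ready  # mark completed dependencies as satisfied
    return levels

# Fan-out/fan-in: lookup feeds two branches that join at output.
graph = {
    "lookup": set(),
    "escalate": {"lookup"},
    "standard": {"lookup"},
    "output": {"escalate", "standard"},
}
print(parallel_levels(graph))
# lookup runs first, then both branches in parallel, then output
```

A graph with a cycle has no ready set at some point, which is why `dag` mode forbids cycles and `stateful` mode exists.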
### Template expressions

Reference upstream node outputs anywhere with `{{node_id.output}}` or `{{node_id.output.field}}`:

```python
.llm("summarize", "Summarize: {{search.output.results}}")
.tool("fetch", "web.fetch", args={"url": "{{input.url}}"})
.conditional("check", "{{classify.output.urgent}} == true", "fast_path", "slow_path")
```
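Resolving these templates amounts to a dotted-path lookup into upstream node results. A toy resolver, for intuition only (not AlbusOS internals):

```python
import re

def resolve(template: str, state: dict) -> str:
    """Replace each {{a.b.c}} with the value found by walking nested dict keys."""
    def lookup(match: re.Match) -> str:
        value = state
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\{\{([\w.]+)\}\}", lookup, template)

state = {"search": {"output": {"results": "3 articles"}}}
print(resolve("Summarize: {{search.output.results}}", state))
# → Summarize: 3 articles
```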
### Composition

Pathways can nest inside other pathways, enabling modular workflow design:

```python
research = PathwayBuilder("research", pathway_id="research").llm("r", "...").build()
summarize = PathwayBuilder("summarize", pathway_id="summarize").llm("s", "...").build()

pipeline = (
    PathwayBuilder("full", pathway_id="full")
    .sub_pathway("step1", research)
    .sub_pathway("step2", summarize)
    .connect("input", "step1")
    .connect("step1", "step2")
    .connect("step2", "output")
    .build()
)
```
## Architecture

```text
src/
├── albusos/             Public API (start here)
│   ├── agent()          One-call agent factory
│   ├── run()            Zero-wiring execution
│   ├── Session          Multi-turn conversations
│   ├── load_tools()     Load custom tool scripts
│   ├── load_skill()     Load a full skill directory
│   └── load_workspace() Convention-based project discovery
├── core/                Engine (framework internals)
│   ├── runner.py        Session, default pathway, wiring
│   ├── agent.py         Agent runtime + AgentRepository
│   ├── config.py        Pydantic Settings (env vars, .env)
│   ├── builders/        PathwayBuilder, AgentBuilder, SkillBuilder
│   ├── pathways/        VM, nodes, DAG/stateful schedulers
│   ├── llm/             Provider protocol + capability routing + retry
│   ├── types/           Pydantic models (AgentDefinition, etc.)
│   └── protocols/       Interfaces (PathwayVMLike, StateStoreLike)
├── stdlib/              Built-in capabilities
│   ├── primitives/      Tools (web, memory, workspace, shell, code)
│   └── bootstrap.py     load_stdlib(), auto-loads tools + providers
└── infrastructure/      Sandbox, tool loader
```
### Layering rules

- `core/` has zero imports from `stdlib/` or `albusos/`
- `stdlib/` imports from `core/` only
- `infrastructure/` imports from `core/` only
- `albusos/` imports from `core/` and `stdlib/`
### Key imports

```python
# Simple agents
from albusos import agent, run, Session

# Custom pathways
from albusos import PathwayBuilder, AgentBuilder, ToolOutput

# Load custom tools / skills
from albusos import load_tools, load_skill, load_workspace

# Types
from albusos import AgentDefinition, Pathway, PathwayMode, ExecutionBudget, ExecutionResult

# Advanced (direct LLM access)
from core.llm import generate, get_provider
from core.llm.providers import ModelCapability, set_runtime_model_config
```
## Built-in Tools

Loaded automatically by `agent()` and `run()`:

| Tool | What it does |
|---|---|
| `web.search` | DuckDuckGo search |
| `web.fetch` | Fetch a URL (with HTTP error handling) |
| `memory.get` / `memory.set` / `memory.search` | Per-agent key-value memory |
| `memory.shared_get` / `memory.shared_set` | Cross-agent shared memory (atomic writes) |
| `workspace.read_file` / `workspace.write_file` / `workspace.list_files` | File I/O |
| `shell.execute` | Run shell commands |
| `code.execute` | Sandboxed Python execution |
| `code.run_test` | Run pytest tests |
| `agent.turn` / `agent.list` | Multi-agent orchestration |
## Model Routing

Capability-based model selection -- swap models without changing agent code:

| Capability | Use for | Default |
|---|---|---|
| `fast` | Quick tasks, routing | `openai/gpt-4o-mini` |
| `reasoning` | Complex thinking | `openai/gpt-4o` |
| `code` | Code generation | `anthropic/claude-3.5-sonnet` |
| `vision` | Image understanding | `openai/gpt-4o` |
| `local` | Offline/free | `llama3.1:8b` (Ollama) |

```python
# Capability name (recommended): portable across providers
agent("a", model="reasoning")

# Explicit model (when you need a specific one)
agent("a", model="openai/gpt-4o")
```
Override at runtime via environment or code:

```bash
export ALBUS_MODEL_FAST="anthropic/claude-haiku"
export ALBUS_MODEL_REASONING="anthropic/claude-sonnet-4"
```

```python
from core.llm.providers import set_runtime_model_config

set_runtime_model_config({"reasoning": "anthropic/claude-sonnet-4"})
```
## Configuration

AlbusOS uses Pydantic Settings for centralized config. All env vars are read from the environment and `.env` automatically.

| Variable | Purpose | Default |
|---|---|---|
| `OPENROUTER_API_KEY` | OpenRouter API key (200+ models) | — |
| `OPENAI_API_KEY` | Direct OpenAI access (bypasses OpenRouter) | — |
| `OLLAMA_HOST` | Ollama server URL | `http://localhost:11434` |
| `ALBUS_MODEL_FAST` | Override fast model | `openai/gpt-4o-mini` |
| `ALBUS_MODEL_REASONING` | Override reasoning model | `openai/gpt-4o` |
| `ALBUS_MODEL_CODE` | Override code model | `anthropic/claude-3.5-sonnet` |
| `ALBUS_LLM_MAX_RETRIES` | LLM retry count (0-10) | `3` |
| `ALBUS_LLM_RETRY_BASE_DELAY` | Retry base delay (seconds) | `1.0` |

See `env.example` for a complete template.
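Putting it together, a minimal `.env` for OpenRouter with one model override might look like this (the key value is a placeholder):

```shell
# .env (read automatically by Pydantic Settings)
OPENROUTER_API_KEY="sk-or-..."
ALBUS_MODEL_REASONING="anthropic/claude-sonnet-4"
ALBUS_LLM_MAX_RETRIES=3
```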
## License

MIT