
AlbusOS - Framework for building multi-agent systems with pathway-based execution


AlbusOS

Python framework for building agentic workflows as composable state graphs.

pip install albusos

Optional extras

| Extra | What it adds |
|-------|--------------|
| pip install albusos[cli] | albus CLI command (typer + rich) |
| pip install albusos[server] | FastAPI HTTP server + uvicorn |
| pip install albusos[web] | Web tools (DuckDuckGo search) |
| pip install albusos[ollama] | Local LLM via Ollama |
| pip install albusos[mcp] | MCP client adapter |
| pip install albusos[otel] | OpenTelemetry tracing |
| pip install albusos[metrics] | Prometheus metrics |

Quick Start

Requires Python 3.13+

pip install albusos
export OPENROUTER_API_KEY="..."   # or OPENAI_API_KEY, or run Ollama locally

Simple agent (LLM + tools loop)

import asyncio
from albusos import agent, run

researcher = agent(
    "researcher",
    instructions="Research topics and provide concise summaries.",
    tools=["web.*", "memory.*"],
)

async def main():
    result = await run(researcher, "What is quantum computing?")
    print(result.response)

asyncio.run(main())

agent() auto-loads tools and LLM providers. run() wires the engine internally. For most single-agent use cases, this is all you need.

Multi-turn conversations

import asyncio
from albusos import agent, Session

researcher = agent("researcher", instructions="Research topics.", tools=["web.*"])

async def main():
    session = Session(researcher)
    r1 = await session.run("What is quantum computing?")
    r2 = await session.run("Tell me more about qubits specifically")
    print(r2.response)  # Full conversation context

asyncio.run(main())

Custom pathways (where the real power is)

When you need explicit multi-step workflows -- branching, chaining tools, routing between agents -- you compose them as executable graphs using PathwayBuilder:

import asyncio
from albusos import PathwayBuilder, AgentBuilder, run

# A triage workflow: lookup → classify → branch → act
triage = (
    PathwayBuilder("triage", pathway_id="triage")
    .tool("lookup", "servicem8.search_customer", args={"query": "{{input.goal}}"})
    .llm("classify", "Classify urgency based on: {{lookup.output}}", model="fast")
    .conditional("check", "{{classify.output.urgency}} == 'high'", "escalate", "standard")
    .llm("escalate", "Create urgent job: {{input.goal}}", tools=["servicem8.*"])
    .llm("standard", "Create standard job: {{input.goal}}", tools=["servicem8.*"])
    .connect("input", "lookup")
    .connect("lookup", "classify")
    .connect("classify", "check")
    .connect("check", "escalate")
    .connect("check", "standard")
    .connect("escalate", "output")
    .connect("standard", "output")
    .build()
)

agent_def = AgentBuilder().id("dispatch").pathway("triage").tool("servicem8.*").build()

async def main():
    result = await run(agent_def, "Toilet overflow at 42 Smith St", pathway=triage)
    print(result.response)

asyncio.run(main())

The pathway gets: parallel execution, timeouts, execution budgets, observability, and the ability to nest inside other pathways -- for free. You declare the workflow; the VM handles the execution.
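An execution budget is, at its core, a decrementing step counter that the VM charges on every node execution. This toy sketch illustrates the idea only; AlbusOS's actual ExecutionBudget type is not shown here and likely tracks more dimensions (time, tokens):

```python
class StepBudget:
    """Toy illustration: cap the number of node executions in one pathway run."""

    def __init__(self, max_steps: int):
        self.remaining = max_steps

    def charge(self) -> None:
        """Consume one step; raise once the budget is exhausted."""
        if self.remaining <= 0:
            raise RuntimeError("execution budget exhausted")
        self.remaining -= 1
```

A scheduler would call charge() before running each node, turning runaway loops into a clean failure instead of an unbounded run.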

Loading custom tools

from albusos import load_tools, load_skill

# Load a directory of tool scripts (each .py with async def run())
load_tools("skills/servicem8/tools", namespace="servicem8")

# Or load a full skill (SKILL.md + tools/ + auto-registration)
load_skill("skills/servicem8")

CLI

Install the CLI extra for terminal workflows:

pip install albusos[cli]
# Scaffold a new project
albus init my-project

# Explore your workspace
albus info
albus list agents
albus list tools "web.*"

# Run an agent interactively
albus run researcher

# Single-shot with JSON output
albus run researcher "What is quantum computing?" --json

# Start the HTTP server
albus serve --port 8080

# Validate all pathways (CI-friendly — exits 1 on errors)
albus validate

# Visualize a pathway as Mermaid (pipe to mmdc for SVG)
albus viz triage

# Inspect execution traces
albus trace list
albus trace show <execution_id>
albus trace export <execution_id> --format html

Every command supports --help for full option documentation.


What is AlbusOS?

AlbusOS gives you three things:

  1. Simple agents -- agent() + run() for LLM-with-tools. The on-ramp.
  2. Composable workflows -- PathwayBuilder for multi-step agentic state graphs. The main event.
  3. Multi-agent orchestration -- agent.turn and agent.list for routing between specialized agents.
albusos (the framework)                 Your repo (the product)
├── core/           Pathway VM, nodes   ├── skills/       SKILL.md + tools/
├── stdlib/         LLM routing, tools  ├── agents.py     Agent definitions
└── infrastructure/ Sandbox, tools      └── app.py        Your transport (FastAPI, etc.)

AlbusOS handles: execution engine, LLM routing, tool registry, built-in tools, observability, state management, and pathway composition.

Your repo handles: domain tools, agent configs, workflows, and transport.


Writing Tools

Each tool is a single Python file with an async def run() function:

"""Search for ServiceM8 jobs by status."""

from albusos import ToolOutput


async def run(status: str = "open", limit: int = 20) -> ToolOutput:
    """
    Args:
        status: Job status filter (open, completed, all)
        limit: Maximum results to return
    """
    # servicem8_api is assumed to be this skill's API client, defined elsewhere in the skill
    jobs = await servicem8_api.list_jobs(status=status, limit=limit)
    return ToolOutput(success=True, data={"jobs": jobs})

Place tools inside a skill directory:

skills/
└── servicem8/
    ├── SKILL.md              # Instructions for the agent
    └── tools/
        ├── list_jobs.py      # → servicem8.list_jobs
        ├── create_job.py     # → servicem8.create_job
        └── update_status.py  # → servicem8.update_status

Tools are auto-discovered and named {skill}.{file}. No decorators, no registration, no class hierarchies.
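The {skill}.{file} naming rule can be sketched in plain Python. This is an illustration of the convention only, not AlbusOS's actual loader:

```python
from pathlib import Path

def discover_tool_names(skill_dir: str) -> list[str]:
    """Derive {skill}.{file} tool names from a skill's tools/ directory."""
    skill = Path(skill_dir)
    return sorted(
        f"{skill.name}.{path.stem}"          # e.g. servicem8.list_jobs
        for path in (skill / "tools").glob("*.py")
    )
```

Applied to the servicem8 layout above, this yields servicem8.create_job, servicem8.list_jobs, and servicem8.update_status.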


Pathways

Pathways are composable state graphs. agent() uses the built-in tool-calling loop by default. PathwayBuilder lets you compose custom workflows when you need explicit control.

Node types

| Type | Builder method | What it does |
|------|----------------|--------------|
| input | .input() | Declare pathway inputs with schema |
| output | .output() | Map pathway outputs from upstream nodes |
| llm | .llm() | LLM call with optional tool-calling loop |
| tool | .tool() | Call any registered tool |
| conditional | .conditional() | Branch on a condition (if/else routing) |
| transform | .transform() | Evaluate a safe expression |
| pathway | .sub_pathway() | Nest a sub-pathway (composition) |
| code_execute | .code_execute() | Run sandboxed Python code |
| loop | .loop_node() | Iterate body nodes until condition met |
| stage | .stage() | Stateful workflow stage with transitions |
| checkpoint | .checkpoint() | Pause for human approval / persistence |

Execution modes

| Mode | Behavior | Use when |
|------|----------|----------|
| dag (default) | Parallel, no cycles | Pipelines, fan-out/fan-in |
| stateful | Sequential, cycles OK | Conversations, human-in-the-loop |
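To make dag-mode parallelism concrete, here is a minimal sketch (not the actual scheduler) of how a dependency graph groups into waves of nodes that can run concurrently, using the stdlib's graphlib:

```python
from graphlib import TopologicalSorter

def execution_waves(predecessors: dict) -> list[list[str]]:
    """Group a DAG into waves: nodes in the same wave share no dependencies,
    so a dag-mode scheduler can run each wave in parallel."""
    sorter = TopologicalSorter(predecessors)
    sorter.prepare()
    waves = []
    while sorter.is_active():
        ready = sorted(sorter.get_ready())   # everything whose deps are done
        waves.append(ready)
        sorter.done(*ready)
    return waves
```

For the triage example above, escalate and standard land in the same wave: once check finishes, both branches are eligible at the same time.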

Template expressions

Reference upstream node outputs anywhere with {{node_id.output}} or {{node_id.output.field}}:

.llm("summarize", "Summarize: {{search.output.results}}")
.tool("fetch", "web.fetch", args={"url": "{{input.url}}"})
.conditional("check", "{{classify.output.urgent}} == true", "fast_path", "slow_path")
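Conceptually, resolution walks the dotted path through each upstream node's output. A minimal sketch of the substitution (the real engine's rules for type coercion and missing keys may differ):

```python
import re

def resolve_templates(text: str, state: dict) -> str:
    """Substitute {{node_id.output.field}} references from a nested state dict."""
    def lookup(match: re.Match) -> str:
        value = state
        for key in match.group(1).split("."):   # walk e.g. search -> output -> results
            value = value[key]
        return str(value)
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, text)
```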

Composition

Pathways can nest inside other pathways, enabling modular workflow design:

research = PathwayBuilder("research", pathway_id="research").llm("r", "...").build()
summarize = PathwayBuilder("summarize", pathway_id="summarize").llm("s", "...").build()

pipeline = (
    PathwayBuilder("full", pathway_id="full")
    .sub_pathway("step1", research)
    .sub_pathway("step2", summarize)
    .connect("input", "step1")
    .connect("step1", "step2")
    .connect("step2", "output")
    .build()
)

Architecture

src/
├── albusos/           Public API (start here)
│   ├── agent()            One-call agent factory
│   ├── run()              Zero-wiring execution
│   ├── Session            Multi-turn conversations
│   ├── load_tools()       Load custom tool scripts
│   ├── load_skill()       Load a full skill directory
│   ├── load_workspace()   Convention-based project discovery
│   └── cli/               CLI commands (albus)
├── core/              Engine (framework internals)
│   ├── runner.py          Session, default pathway, wiring
│   ├── agent.py           Agent runtime + AgentRepository
│   ├── config.py          Pydantic Settings (env vars, .env)
│   ├── builders/          PathwayBuilder, AgentBuilder, SkillBuilder
│   ├── pathways/          VM, nodes, DAG/stateful schedulers
│   ├── llm/               Provider protocol + capability routing + retry
│   ├── types/             Pydantic models (AgentDefinition, etc.)
│   └── protocols/         Interfaces (PathwayVMLike, StateStoreLike)
├── stdlib/            Built-in capabilities
│   ├── primitives/        Tools (web, memory, workspace, shell, code)
│   └── bootstrap.py       load_stdlib() — auto-loads tools + providers
└── infrastructure/    Sandbox, tool loader

Layering rules

  • core/ has zero imports from stdlib/ or albusos/
  • stdlib/ imports from core/ only
  • infrastructure/ imports from core/ only
  • albusos/ imports from core/ and stdlib/
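These four rules can be enforced mechanically. A hypothetical checker (not part of AlbusOS) that scans a module's source for forbidden top-level imports:

```python
import re

# Derived from the layering rules above: which top-level packages each layer may NOT import
FORBIDDEN = {
    "core": {"stdlib", "albusos"},
    "stdlib": {"albusos", "infrastructure"},
    "infrastructure": {"albusos", "stdlib"},
    "albusos": {"infrastructure"},
}

def layer_violations(layer: str, source: str) -> list[str]:
    """Return forbidden top-level imports found in a module's source text."""
    imports = re.findall(r"^\s*(?:from|import)\s+([A-Za-z_]\w*)", source, re.M)
    return [name for name in imports if name in FORBIDDEN.get(layer, set())]
```

Running something like this in CI keeps the dependency direction one-way as the codebase grows.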

Key imports

# Simple agents
from albusos import agent, run, Session

# Custom pathways
from albusos import PathwayBuilder, AgentBuilder, ToolOutput

# Load custom tools / skills
from albusos import load_tools, load_skill, load_workspace

# Types
from albusos import AgentDefinition, Pathway, PathwayMode, ExecutionBudget, ExecutionResult

# Advanced (direct LLM access)
from core.llm import generate, get_provider
from core.llm.providers import ModelCapability, set_runtime_model_config

Built-in Tools

Loaded automatically by agent() and run():

| Tool | What it does |
|------|--------------|
| web.search | DuckDuckGo search |
| web.fetch | Fetch a URL (with HTTP error handling) |
| memory.get / memory.set / memory.search | Per-agent key-value memory |
| memory.shared_get / memory.shared_set | Cross-agent shared memory (atomic writes) |
| workspace.read_file / workspace.write_file / workspace.list_files | File I/O |
| shell.execute | Run shell commands |
| code.execute | Sandboxed Python execution |
| code.run_test | Run pytest tests |
| agent.turn / agent.list | Multi-agent orchestration |
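The "atomic writes" guarantee for shared memory typically comes from the write-then-rename pattern: readers see either the old state or the new state, never a half-written file. A generic sketch of that pattern (not AlbusOS's implementation):

```python
import json
import os
import tempfile

def atomic_write_json(path: str, data: dict) -> None:
    """Write JSON atomically via a temp file and rename."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as handle:
            json.dump(data, handle)
        os.replace(tmp_path, path)   # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)
        raise
```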

Model Routing

Capability-based model selection -- swap models without changing agent code:

| Capability | Use for | Default |
|------------|---------|---------|
| fast | Quick tasks, routing | openai/gpt-4o-mini |
| reasoning | Complex thinking | openai/gpt-4o |
| code | Code generation | anthropic/claude-3.5-sonnet |
| vision | Image understanding | openai/gpt-4o |
| local | Offline/free | llama3.1:8b (Ollama) |

# Capability name (recommended) — portable across providers
agent("a", model="reasoning")

# Explicit model (when you need a specific one)
agent("a", model="openai/gpt-4o")

Override at runtime via environment or code:

# Environment variables
export ALBUS_MODEL_FAST="anthropic/claude-haiku"
export ALBUS_MODEL_REASONING="anthropic/claude-sonnet-4"

# Runtime code
from core.llm.providers import set_runtime_model_config
set_runtime_model_config({"reasoning": "anthropic/claude-sonnet-4"})
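The precedence implied above (explicit model id, then env override, then built-in default) can be sketched as follows. This is an illustration only; the real router also handles provider availability and fallbacks:

```python
import os

CAPABILITY_DEFAULTS = {
    "fast": "openai/gpt-4o-mini",
    "reasoning": "openai/gpt-4o",
    "code": "anthropic/claude-3.5-sonnet",
    "vision": "openai/gpt-4o",
    "local": "llama3.1:8b",
}

def resolve_model(spec: str) -> str:
    """Resolve a capability name or explicit model id to a concrete model."""
    if "/" in spec or ":" in spec:           # already an explicit model id
        return spec
    env_override = os.environ.get(f"ALBUS_MODEL_{spec.upper()}")
    return env_override or CAPABILITY_DEFAULTS.get(spec, spec)
```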

Configuration

AlbusOS uses Pydantic Settings for centralized config. All env vars are read from the environment and .env automatically.

| Variable | Purpose | Default |
|----------|---------|---------|
| OPENROUTER_API_KEY | OpenRouter API key (200+ models) | (none) |
| OPENAI_API_KEY | Direct OpenAI access (bypasses OpenRouter) | (none) |
| OLLAMA_HOST | Ollama server URL | http://localhost:11434 |
| ALBUS_MODEL_FAST | Override fast model | openai/gpt-4o-mini |
| ALBUS_MODEL_REASONING | Override reasoning model | openai/gpt-4o |
| ALBUS_MODEL_CODE | Override code model | anthropic/claude-3.5-sonnet |
| ALBUS_LLM_MAX_RETRIES | LLM retry count (0-10) | 3 |
| ALBUS_LLM_RETRY_BASE_DELAY | Retry base delay seconds | 1.0 |

See env.example for a complete template.
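The retry settings above imply exponential backoff: the delay doubles on each attempt, starting from the base delay. A minimal sketch of that policy (the framework's actual retry logic is not shown here and may also inspect error types):

```python
import asyncio

async def with_retries(call, max_retries: int = 3, base_delay: float = 1.0):
    """Retry an async call with exponential backoff: base_delay * 2**attempt."""
    for attempt in range(max_retries + 1):
        try:
            return await call()
        except Exception:
            if attempt == max_retries:
                raise                        # out of retries: surface the error
            await asyncio.sleep(base_delay * 2 ** attempt)
```

With the defaults (3 retries, 1.0s base), a persistently failing call waits 1s, 2s, then 4s before giving up.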


License

MIT
