
AlbusOS - Framework for building multi-agent systems with pathway-based execution


AlbusOS

Python framework for building agentic workflows as composable state graphs.

pip install albusos

Quick Start

Requires Python 3.13+

pip install albusos
export OPENROUTER_API_KEY="..."   # or OPENAI_API_KEY

Simple agent (LLM + tools loop)

import asyncio
from albusos import agent, run

researcher = agent(
    "researcher",
    instructions="Research topics and provide concise summaries.",
    tools=["web.*", "memory.*"],
)

async def main():
    result = await run(researcher, "What is quantum computing?")
    print(result.response)

asyncio.run(main())

agent() auto-loads tools and LLM providers. run() wires the engine internally. For most single-agent use cases, this is all you need.

Multi-turn conversations

import asyncio
from albusos import agent, Session

researcher = agent("researcher", instructions="Research topics.", tools=["web.*"])

async def main():
    session = Session(researcher)
    r1 = await session.run("What is quantum computing?")
    r2 = await session.run("Tell me more about qubits specifically")
    print(r2.response)  # Full conversation context

asyncio.run(main())

Custom pathways (where the real power is)

When you need explicit multi-step workflows -- branching, chaining tools, routing between agents -- you compose them as executable graphs using PathwayBuilder:

import asyncio
from albusos import PathwayBuilder, AgentBuilder, run

# A triage workflow: lookup → classify → branch → act
triage = (
    PathwayBuilder("triage", pathway_id="triage")
    .tool("lookup", "servicem8.search_customer", args={"query": "{{input.goal}}"})
    .llm("classify", "Classify urgency based on: {{lookup.output}}", model="fast")
    .conditional("check", "{{classify.output.urgency}} == 'high'", "escalate", "standard")
    .llm("escalate", "Create urgent job: {{input.goal}}", tools=["servicem8.*"])
    .llm("standard", "Create standard job: {{input.goal}}", tools=["servicem8.*"])
    .connect("input", "lookup")
    .connect("lookup", "classify")
    .connect("classify", "check")
    .connect("check", "escalate")
    .connect("check", "standard")
    .connect("escalate", "output")
    .connect("standard", "output")
    .build()
)

agent_def = AgentBuilder().id("dispatch").pathway("triage").tool("servicem8.*").build()

async def main():
    result = await run(agent_def, "Toilet overflow at 42 Smith St", pathway=triage)
    print(result.response)

asyncio.run(main())

The pathway gets: parallel execution, timeouts, execution budgets, observability, and the ability to nest inside other pathways -- for free. You declare the workflow; the VM handles the execution.
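The `{{...}}` placeholders in node arguments resolve against the outputs of earlier nodes at runtime. As a conceptual illustration only (a sketch of dotted-path substitution, not the actual AlbusOS resolver):

```python
import re


def resolve(template: str, context: dict) -> str:
    """Replace {{dotted.path}} placeholders with values from a nested context dict."""
    def lookup(match: re.Match) -> str:
        value = context
        for key in match.group(1).split("."):
            value = value[key]  # walk one level per dotted segment
        return str(value)

    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)


context = {
    "input": {"goal": "Toilet overflow at 42 Smith St"},
    "lookup": {"output": "Customer: J. Smith"},
}
print(resolve("Classify urgency based on: {{lookup.output}}", context))
# → Classify urgency based on: Customer: J. Smith
```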


What is AlbusOS?

AlbusOS gives you three things:

  1. Simple agents -- agent() + run() for LLM-with-tools. The on-ramp.
  2. Composable workflows -- PathwayBuilder for multi-step agentic state graphs. The main event.
  3. Multi-agent orchestration -- Handoffs and delegation between specialized agents.
The split between the framework and your product:

albusos (the framework)                 Your repo (the product)
├── core/           Pathway VM, nodes   ├── skills/       SKILL.md + tools/
├── stdlib/         LLM routing, tools  ├── agents.py     Agent definitions
└── infrastructure/ Sandbox, tools      └── app.py        Your transport (FastAPI, etc.)

AlbusOS handles: Execution engine, LLM routing, tool registry, built-in tools, observability, state management, pathway composition.

Your repo handles: Domain tools, agent configs, workflows, and transport.


Writing Tools

Each tool is a single Python file with an async def run() function:

"""Search for ServiceM8 jobs by status."""

from albusos import ToolOutput


async def run(status: str = "open", limit: int = 20) -> ToolOutput:
    """
    Args:
        status: Job status filter (open, completed, all)
        limit: Maximum results to return
    """
    jobs = await servicem8_api.list_jobs(status=status, limit=limit)
    return ToolOutput(success=True, data={"jobs": jobs})

Place tools inside a skill directory:

skills/
└── servicem8/
    ├── SKILL.md              # Instructions for the agent
    └── tools/
        ├── list_jobs.py      # → servicem8.list_jobs
        ├── create_job.py     # → servicem8.create_job
        └── update_status.py  # → servicem8.update_status

Tools are auto-discovered and named {skill}.{file}. No decorators, no registration, no class hierarchies.
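The naming rule can be sketched in a few lines: a loader walks `skills/<skill>/tools/<file>.py` and derives the tool name from the path. This is an illustration of the convention, not the actual AlbusOS loader:

```python
from pathlib import Path


def tool_name(tool_path: Path) -> str:
    """Derive {skill}.{file} from a path like skills/<skill>/tools/<file>.py."""
    skill = tool_path.parent.parent.name  # skills/<skill>/tools/x.py -> <skill>
    return f"{skill}.{tool_path.stem}"


print(tool_name(Path("skills/servicem8/tools/list_jobs.py")))  # → servicem8.list_jobs
```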


Pathways

Pathways are composable state graphs. agent() uses the built-in tool-calling loop by default. PathwayBuilder lets you compose custom workflows when you need explicit control.

Node types

| Type | What it does |
|------|--------------|
| llm | LLM call with optional tool-calling loop |
| tool | Call any registered tool |
| conditional | Branch on a condition |
| transform | Evaluate an expression |
| handoff | Route to another agent |
| pathway | Nest a sub-pathway |
| checkpoint | Pause for human approval |
| code_execute | Run sandboxed Python |
| stage | Stateful workflow stage |
| loop | Iterate over a body |

Execution modes

| Mode | Behavior | Use when |
|------|----------|----------|
| dag (default) | Parallel, no cycles | Pipelines, fan-out/fan-in |
| stateful | Sequential, cycles OK | Conversations, human-in-the-loop |
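The practical difference is scheduling: in dag mode, nodes with no edge between them run concurrently; in stateful mode, nodes run one at a time. A toy timing comparison using plain asyncio (illustrative only, not the AlbusOS scheduler):

```python
import asyncio
import time


async def node(name: str, delay: float) -> str:
    """Stand-in for an LLM or tool call that takes `delay` seconds."""
    await asyncio.sleep(delay)
    return name


async def compare() -> tuple[float, float]:
    # dag-style: the two branch nodes share no edge, so they fan out together
    start = time.perf_counter()
    await asyncio.gather(node("escalate", 0.1), node("standard", 0.1))
    parallel = time.perf_counter() - start

    # stateful-style: strict sequential execution, one node at a time
    start = time.perf_counter()
    await node("escalate", 0.1)
    await node("standard", 0.1)
    sequential = time.perf_counter() - start
    return parallel, sequential


parallel, sequential = asyncio.run(compare())
print(f"dag-style: ~{parallel:.2f}s, stateful-style: ~{sequential:.2f}s")
```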

Composition

Pathways can nest inside other pathways, enabling modular workflow design:

research = PathwayBuilder("research", pathway_id="research")...build()
summarize = PathwayBuilder("summarize", pathway_id="summarize")...build()

pipeline = (
    PathwayBuilder("full", pathway_id="full")
    .sub_pathway("step1", "research")
    .sub_pathway("step2", "summarize")
    .connect("input", "step1")
    .connect("step1", "step2")
    .connect("step2", "output")
    .build()
)

Architecture

src/
├── albusos/           Public API
│   ├── agent()            One-call agent factory
│   ├── run()              Zero-wiring execution
│   └── Session            Multi-turn conversations
├── core/              Engine
│   ├── runner.py          Session, default pathway, wiring
│   ├── agent.py           Agent runtime + AgentRepository
│   ├── builders/          PathwayBuilder, AgentBuilder, SkillBuilder
│   ├── pathways/          VM, nodes, DAG/stateful schedulers
│   ├── llm/               Provider protocol + capability routing
│   ├── types/             Pydantic models
│   └── protocols/         Interfaces (PathwayVMLike, StateStoreLike)
├── stdlib/            Built-in capabilities
│   ├── llm/               Providers (OpenRouter, Ollama)
│   ├── primitives/        Tools (web, memory, workspace, shell, code)
│   └── bootstrap.py       load_stdlib()
└── infrastructure/    Sandbox, tool loader

Key imports

# Simple agents
from albusos import agent, run, Session

# Custom pathways
from albusos import PathwayBuilder, AgentBuilder, ToolOutput

# Types
from albusos import AgentDefinition, Pathway, ExecutionBudget, ExecutionResult

# Advanced (direct LLM access)
from core.llm import generate, get_provider
from core.llm.providers import ModelCapability, set_runtime_model_config

Built-in Tools

Loaded automatically by agent() and run():

| Tool | What it does |
|------|--------------|
| web.search | DuckDuckGo search |
| web.fetch | Fetch a URL |
| memory.get/set/search | Per-agent key-value memory |
| memory.shared_get/shared_set | Cross-agent shared memory |
| workspace.read_file/write_file/list_files | File I/O |
| shell.execute | Run shell commands |
| code.execute | Sandboxed Python execution |
| code.run_test | Run pytest tests |
| agent.turn/agent.list | Multi-agent orchestration |

Model Routing

Capability-based model selection -- swap models without changing agent code:

| Capability | Use for | Default |
|------------|---------|---------|
| fast | Quick tasks, routing | openai/gpt-4o-mini |
| reasoning | Complex thinking | openai/gpt-4o |
| code | Code generation | anthropic/claude-3.5-sonnet |
| vision | Image understanding | openai/gpt-4o |
| local | Offline/free | llama3.1:8b (Ollama) |

# Capability name (recommended) — portable across providers
model = "reasoning"
# Explicit model (when you need a specific one)
model = "openai/gpt-4o"

Override at runtime:

from core.llm.providers import set_runtime_model_config

set_runtime_model_config({"reasoning": "anthropic/claude-sonnet-4"})
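Conceptually, capability routing is a lookup from capability name to concrete model id, with runtime overrides taking precedence over defaults and explicit model ids passing through untouched. A sketch of the idea (not the core.llm implementation):

```python
DEFAULTS = {
    "fast": "openai/gpt-4o-mini",
    "reasoning": "openai/gpt-4o",
    "code": "anthropic/claude-3.5-sonnet",
}
_runtime: dict[str, str] = {}


def set_runtime_model_config(overrides: dict[str, str]) -> None:
    """Runtime overrides win over the default capability table."""
    _runtime.update(overrides)


def resolve_model(name: str) -> str:
    """Capability names route through the table; explicit model ids pass through."""
    if name in _runtime:
        return _runtime[name]
    return DEFAULTS.get(name, name)  # unknown names are treated as explicit ids


set_runtime_model_config({"reasoning": "anthropic/claude-sonnet-4"})
print(resolve_model("reasoning"))      # → anthropic/claude-sonnet-4
print(resolve_model("openai/gpt-4o"))  # → openai/gpt-4o
```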

License

MIT
