AlbusOS - Framework for building multi-agent systems with pathway-based execution

Python framework for building agentic workflows as composable state graphs.

pip install albusos

Quick Start

Requires Python 3.13+

pip install albusos
export OPENROUTER_API_KEY="..."   # or OPENAI_API_KEY, or run Ollama locally

Simple agent (LLM + tools loop)

import asyncio
from albusos import agent, run

researcher = agent(
    "researcher",
    instructions="Research topics and provide concise summaries.",
    tools=["web.*", "memory.*"],
)

async def main():
    result = await run(researcher, "What is quantum computing?")
    print(result.response)

asyncio.run(main())

agent() auto-loads tools and LLM providers. run() wires the engine internally. For most single-agent use cases, this is all you need.

Multi-turn conversations

import asyncio
from albusos import agent, Session

researcher = agent("researcher", instructions="Research topics.", tools=["web.*"])

async def main():
    session = Session(researcher)
    r1 = await session.run("What is quantum computing?")
    r2 = await session.run("Tell me more about qubits specifically")
    print(r2.response)  # Full conversation context

asyncio.run(main())
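Conceptually, a session just threads prior turns back into each call so the model sees the full conversation. A toy illustration of that bookkeeping (not AlbusOS's actual `Session`; the echo reply stands in for an LLM call):

```python
from dataclasses import dataclass, field

@dataclass
class ToySession:
    """Toy session: accumulates turns so each call sees full context."""
    history: list = field(default_factory=list)

    def run(self, user_message: str) -> str:
        self.history.append({"role": "user", "content": user_message})
        # A real session would send self.history to the LLM here;
        # we echo the turn count instead.
        turns = len([m for m in self.history if m["role"] == "user"])
        reply = f"turn {turns}"
        self.history.append({"role": "assistant", "content": reply})
        return reply

s = ToySession()
s.run("What is quantum computing?")
print(s.run("Tell me more about qubits"))  # → turn 2 (both user turns retained)
```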

Custom pathways (where the real power is)

When you need explicit multi-step workflows -- branching, chaining tools, routing between agents -- you compose them as executable graphs using PathwayBuilder:

import asyncio
from albusos import PathwayBuilder, AgentBuilder, run

# A triage workflow: lookup → classify → branch → act
triage = (
    PathwayBuilder("triage", pathway_id="triage")
    .tool("lookup", "servicem8.search_customer", args={"query": "{{input.goal}}"})
    .llm("classify", "Classify urgency based on: {{lookup.output}}", model="fast")
    .conditional("check", "{{classify.output.urgency}} == 'high'", "escalate", "standard")
    .llm("escalate", "Create urgent job: {{input.goal}}", tools=["servicem8.*"])
    .llm("standard", "Create standard job: {{input.goal}}", tools=["servicem8.*"])
    .connect("input", "lookup")
    .connect("lookup", "classify")
    .connect("classify", "check")
    .connect("check", "escalate")
    .connect("check", "standard")
    .connect("escalate", "output")
    .connect("standard", "output")
    .build()
)

agent_def = AgentBuilder().id("dispatch").pathway("triage").tool("servicem8.*").build()

async def main():
    result = await run(agent_def, "Toilet overflow at 42 Smith St", pathway=triage)
    print(result.response)

asyncio.run(main())

The pathway gets: parallel execution, timeouts, execution budgets, observability, and the ability to nest inside other pathways -- for free. You declare the workflow; the VM handles the execution.
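Per-node timeouts of the kind the VM applies can be sketched with plain asyncio (illustrative only, not the VM's implementation; a real engine would record the failure and route accordingly rather than return None):

```python
import asyncio

async def run_with_timeout(coro, seconds: float):
    """Cancel a node's coroutine if it exceeds its per-node timeout."""
    try:
        return await asyncio.wait_for(coro, timeout=seconds)
    except asyncio.TimeoutError:
        return None  # stand-in for the VM's failure handling

async def slow_node():
    await asyncio.sleep(10)
    return "done"

async def fast_node():
    return "done"

async def main():
    print(await run_with_timeout(fast_node(), 1.0))   # → done
    print(await run_with_timeout(slow_node(), 0.05))  # → None (timed out)

asyncio.run(main())
```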

Loading custom tools

from albusos import load_tools, load_skill

# Load a directory of tool scripts (each .py with async def run())
load_tools("skills/servicem8/tools", namespace="servicem8")

# Or load a full skill (SKILL.md + tools/ + auto-registration)
load_skill("skills/servicem8")

What is AlbusOS?

AlbusOS gives you three things:

  1. Simple agents -- agent() + run() for LLM-with-tools. The on-ramp.
  2. Composable workflows -- PathwayBuilder for multi-step agentic state graphs. The main event.
  3. Multi-agent orchestration -- agent.turn and agent.list for routing between specialized agents.

albusos (the framework)                 Your repo (the product)
├── core/           Pathway VM, nodes   ├── skills/       SKILL.md + tools/
├── stdlib/         LLM routing, tools  ├── agents.py     Agent definitions
└── infrastructure/ Sandbox, tools      └── app.py        Your transport (FastAPI, etc.)

AlbusOS handles: Execution engine, LLM routing, tool registry, built-in tools, observability, state management, pathway composition.

Your repo handles: Domain tools, agent configs, workflows, and transport.


Writing Tools

Each tool is a single Python file with an async def run() function:

"""Search for ServiceM8 jobs by status."""

from albusos import ToolOutput


async def run(status: str = "open", limit: int = 20) -> ToolOutput:
    """
    Args:
        status: Job status filter (open, completed, all)
        limit: Maximum results to return
    """
    # servicem8_api: your own API client module (not provided by AlbusOS)
    jobs = await servicem8_api.list_jobs(status=status, limit=limit)
    return ToolOutput(success=True, data={"jobs": jobs})

Place tools inside a skill directory:

skills/
└── servicem8/
    ├── SKILL.md              # Instructions for the agent
    └── tools/
        ├── list_jobs.py      # → servicem8.list_jobs
        ├── create_job.py     # → servicem8.create_job
        └── update_status.py  # → servicem8.update_status

Tools are auto-discovered and named {skill}.{file}. No decorators, no registration, no class hierarchies.
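The naming convention is mechanical. A sketch of how {skill}.{file} discovery could work (illustrative only, not the actual loader):

```python
from pathlib import Path
import tempfile

def discover_tool_names(skill_dir: Path) -> list[str]:
    """Map each tools/*.py file to a '{skill}.{file}' tool name."""
    skill = skill_dir.name
    return sorted(f"{skill}.{p.stem}" for p in (skill_dir / "tools").glob("*.py"))

# Build a throwaway skill layout to demonstrate the mapping.
with tempfile.TemporaryDirectory() as tmp:
    tools = Path(tmp) / "servicem8" / "tools"
    tools.mkdir(parents=True)
    for name in ("list_jobs", "create_job"):
        (tools / f"{name}.py").touch()
    print(discover_tool_names(Path(tmp) / "servicem8"))
    # → ['servicem8.create_job', 'servicem8.list_jobs']
```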


Pathways

Pathways are composable state graphs. agent() uses the built-in tool-calling loop by default. PathwayBuilder lets you compose custom workflows when you need explicit control.

Node types

Type          Builder method     What it does
input         .input()           Declare pathway inputs with schema
output        .output()          Map pathway outputs from upstream nodes
llm           .llm()             LLM call with optional tool-calling loop
tool          .tool()            Call any registered tool
conditional   .conditional()     Branch on a condition (if/else routing)
transform     .transform()       Evaluate a safe expression
pathway       .sub_pathway()     Nest a sub-pathway (composition)
code_execute  .code_execute()    Run sandboxed Python code
loop          .loop_node()       Iterate body nodes until condition met
stage         .stage()           Stateful workflow stage with transitions
checkpoint    .checkpoint()      Pause for human approval / persistence

Execution modes

Mode           Behavior               Use when
dag (default)  Parallel, no cycles    Pipelines, fan-out/fan-in
stateful       Sequential, cycles OK  Conversations, human-in-the-loop
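In dag mode, every node whose inputs are ready can run concurrently. Python's standard-library graphlib shows the idea (illustrative only, not the scheduler itself; the graph below mirrors the triage example, mapping each node to its predecessors):

```python
from graphlib import TopologicalSorter

# Fan-out/fan-in: 'classify' feeds two branches that join at 'output'.
graph = {
    "classify": {"lookup"},
    "escalate": {"classify"},
    "standard": {"classify"},
    "output": {"escalate", "standard"},
}

ts = TopologicalSorter(graph)
ts.prepare()
batches = []
while ts.is_active():
    ready = sorted(ts.get_ready())   # everything in `ready` could run in parallel
    batches.append(ready)
    ts.done(*ready)

print(batches)
# → [['lookup'], ['classify'], ['escalate', 'standard'], ['output']]
```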

Template expressions

Reference upstream node outputs anywhere with {{node_id.output}} or {{node_id.output.field}}:

.llm("summarize", "Summarize: {{search.output.results}}")
.tool("fetch", "web.fetch", args={"url": "{{input.url}}"})
.conditional("check", "{{classify.output.urgent}} == true", "fast_path", "slow_path")
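Resolution is just a path lookup into upstream node outputs. A minimal sketch of the {{...}} substitution (illustrative only, not the engine's resolver):

```python
import re
from functools import reduce

def resolve(template: str, state: dict) -> str:
    """Replace {{node.path.to.field}} with the value found in `state`."""
    def lookup(match: re.Match) -> str:
        path = match.group(1).split(".")
        return str(reduce(lambda obj, key: obj[key], path, state))
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", lookup, template)

state = {"classify": {"output": {"urgency": "high"}}}
print(resolve("Urgency is {{classify.output.urgency}}", state))
# → Urgency is high
```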

Composition

Pathways can nest inside other pathways, enabling modular workflow design:

research = PathwayBuilder("research", pathway_id="research").llm("r", "...").build()
summarize = PathwayBuilder("summarize", pathway_id="summarize").llm("s", "...").build()

pipeline = (
    PathwayBuilder("full", pathway_id="full")
    .sub_pathway("step1", research)
    .sub_pathway("step2", summarize)
    .connect("input", "step1")
    .connect("step1", "step2")
    .connect("step2", "output")
    .build()
)

Architecture

src/
├── albusos/           Public API (start here)
│   ├── agent()            One-call agent factory
│   ├── run()              Zero-wiring execution
│   ├── Session            Multi-turn conversations
│   ├── load_tools()       Load custom tool scripts
│   ├── load_skill()       Load a full skill directory
│   └── load_workspace()   Convention-based project discovery
├── core/              Engine (framework internals)
│   ├── runner.py          Session, default pathway, wiring
│   ├── agent.py           Agent runtime + AgentRepository
│   ├── config.py          Pydantic Settings (env vars, .env)
│   ├── builders/          PathwayBuilder, AgentBuilder, SkillBuilder
│   ├── pathways/          VM, nodes, DAG/stateful schedulers
│   ├── llm/               Provider protocol + capability routing + retry
│   ├── types/             Pydantic models (AgentDefinition, etc.)
│   └── protocols/         Interfaces (PathwayVMLike, StateStoreLike)
├── stdlib/            Built-in capabilities
│   ├── primitives/        Tools (web, memory, workspace, shell, code)
│   └── bootstrap.py       load_stdlib() — auto-loads tools + providers
└── infrastructure/    Sandbox, tool loader

Layering rules

  • core/ has zero imports from stdlib/ or albusos/
  • stdlib/ imports from core/ only
  • infrastructure/ imports from core/ only
  • albusos/ imports from core/ and stdlib/
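Rules like these can be checked mechanically. A sketch using the ast module to flag forbidden imports per layer (illustrative only; a real check would walk the source tree):

```python
import ast

# Top-level packages each layer must not import, per the rules above.
FORBIDDEN = {"core": ("stdlib", "albusos"), "stdlib": ("albusos", "infrastructure")}

def violations(layer: str, source: str) -> list[str]:
    """Return imported module names that `layer` is not allowed to touch."""
    banned = FORBIDDEN.get(layer, ())
    bad = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            bad += [a.name for a in node.names if a.name.split(".")[0] in banned]
        elif isinstance(node, ast.ImportFrom) and node.module:
            if node.module.split(".")[0] in banned:
                bad.append(node.module)
    return bad

print(violations("core", "from stdlib.primitives import web\nimport json"))
# → ['stdlib.primitives']
```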

Key imports

# Simple agents
from albusos import agent, run, Session

# Custom pathways
from albusos import PathwayBuilder, AgentBuilder, ToolOutput

# Load custom tools / skills
from albusos import load_tools, load_skill, load_workspace

# Types
from albusos import AgentDefinition, Pathway, PathwayMode, ExecutionBudget, ExecutionResult

# Advanced (direct LLM access)
from core.llm import generate, get_provider
from core.llm.providers import ModelCapability, set_runtime_model_config

Built-in Tools

Loaded automatically by agent() and run():

Tool What it does
web.search DuckDuckGo search
web.fetch Fetch a URL (with HTTP error handling)
memory.get / memory.set / memory.search Per-agent key-value memory
memory.shared_get / memory.shared_set Cross-agent shared memory (atomic writes)
workspace.read_file / workspace.write_file / workspace.list_files File I/O
shell.execute Run shell commands
code.execute Sandboxed Python execution
code.run_test Run pytest tests
agent.turn / agent.list Multi-agent orchestration

Model Routing

Capability-based model selection -- swap models without changing agent code:

Capability  Use for               Default
fast        Quick tasks, routing  openai/gpt-4o-mini
reasoning   Complex thinking      openai/gpt-4o
code        Code generation       anthropic/claude-3.5-sonnet
vision      Image understanding   openai/gpt-4o
local       Offline/free          llama3.1:8b (Ollama)

# Capability name (recommended) — portable across providers
agent("a", model="reasoning")

# Explicit model (when you need a specific one)
agent("a", model="openai/gpt-4o")
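Capability routing reduces to a lookup with environment overrides. An illustrative sketch (not AlbusOS internals; defaults and env var names taken from the table above):

```python
import os

DEFAULTS = {
    "fast": "openai/gpt-4o-mini",
    "reasoning": "openai/gpt-4o",
    "code": "anthropic/claude-3.5-sonnet",
}

def resolve_model(name: str) -> str:
    """Capability names map through env overrides; anything else is a literal model id."""
    if name in DEFAULTS:
        return os.environ.get(f"ALBUS_MODEL_{name.upper()}", DEFAULTS[name])
    return name  # explicit model id, e.g. "openai/gpt-4o"

os.environ["ALBUS_MODEL_REASONING"] = "anthropic/claude-sonnet-4"
print(resolve_model("reasoning"))        # → anthropic/claude-sonnet-4 (env override)
print(resolve_model("openai/gpt-4o"))    # → openai/gpt-4o (passed through)
```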

Override at runtime via environment or code:

# Environment variables
export ALBUS_MODEL_FAST="anthropic/claude-haiku"
export ALBUS_MODEL_REASONING="anthropic/claude-sonnet-4"

# Runtime code
from core.llm.providers import set_runtime_model_config
set_runtime_model_config({"reasoning": "anthropic/claude-sonnet-4"})

Configuration

AlbusOS uses Pydantic Settings for centralized config. All env vars are read from the environment and .env automatically.

Variable                    Purpose                                     Default
OPENROUTER_API_KEY          OpenRouter API key (200+ models)
OPENAI_API_KEY              Direct OpenAI access (bypasses OpenRouter)
OLLAMA_HOST                 Ollama server URL                           http://localhost:11434
ALBUS_MODEL_FAST            Override fast model                         openai/gpt-4o-mini
ALBUS_MODEL_REASONING       Override reasoning model                    openai/gpt-4o
ALBUS_MODEL_CODE            Override code model                         anthropic/claude-3.5-sonnet
ALBUS_LLM_MAX_RETRIES       LLM retry count (0-10)                      3
ALBUS_LLM_RETRY_BASE_DELAY  Retry base delay (seconds)                  1.0

See env.example for a complete template.
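The precedence (environment variable over documented default) can be sketched without Pydantic (illustrative only; the real Settings class also reads .env and validates types):

```python
import os
from dataclasses import dataclass, field

def env(name: str, default: str) -> str:
    """Environment wins over the documented default."""
    return os.environ.get(name, default)

@dataclass
class ToySettings:
    ollama_host: str = field(
        default_factory=lambda: env("OLLAMA_HOST", "http://localhost:11434"))
    llm_max_retries: int = field(
        default_factory=lambda: int(env("ALBUS_LLM_MAX_RETRIES", "3")))

os.environ["ALBUS_LLM_MAX_RETRIES"] = "5"
s = ToySettings()
print(s.llm_max_retries)  # → 5 (env override)
print(s.ollama_host)      # default unless OLLAMA_HOST is set
```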


License

MIT
