AlbusOS - AI agent runtime with pathway-based execution and composable skills

AlbusOS

AI agent runtime framework. Install it, bring your own domain.

pip install albusos

What is AlbusOS?

AlbusOS is a domain-agnostic agent execution framework. It provides the runtime, pathway VM, and transport layer. You provide the domain content — agents, skills, tools, and pathways specific to your problem.

AlbusOS (the framework)                Your repo (the domain)
├── core/        Pathways VM, nodes    ├── skills/      SKILL.md + tools/
├── stdlib/      LLM routing, tools    ├── pathways/    Workflow YAMLs
├── server/      HTTP, CLI, loaders    └── albus.yaml   Deployment config
├── infrastructure/  Sandbox, MCP
└── persistence/ State store

AlbusOS handles: the runtime, pathway VM, LLM provider routing, core tools (memory, web, workspace, shell, code execution), observability, state management, and HTTP/WebSocket/CLI transport.

Your repo handles: Domain-specific agents, skills, tools, and pathway workflows.


Quick Start

Requires Python 3.13+

From scratch

mkdir my-agent-service && cd my-agent-service
pip install albusos
albus init

# Add your API key
cp env.example .env
# Edit .env: OPENROUTER_API_KEY=sk-or-...

# Try it
albus chat "Hello, what can you do?"

From an existing Python codebase

If you already have a Python service (FastAPI, MCP server, CLI tools), scan it to auto-generate skills:

pip install albusos
albus init --scan /path/to/your/python/code

This reads your codebase via AST (no execution), detects tool-like functions, and generates skills/<namespace>/tools/*.py wrappers plus an albus.yaml with all discovered tools wired up.
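The "AST, no execution" approach means the scanner never imports your code. A toy sketch of the idea (illustrative only, not albusos's actual scanner), using Python's stdlib ast module to pick out public top-level functions with docstrings:

```python
import ast

SOURCE = '''
def list_jobs(status: str = "open"):
    """Return jobs filtered by status."""
    ...

def _helper():
    pass
'''


def discover_tool_like_functions(source: str) -> list[str]:
    """Find public top-level functions with docstrings, without executing the code."""
    tree = ast.parse(source)
    return [
        node.name
        for node in tree.body
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        and not node.name.startswith("_")
        and ast.get_docstring(node)
    ]


print(discover_tool_like_functions(SOURCE))  # → ['list_jobs']
```

Private helpers and undocumented functions are skipped; the survivors are candidates for generated tool wrappers.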


Development Workflow

AlbusOS follows a design → develop → deploy workflow.

1. Design — scan and discover

albus init --scan /path/to/client/code    # Generate skills from existing code
albus list tools                          # See what was discovered

2. Develop — build with the fluent API

The fluent API is the primary authoring experience. Write pathways and agents in Python, test them, iterate.

from albusos import PathwayBuilder, AgentBuilder, ToolOutput

# Build a pathway — validated at construction time
dispatch = (
    PathwayBuilder("dispatch", pathway_id="dispatch")
    .stateful()
    .tool("fetch_jobs", "servicem8.list_jobs")
    .tool("check_schedule", "servicem8.check_schedule")
    .llm("rank", "Rank technicians by fit for: {{fetch_jobs.output}}")
    .connect("input", "fetch_jobs")
    .connect("fetch_jobs", "check_schedule")
    .connect("check_schedule", "rank")
    .connect("rank", "output")
    .build()
)

# Build an agent
dispatch_agent = (
    AgentBuilder()
    .id("dispatch")
    .name("Dispatch Agent")
    .instructions("Schedule technicians to jobs based on skills and availability.")
    .tool("servicem8.*")
    .tool("memory.*")
    .pathway("dispatch")
    .model("reasoning")
    .build()
)

3. Deploy — compile to YAML, ship

# Export pathway to YAML for deployment
dispatch.to_yaml()  # → pathways/dispatch.yaml

The client receives albus.yaml + skills/ + pathways/ and runs:

albus server    # HTTP + WebSocket server, ready to go

Writing Tools

Each tool is a single Python file with an async def run() function:

"""Discover available ServiceM8 resources."""

from albusos import ToolOutput


async def run(context=None) -> ToolOutput:
    """List all available API resources.

    Args: (none)
    """
    resources = await fetch_resources()  # your domain-specific API call
    return ToolOutput(success=True, data={"resources": resources})

Place tools inside a skill directory:

skills/
└── servicem8/
    ├── SKILL.md              # Instructions for the agent
    └── tools/
        ├── discovery.py      # → servicem8.discovery
        ├── crud.py           # → servicem8.crud
        └── triage.py         # → servicem8.triage

Tools are auto-discovered and named {skill}.{file}. No decorators, no registration, no class hierarchies.
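The {skill}.{file} convention is just a path mapping. A self-contained sketch of the rule (illustrative, not the framework's loader):

```python
from pathlib import Path


def tool_name(tool_path: str) -> str:
    """Map skills/<skill>/tools/<file>.py to '<skill>.<file>'."""
    p = Path(tool_path)
    skill = p.parts[p.parts.index("skills") + 1]
    return f"{skill}.{p.stem}"


print(tool_name("skills/servicem8/tools/discovery.py"))  # → servicem8.discovery
```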


Configuring Agents

In albus.yaml (the deployment config):

agents:
  - id: host
    name: ServiceM8 Assistant
    tools: ["servicem8.*", "memory.*", "web.*"]
    pathway: react
    max_steps: 10
    model: reasoning

Or programmatically via the fluent API (see Development Workflow above).


Pathways

Pathways are the composition language — DAGs or stateful graphs of nodes. You write pathways for your domain workflows. That's the product.

Building pathways

Use the fluent API for authoring:

from albusos import PathwayBuilder

pipeline = (
    PathwayBuilder("research")
    .tool("search", "web.search", args={"query": "{{input.query}}"})
    .llm("summarize", "Summarize: {{search.output}}", model="reasoning")
    .connect("input", "search")
    .connect("search", "summarize")
    .connect("summarize", "output")
    .build()
)

Or write YAML directly for deployment:

# pathways/dispatch.yaml
name: dispatch
mode: stateful
nodes:
  - id: fetch_jobs
    type: tool
    config: { tool: servicem8.list_jobs }
  - id: rank_candidates
    type: llm
    config: { prompt: "Rank technicians by fit..." }
  - id: propose
    type: checkpoint
    config: { require_approval: true }
  - id: assign
    type: tool
    config: { tool: servicem8.assign_job }
connections:
  - { from: fetch_jobs, to: rank_candidates }
  - { from: rank_candidates, to: propose }
  - { from: propose, to: assign }
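The connections list defines a directed graph, and a runtime can derive node execution order from it with a topological sort. A self-contained sketch over the connections above (not AlbusOS's VM), using Python's stdlib graphlib:

```python
from graphlib import TopologicalSorter

connections = [
    ("fetch_jobs", "rank_candidates"),
    ("rank_candidates", "propose"),
    ("propose", "assign"),
]

# graphlib expects a mapping of node -> set of predecessors
graph: dict[str, set[str]] = {}
for src, dst in connections:
    graph.setdefault(src, set())
    graph.setdefault(dst, set()).add(src)

order = list(TopologicalSorter(graph).static_order())
print(order)  # → ['fetch_jobs', 'rank_candidates', 'propose', 'assign']
```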

Node types

Type         Purpose                   Example
input        Declare pathway inputs    {"type": "input", "config": {"input_schema": {...}}}
output       Map pathway outputs       {"type": "output", "config": {"mapping": {...}}}
tool         Call any registered tool  {"type": "tool", "config": {"tool": "web.search", "args": {...}}}
llm          Direct LLM call           {"type": "llm", "config": {"prompt": "..."}}
agent        Agent-in-a-box            {"type": "agent", "config": {"goal": "...", "tools": [...]}}
pathway      Nested sub-pathway        {"type": "pathway", "config": {"pathway_id": "research"}}
checkpoint   Human-in-the-loop         {"type": "checkpoint", "config": {"require_approval": true}}
transform    Expression evaluation     {"type": "transform", "config": {"expr": "a + b"}}
conditional  Branching logic           {"type": "conditional", "config": {"condition": "..."}}
handoff      Route to another agent    {"type": "handoff", "config": {"target_agent": "..."}}
stage        Group nodes into phases   {"type": "stage", "config": {"name": "planning"}}
loop         Iterate over items        {"type": "loop", "config": {"max_iterations": 5}}

Execution modes

Mode            Behavior               Use when
dag (default)   Parallel, no cycles    Data pipelines, fan-out/fan-in
stateful        Sequential, cycles OK  Conversations, human-in-the-loop
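The distinction matters for latency: in dag mode, nodes with no edge between them can run concurrently. A toy asyncio illustration of fan-out/fan-in (generic Python, not AlbusOS code):

```python
import asyncio


async def node(name: str, delay: float) -> str:
    """Stand-in for a pathway node that does some async work."""
    await asyncio.sleep(delay)
    return name


async def fan_out_fan_in():
    # dag mode: 'search' and 'fetch_schedule' are independent,
    # so they execute in parallel; 'merge' waits on both results.
    results = await asyncio.gather(node("search", 0.01), node("fetch_schedule", 0.01))
    return {"merge": results}


print(asyncio.run(fan_out_fan_in()))  # → {'merge': ['search', 'fetch_schedule']}
```

In stateful mode the same two nodes would run one after the other, which is what you want when later steps (or a human at a checkpoint) can change the course of the run.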

Architecture

Package structure

src/
├── albusos/           Public API (from albusos import ...)
├── core/              Pathways, nodes, protocols, types, state
│   ├── pathways/          Pathway VM, nodes, session, store
│   ├── builders/          Fluent builders (PathwayBuilder, AgentBuilder, SkillBuilder)
│   ├── types/             Pydantic models (AgentDefinition, ExecutionBudget, etc.)
│   ├── protocols/         Abstract interfaces (PathwayVMLike, StateStoreLike)
│   ├── registry.py        Unified Registry (ToolRegistry + SkillRegistry)
│   └── agent.py           Agent (runtime entity) + AgentRepository
├── stdlib/            Standard library
│   ├── llm/               LLM providers (OpenAI, OpenRouter, Ollama)
│   ├── primitives/        Built-in tools (loaded via load_stdlib())
│   │   ├── tools/         web, workspace, memory, shell
│   │   ├── orchestration/ agent.turn, agent.list
│   │   └── code/          code.execute, code.run_test
│   └── bootstrap.py       load_stdlib() — registers primitives into ToolRegistry
├── server/            Server infrastructure
│   ├── runtime.py         Runtime — server factory, turn execution, component wiring
│   ├── config/            Deployment config, env resolution
│   ├── loaders/           Load agents, skills, tools, pathways from disk
│   └── transport/         HTTP server, WebSocket (studio), A2A, CLI
├── infrastructure/    Sandbox (Python runner), MCP client, tool loader
└── persistence/       File-based state store

Key imports

# Development API (authoring agents, pathways, tools)
from albusos import PathwayBuilder, AgentBuilder, SkillBuilder
from albusos import AgentDefinition, Pathway, PathwayMode
from albusos import ToolOutput, ToolRegistry, get_tool_registry, load_stdlib

# Runtime API (executing agents and pathways)
from albusos import chat, run, create_server
from albusos import ExecutionResult, ExecutionBudget, ThreadState

# Direct module access (when you need internals)
from server.runtime import Runtime
from core.builders import PathwayBuilder, AgentBuilder, SkillBuilder
from stdlib.llm.routing import get_model_for_capability, ModelCapability

Model Routing

Capability-based model selection via config/models.yaml or runtime config:

Capability   Use for               Default
fast         Quick tasks, routing  openai/gpt-4o-mini
reasoning    Complex thinking      anthropic/claude-sonnet-4
code         Code generation       anthropic/claude-sonnet-4
vision       Image understanding   openai/gpt-4o
local        Offline/free          llama3.1:8b (Ollama)

# In agent configs or pathway nodes
model = "reasoning"        # Capability name (recommended)
model = "openai/gpt-4o"    # Explicit model (OpenRouter format)
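Resolution amounts to a table lookup with pass-through for explicit model ids. A sketch using the defaults above (illustrative; not the real routing code in stdlib/llm/routing):

```python
CAPABILITY_MODELS = {
    "fast": "openai/gpt-4o-mini",
    "reasoning": "anthropic/claude-sonnet-4",
    "code": "anthropic/claude-sonnet-4",
    "vision": "openai/gpt-4o",
    "local": "llama3.1:8b",
}


def resolve_model(model: str) -> str:
    """Capability names resolve via the table; explicit ids pass through unchanged."""
    if "/" in model or ":" in model:  # looks like an explicit model id
        return model
    return CAPABILITY_MODELS[model]


print(resolve_model("reasoning"))      # → anthropic/claude-sonnet-4
print(resolve_model("openai/gpt-4o"))  # → openai/gpt-4o
```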

CLI

albus init                          # Initialize workspace
albus init --scan /path/to/code     # Scan codebase, generate skills
albus chat                          # Interactive agent chat
albus chat "do something"           # One-shot agent turn
albus run pathway.yaml              # Execute a pathway
albus server                        # Start HTTP + WebSocket server
albus list tools                    # List registered tools
albus list skills                   # List loaded skills
albus pathway validate file.yaml    # Validate pathway structure
albus pathway new my-workflow       # Scaffold a new pathway
albus tool run web.search --query "test"   # Run a tool directly

API Endpoints

Endpoint                             Description
GET  /api/v1/health                  Health check
GET  /api/v1/tools                   List tools
POST /api/v1/tools/{name}            Execute a tool
GET  /api/v1/agents                  List agents
POST /api/v1/agents/{id}/turn        Run an agent turn
GET  /api/v1/pathways                List pathways
GET  /api/v1/pathways/{id}           Get pathway details
POST /api/v1/pathways/{id}/run       Run a pathway
GET  /api/v1/pathways/{id}/export    Export pathway JSON
GET  /api/v1/pathways/{id}/graph     Get the pathway graph structure
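A minimal stdlib-only client sketch against the agent-turn endpoint. The host/port and the request body shape are assumptions (the docs above only specify the paths); check the running server for the actual payload schema:

```python
import json
import urllib.request

BASE = "http://localhost:8000/api/v1"  # host/port are assumptions


def build_turn_request(agent_id: str, message: str) -> urllib.request.Request:
    """Build (but don't send) a POST to /agents/{id}/turn."""
    body = json.dumps({"message": message}).encode()  # payload shape is an assumption
    return urllib.request.Request(
        f"{BASE}/agents/{agent_id}/turn",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_turn_request("host", "Hello")
print(req.method, req.full_url)  # → POST http://localhost:8000/api/v1/agents/host/turn
```

Sending it is then `urllib.request.urlopen(req)` against a running `albus server`.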

License

MIT
