
The Kernel for Durable, Declarative AI Agents.


Universal Agent Architecture

A durable, event-driven runtime for sovereign AI agents.



What is UAA?

UAA is an Agent Operating System—a runtime kernel that provides the deterministic, durable execution layer for non-deterministic AI agents.

Most agent frameworks conflate what an agent does with how it runs. UAA separates these concerns:

| Layer | Responsibility | Analog |
| --- | --- | --- |
| Manifest | Declares graphs, tools, policies | Kubernetes YAML |
| Runtime | Executes state machines, persists checkpoints | Linux Kernel |
| Adapters | Translates to external systems | Device Drivers |

This separation means you can define an agent once and run it anywhere—in a local process, as an AWS Step Function, or inside a LangGraph application—without changing your core logic.

Provider-agnostic by design. The kernel has no opinion on which LLM you use. Plug in OpenAI, Anthropic, Bedrock, Ollama, or your own fine-tuned model. The runtime doesn't care; it just executes the graph.


The Five Pillars

UAA is built on five core abstractions. If you understand these, you understand the system.

| Abstraction | Role | Distributed Systems Analog |
| --- | --- | --- |
| Graph | Declarative state machine defining agent topology | Kubernetes Operator / Temporal Workflow |
| Task | Durable work unit with checkpoint persistence | Celery Job / systemd unit |
| Router | Decision layer for model and tool selection | API Gateway / Load Balancer |
| Tools | Protocol-agnostic capability interface | gRPC Service / Unix Tool |
| Observer | Structured telemetry for every state transition | OpenTelemetry / Prometheus |

Graph (The OS)

The graph is a directed state machine. Nodes are typed (router, tool, human), edges are conditional, and the entire structure is serializable. Execution can pause at any node and resume later—even on a different machine.
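For intuition, here is a minimal sketch of that idea in plain Python. The dataclass and field names mirror the manifest fields from the Quick Start below; they are illustrative, not the actual universal_agent.graph.model API.

# Illustrative sketch only; field names are assumptions, not UAA's real model.
from dataclasses import dataclass, field, asdict

@dataclass
class Node:
    id: str
    kind: str                    # "router" | "tool" | "human"

@dataclass
class Edge:
    from_node: str
    to_node: str
    trigger: str = "success"     # conditional transition

@dataclass
class Graph:
    name: str
    entry_node: str
    nodes: list[Node] = field(default_factory=list)
    edges: list[Edge] = field(default_factory=list)

    def to_dict(self) -> dict:
        # Serializable: the whole topology can be persisted or shipped to another machine.
        return asdict(self)

g = Graph(
    name="main",
    entry_node="router",
    nodes=[Node("router", "router"), Node("respond", "tool")],
    edges=[Edge("router", "respond")],
)
print(g.to_dict())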

Task (The Process)

A task is a running instance of a graph. It holds the execution pointer, the accumulated context, and the full step history. Tasks are persisted to a pluggable store (SQLite, Postgres, DynamoDB), enabling crash recovery and distributed execution.
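As a rough illustration of what gets persisted, a task record might carry something like the following. The key names are assumptions based on the description above, not the actual store schema.

# Hypothetical shape of a persisted task record; field names are assumptions.
task_record = {
    "task_id": "b1f6...",
    "graph": "main",
    "current_node": "respond",              # execution pointer
    "context": {"query": "Hello, world"},   # accumulated context
    "steps": [                              # full step history
        {"node": "router", "status": "success", "output": {"route": "respond"}},
    ],
    "status": "running",
}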

Router (The Brain)

Routers encapsulate decision-making. They hydrate prompts, select models, and determine which tools to expose. The router abstraction isolates your business logic from the mechanics of LLM invocation.
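A rough sketch of the idea follows; the class shape and the decide method name are assumptions, not the real Router interface.

# Sketch only: not the actual universal_agent.router API.
class PrimaryRouter:
    system_message = "You are a helpful assistant."

    def decide(self, context: dict) -> dict:
        # Hydrate the prompt from context, pick a model, expose tools.
        prompt = f"{self.system_message}\nUser: {context['query']}"
        return {
            "model": "gpt-4o",   # any provider; the kernel does not care
            "prompt": prompt,
            "tools": ["echo"],
        }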

Tools (The Hands)

Tools are capabilities. UAA supports multiple protocols out of the box:

  • MCP (Model Context Protocol) for inter-agent communication
  • HTTP for REST APIs
  • Local for Python functions (see the sketch after this list)
  • Subprocess for CLI tools
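For the Local protocol, a tool can be as small as a plain Python function. How the runtime discovers it (decorator, manifest lookup, entry point) is not shown here; this only illustrates the capability itself.

# A "local" tool matching the `echo` tool declared in the Quick Start manifest.
def echo(payload: dict) -> dict:
    """Returns the input unchanged."""
    return payload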

Observer (The Eyes)

Every step emits structured telemetry. The default sink is OpenTelemetry, producing distributed traces that flow to Jaeger, Honeycomb, or Datadog without code changes.
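A typical setup uses the standard OpenTelemetry SDK to export spans to a local OTLP endpoint, for example the Jaeger container from the Docker stack below, assuming it exposes OTLP on port 4317. Whether UAA reads the global tracer provider or carries its own wiring is an assumption here; check the observer configuration for specifics.

# Standard OpenTelemetry SDK setup pointing at a local OTLP endpoint.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "hello-agent"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)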


Architecture

flowchart TB
    subgraph Runtime["UAA Runtime"]
        API[REST API] --> Engine[Graph Engine]
        Engine --> |dispatch| Handlers{Node Handlers}
        Handlers --> RouterH[Router Handler]
        Handlers --> ToolH[Tool Handler]
        Handlers --> HumanH[Human Handler]
        
        RouterH --> LLM[LLM Client]
        ToolH --> Executor[Tool Executor]
        
        Engine --> |persist| Store[(Task Store)]
        Engine -.-> |emit| Observer[OTel Sink]
    end
    
    subgraph External["External Systems"]
        LLM --> OpenAI[OpenAI / Anthropic / Bedrock]
        Executor --> MCP[MCP Servers]
        Executor --> HTTP[HTTP APIs]
        Observer -.-> Jaeger[Jaeger / Honeycomb]
        Store --> DB[(Postgres / SQLite)]
    end

Ecosystem

UAA is the Kernel. It does not operate alone.

| Repository | Role | Metaphor |
| --- | --- | --- |
| universal_agent_architecture | Runtime execution, state management, durability | Linux Kernel |
| universal_agent_fabric | Roles, domains, policies → compiled manifests | Userland / distro |
| universal_agent_nexus | Adapters for AWS, LangGraph, MCP, Kubernetes | Network / drivers |

The Kernel provides primitives. The Fabric provides opinions. The Nexus provides portability.


Quick Start

Install

pip install universal-agent-arch

Define a Manifest

# manifest.yaml
name: "hello-agent"
version: "0.1.0"

graphs:
  - name: "main"
    entry_node: "router"
    nodes:
      - id: "router"
        kind: "router"
        router: { name: "primary" }
      - id: "respond"
        kind: "tool"
        tool: { name: "echo" }
    edges:
      - from_node: "router"
        to_node: "respond"
        condition: { trigger: "success" }

routers:
  - name: "primary"
    strategy: "llm"
    system_message: "You are a helpful assistant."

tools:
  - name: "echo"
    protocol: "local"
    description: "Returns the input unchanged."

Boot the Runtime

uvicorn universal_agent.runtime.api:app --reload

Trigger an Execution

curl -X POST "http://localhost:8000/graphs/main/executions" \
  -H "Content-Type: application/json" \
  -d '{"input": {"query": "Hello, world"}}'

The runtime will:

  1. Load the manifest
  2. Initialize the graph
  3. Execute the router node (LLM call)
  4. Transition to respond node (tool call)
  5. Persist the final state
  6. Emit telemetry
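The same trigger can be sent from Python with the requests library, against the endpoint shown above. The exact shape of the response body (e.g., an execution or task id) is an assumption to verify against your running instance.

# Same request as the curl above, from Python.
import requests

resp = requests.post(
    "http://localhost:8000/graphs/main/executions",
    json={"input": {"query": "Hello, world"}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())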

Why UAA?

The Problem with Chains

Most agent frameworks are built around sequential chains. Chains are simple, but they're also brittle:

  • No durability. If the process dies, you start over.
  • No observability. You get logs, maybe. Structured traces? Rarely.
  • No governance. Every agent is a snowflake with ad-hoc safety checks.

The Graph Alternative

UAA models agents as state machines, not call chains. This unlocks:

| Capability | How UAA Delivers It |
| --- | --- |
| Durability | Every state transition is checkpointed. Crash? Resume from the last successful step. |
| Human-in-the-Loop | Graphs can suspend at human nodes, await approval, and resume asynchronously. |
| Policy Enforcement | Governance rules are evaluated before tool execution, not after. |
| Distributed Tracing | Every node is a span. Context propagates across suspend/resume boundaries. |

Sovereignty

UAA is not a managed service. You own:

  • The Memory. Task state lives in your database.
  • The Policy. Governance rules are code you control.
  • The Compute. Run on your laptop, your cloud, your air-gapped datacenter.

Extension Points

The kernel is built on interfaces, not implementations. Every external dependency is injectable.

| Environment Variable | Interface | Default |
| --- | --- | --- |
| UAA_TASK_STORE | ITaskStore | SQLTaskStore |
| UAA_TASK_QUEUE | ITaskQueue | InMemoryTaskQueue |
| UAA_LLM_CLIENT | BaseLLMClient | MockLLMClient |
| UAA_TOOL_EXECUTOR_LOCAL | IToolExecutor | MockToolExecutor |
| UAA_TOOL_EXECUTOR_MCP | IToolExecutor | MockToolExecutor |

To swap implementations:

UAA_LLM_CLIENT=mycompany.adapters.AnthropicClient \
UAA_TASK_STORE=mycompany.adapters.DynamoTaskStore \
uvicorn universal_agent.runtime.api:app

The kernel remains unchanged. You only implement interfaces.
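As a hedged sketch, here is roughly what the mycompany.adapters.AnthropicClient referenced above could look like. The import path universal_agent.contracts and the complete method name and signature are assumptions; match the abstract methods actually declared in contracts.py.

# mycompany/adapters.py -- sketch of a swappable LLM client (method names assumed).
from anthropic import Anthropic
from universal_agent.contracts import BaseLLMClient  # assumed import path


class AnthropicClient(BaseLLMClient):
    def __init__(self) -> None:
        self._client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def complete(self, prompt: str, **kwargs) -> str:
        msg = self._client.messages.create(
            model=kwargs.get("model", "claude-3-5-sonnet-latest"),
            max_tokens=kwargs.get("max_tokens", 1024),
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text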


Directory Structure

universal_agent_architecture/
├── universal_agent/          # Core kernel
│   ├── graph/                # State machine engine
│   │   ├── engine.py         # Execution loop
│   │   ├── model.py          # Graph/Node/Edge structures
│   │   └── state.py          # GraphState, StepRecord
│   ├── task/                 # Durability layer
│   │   ├── store.py          # ITaskStore implementations
│   │   └── queue.py          # ITaskQueue implementations
│   ├── router/               # Decision layer
│   ├── tools/                # Capability registry
│   ├── policy/               # Governance engine
│   ├── memory/               # Context management
│   ├── observer/             # Telemetry sinks
│   ├── manifests/            # Schema and loader
│   ├── runtime/              # API and handlers
│   │   ├── api.py            # FastAPI endpoints
│   │   ├── handlers.py       # RouterHandler, ToolHandler
│   │   └── config.py         # DI configuration
│   └── contracts.py          # Public interfaces
├── adapters/                 # Protocol bridges (MCP, etc.)
├── tests/                    # Unit and conformance tests
└── infra/                    # Docker, Terraform

Running with Docker

# Start the full stack (API + Postgres + Jaeger)
docker-compose -f infra/docker-compose.yml up -d

# Access points:
# - API:    http://localhost:8000/docs
# - Jaeger: http://localhost:16686

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/improvement)
  3. Write tests for your changes
  4. Ensure CI passes (pytest tests/ -v)
  5. Submit a pull request

Please read CONTRIBUTING.md for code style and commit message conventions.


License

MIT License. See LICENSE for details.


Built for engineers who ship agents to production.
