ARIA - Adaptive Runtime Intelligence Architecture. Cognitive CLI with live folder-based messaging.
The cognitive CLI for self-narrating systems
What is ARIA?
ARIA is the cognitive layer that makes systems explain themselves.
It connects to live runtimes, observes their state, and generates natural language explanations of what's happening and why. ARIA turns opaque systems into self-narrating organisms.
┌─────────────────────────────────────────────────────────────┐
│ ARIA │
│ │
│ Observe ───► Predict ───► Act ───► Explain │
│ │
│ "I see Trinity cycling at 847 events/sec. │
│ The Waterwheel is stabilizing incoming data. │
│ ARIA predicts load will spike in 3 minutes." │
│ │
└─────────────────────────────────────────────────────────────┘
Installation
Minimal (CLI only)
pip install aria-cli
With LLM support (local inference)
pip install aria-cli[llm]
With server (REST API + WebSocket)
pip install aria-cli[server]
Full installation (everything)
pip install aria-cli[full]
Quick Start
1. Download a brain
aria brain download tinyllama
2. Start the cognitive server
aria serve --brain tinyllama
3. Explain a system
aria explain --snapshot system-state.json
4. Run a guided tour
aria tour run cognitive-loop.json
5. Record a session
aria session start --name "my-session"
aria session stop
aria session list
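The file handed to `aria explain --snapshot` is a WorldSnapshot serialized as JSON. A minimal hand-written example, assuming the top-level keys follow the Explainer Contract's nodes/flows/metrics/focus structure (the exact schema may require more fields):

```python
import json

# Hypothetical minimal WorldSnapshot; key names mirror the
# Explainer Contract (nodes, flows, metrics, focus), but the
# real schema may differ.
snapshot = {
    "nodes": [
        {"id": "LenixTrinityEngine", "kind": "engine"},
        {"id": "WaterwheelHub", "kind": "hub"},
    ],
    "flows": [
        {"source": "WaterwheelHub", "target": "LenixTrinityEngine"},
    ],
    "metrics": {"events_per_sec": 847},
    "focus": ["LenixTrinityEngine"],
}

# Write it where `aria explain --snapshot system-state.json` expects it.
with open("system-state.json", "w") as f:
    json.dump(snapshot, f, indent=2)
```

A snapshot like this is what the Quick Start's `aria explain --snapshot system-state.json` step would consume.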
Commands
aria brain
Manage LLM brains for cognition.
aria brain list # Show available brains
aria brain download tinyllama # Download TinyLlama (638 MB)
aria brain download phi2 # Download Phi-2 (1.7 GB)
aria brain download qwen2 # Download Qwen2 1.5B (940 MB)
aria brain download llama3 # Download Llama 3.2 1B (770 MB)
aria brain info tinyllama # Show brain details
aria brain benchmark # Benchmark all brains
aria explain
Generate explanations from system state.
aria explain --snapshot state.json # Explain from file
aria explain --url http://localhost:8080 # Explain from live endpoint
aria explain --stdin # Explain from stdin
aria explain --focus node-123 # Focus on specific node
aria explain --style technical # Technical explanation
aria explain --style narrative # Narrative explanation
aria serve
Start the cognitive server.
aria serve # Start with defaults
aria serve --brain phi2 # Use specific brain
aria serve --port 8080 # Custom port
aria serve --host 0.0.0.0 # Bind to all interfaces
aria serve --reload # Auto-reload on changes
aria tour
Run guided cognitive tours.
aria tour list # List available tours
aria tour run cognitive-loop.json # Run a tour
aria tour validate tour.json # Validate tour file
aria tour create --name "My Tour" # Interactive tour creation
aria session
Record cognitive sessions.
aria session start --name "debug-session" # Start recording
aria session stop # Stop recording
aria session list # List all sessions
aria session show session-123 # Show session details
aria session export session-123 --format json
aria session replay session-123 # Replay a session
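Per the configuration below, sessions are stored as JSON Lines (`format = "jsonl"`): one JSON object per line. A sketch of reading such a file back; the per-event keys (`event`, `summary`) are assumptions, so inspect a real file under `~/.aria/sessions/` for the actual shape:

```python
import json
from pathlib import Path

def load_session(path: str) -> list[dict]:
    """Read a recorded session file: one JSON object per line (JSONL).

    The per-event keys used below are assumptions; inspect a real
    file under ~/.aria/sessions/ for the actual fields.
    """
    events = []
    for line in Path(path).read_text().splitlines():
        if line.strip():
            events.append(json.loads(line))
    return events

# Demo: write a one-event session and load it back.
Path("demo-session.jsonl").write_text(
    '{"event": "explain", "summary": "Trinity stable"}\n'
)
events = load_session("demo-session.jsonl")
```

Reading line by line is what makes the format append-friendly while a recording is still in progress.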
aria holomap
Manage Holomap world files.
aria holomap validate world.hmap # Validate a holomap
aria holomap diff old.hmap new.hmap # Diff two holomaps
aria holomap stats world.hmap # Show statistics
aria holomap visualize world.hmap # Open visualization
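The `.hmap` format is not documented here, but conceptually `aria holomap diff` compares two world files. A toy sketch over plain dicts, assuming (purely for illustration) that a holomap carries a `nodes` list of `{"id": ...}` entries:

```python
def diff_nodes(old: dict, new: dict) -> dict:
    """Toy node-level diff between two holomap-like dicts.

    Assumes each holomap has a "nodes" list of {"id": ...} entries;
    the real .hmap format may be structured differently.
    """
    old_ids = {n["id"] for n in old.get("nodes", [])}
    new_ids = {n["id"] for n in new.get("nodes", [])}
    return {
        "added": sorted(new_ids - old_ids),
        "removed": sorted(old_ids - new_ids),
    }

result = diff_nodes(
    {"nodes": [{"id": "a"}, {"id": "b"}]},
    {"nodes": [{"id": "b"}, {"id": "c"}]},
)
```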
aria watch
Live monitoring with real-time narration.
aria watch http://localhost:8080 # Watch live endpoint
aria watch --interval 5 # Poll every 5 seconds
aria watch --alert "load > 80%" # Alert on condition
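`--alert` takes a condition such as `"load > 80%"`. A sketch of how such a condition might be parsed and checked against polled metrics; the condition grammar here is an assumption modelled on that one example, not the CLI's real parser:

```python
import re

def check_alert(condition: str, metrics: dict) -> bool:
    """Evaluate a simple '<metric> <op> <value>[%]' alert condition.

    The grammar is an assumption based on the example "load > 80%";
    aria's actual condition syntax may be richer.
    """
    m = re.fullmatch(r"\s*(\w+)\s*(>=|<=|==|>|<)\s*([\d.]+)%?\s*", condition)
    if not m:
        raise ValueError(f"unrecognized condition: {condition!r}")
    name, op, value = m.group(1), m.group(2), float(m.group(3))
    current = float(metrics[name])
    return {
        ">": current > value, "<": current < value,
        ">=": current >= value, "<=": current <= value,
        "==": current == value,
    }[op]
```

A watch loop would poll the endpoint every `--interval` seconds and fire the alert whenever `check_alert` returns True.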
Configuration
ARIA uses ~/.aria/config.toml for configuration:
[brain]
default = "tinyllama"
path = "~/.aria/models"
timeout = 30
[server]
host = "127.0.0.1"
port = 7777
[session]
output_dir = "~/.aria/sessions"
format = "jsonl"
compress = false
[style]
theme = "dark"
verbosity = "normal"
Python API
from aria import CognitiveEngine, WorldSnapshot
# Initialize engine
engine = CognitiveEngine(brain="tinyllama")
# Create snapshot
snapshot = WorldSnapshot.from_file("state.json")
# Generate explanation
response = engine.explain(snapshot)
print(response.summary)
# "Trinity Core is cycling at 847 events/second,
# indicating healthy recursive processing."
print(response.focus_nodes)
# ["LenixTrinityEngine", "WaterwheelHub"]
Async API
import asyncio
from aria import AsyncCognitiveEngine, WorldSnapshot

async def main():
    engine = AsyncCognitiveEngine(brain="phi2")
    snapshot = WorldSnapshot.from_file("state.json")
    async with engine:
        response = await engine.explain_async(snapshot)
        print(response.summary)

asyncio.run(main())
Session Recording
from aria import SessionRecorder

with SessionRecorder("my-session") as recorder:
    # All cognitive events are automatically recorded
    response = engine.explain(snapshot)

# Session saved to ~/.aria/sessions/my-session.jsonl
The Cognitive Contract
ARIA implements the Explainer Contract:
Input: WorldSnapshot (nodes, flows, metrics, focus)
Output: ExplainerResponse (summary, details, focus_nodes, confidence)
Every explanation is:
- Grounded — references only nodes/flows that exist
- Bounded — respects token limits
- Deterministic — same input → consistent output
- Traceable — includes confidence and reasoning
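The "grounded" property can be checked mechanically: every node a response mentions must exist in the snapshot. A sketch using the contract's field names (`nodes`, `focus_nodes`) over plain dicts; the concrete WorldSnapshot/ExplainerResponse classes may expose these as attributes instead:

```python
def is_grounded(snapshot: dict, response: dict) -> bool:
    """True if every node the response focuses on exists in the snapshot.

    Field names follow the Explainer Contract (nodes, focus_nodes);
    the concrete classes may expose them differently.
    """
    known = {n["id"] for n in snapshot.get("nodes", [])}
    return set(response.get("focus_nodes", [])) <= known

snap = {"nodes": [{"id": "LenixTrinityEngine"}, {"id": "WaterwheelHub"}]}
ok = is_grounded(snap, {"focus_nodes": ["WaterwheelHub"]})   # referenced node exists
bad = is_grounded(snap, {"focus_nodes": ["GhostNode"]})      # hallucinated node
```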
Architecture
┌─────────────────────────────────────────────────────────────┐
│ aria-cli │
├─────────────────────────────────────────────────────────────┤
│ CLI Layer │ click + rich │
├─────────────────────────────────────────────────────────────┤
│ Cognitive Engine │ LLM orchestration + prompts │
├─────────────────────────────────────────────────────────────┤
│ Brain Backend │ llama-cpp-python (local) │
│ │ OpenAI API (remote) │
│ │ Ollama (local server) │
├─────────────────────────────────────────────────────────────┤
│ Data Layer │ WorldSnapshot, ExplainerResponse │
├─────────────────────────────────────────────────────────────┤
│ Session Layer │ Recording, replay, lineage │
└─────────────────────────────────────────────────────────────┘
License
Apache 2.0 — Free for commercial use.
Related
"The system that explains itself is the system that can be trusted."
File details
Details for the file aria_cli-2.0.0.tar.gz.
File metadata
- Download URL: aria_cli-2.0.0.tar.gz
- Upload date:
- Size: 101.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d49e50d9af569661f1eb293682f78d2321120d7131953d57c87dc03c48ce24c9 |
| MD5 | 259a6c88458ec9ffc93d353e3b97c6f1 |
| BLAKE2b-256 | d922e2a12fe5d9fe8f8347fce92a2ca496281414c701be9dc8664a093c5cee35 |
File details
Details for the file aria_cli-2.0.0-py3-none-any.whl.
File metadata
- Download URL: aria_cli-2.0.0-py3-none-any.whl
- Upload date:
- Size: 99.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a4994f940731035d45cfbc7cb01ad25068bd82a33e48b618803f92dfd8f130f9 |
| MD5 | c1ec969d5ce419f28e0d8cd3bfda70ab |
| BLAKE2b-256 | d17db25c47e8740aca9eae842eaa95e418ff67f85e1941a7732e6dbf65609b71 |