Æon Framework - The Neuro-Symbolic Runtime for Safety-Critical Distributed Agents

Æon Framework (Core)

The Deterministic Runtime for Safety-Critical AI Agents with Autonomous Native Capabilities

🌟 Overview

Æon is a comprehensive, production-ready framework for building Neuro-Symbolic AI agents. Unlike stochastic-only systems, Æon combines the intuitive reasoning of LLMs (System 1) with the deterministic safety and control of code-level axioms (System 2).

It establishes a standard "Trust Stack" that enables agents to be Safety-Native, Protocol-First, and Extensible by Design. With deep integration of the Agent-to-Agent (A2A) and Model Context Protocol (MCP), Æon allows you to build interoperable agent ecosystems that can collaborate safely in high-stakes environments.

📋 What's New in v0.4.0 (ULTRA)

  • 🔌 Autonomous Native Engine: Built-in support for browser automation (Playwright), persistent event-sourced memory (SQLite), and granular Trust Levels.
  • 🏗️ Developer First CLI: Transform from scripts to projects with the new aeon command. Scaffold, run, and serve agents in seconds.
  • 🚀 Declarative Runtime: Define agents via aeon.yaml and launch a full Gateway Server for production deployments.
  • 🛡️ Enhanced Safety Executive: Improved SIL-4-compliant axioms with TMR (Triple Modular Redundancy) reasoning for mission-critical reliability.
  • 🔄 Deep Persistence: Event-sourced memory system that survives reboots and provides a complete audit trail of agent thoughts and actions.
  • ⏰ Temporal Capabilities: Native scheduling for cron jobs and delayed tasks, enabling agents to act autonomously over time.
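
The TMR reasoning mentioned above is a classic fault-tolerance pattern: run the same decision three times independently and accept only a majority answer. The sketch below is a stand-alone illustration of that voting step, not Æon's actual implementation; the function name is illustrative.

```python
from collections import Counter

def tmr_vote(results):
    """Majority vote over three independent reasoning passes.

    Returns the answer at least two passes agree on, or None when all
    three disagree (signalling that a deterministic fallback should run).
    """
    counts = Counter(results)
    answer, votes = counts.most_common(1)[0]
    return answer if votes >= 2 else None

# Two of three passes agree, so the outlier is discarded:
print(tmr_vote(["SAFE", "SAFE", "UNSAFE"]))  # SAFE
# No majority: escalate to a deterministic fallback instead of guessing.
print(tmr_vote(["A", "B", "C"]))  # None
```

The point of the pattern is that a single stochastic LLM pass never decides alone: disagreement is surfaced rather than silently resolved.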

📋 What's New in v0.3.0 (ULTRA Phase)

  • Routing Layer: Intelligent pattern-based message routing with 5 distinct strategies (Priority, Weighted, etc.).
  • Gateway Layer: Centralized communication hub with session management and TTL support.
  • Security Layer: Policy-based access control, AES encryption, and multi-provider authentication.
  • Health Layer: Real-time system monitoring, metrics collection (Counter, Gauge, etc.), and diagnostics.
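
The Gateway's session management with TTL support boils down to expiring entries a fixed time after they are written. This is a minimal, framework-agnostic sketch of that idea (a conceptual stand-in, not Æon's Gateway code):

```python
import time

class SessionStore:
    """Minimal TTL session store: entries expire `ttl` seconds after being written."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}  # session_id -> (value, expiry timestamp)

    def put(self, session_id, value):
        self._store[session_id] = (value, time.monotonic() + self.ttl)

    def get(self, session_id):
        entry = self._store.get(session_id)
        if entry is None:
            return None
        value, expiry = entry
        if time.monotonic() > expiry:
            del self._store[session_id]  # lazy eviction on read
            return None
        return value

store = SessionStore(ttl=0.05)
store.put("s1", {"user": "operator"})
print(store.get("s1"))  # {'user': 'operator'}
time.sleep(0.06)
print(store.get("s1"))  # None (expired)
```

A production store would also evict proactively and bound memory, but lazy eviction on read is enough to show the contract.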

✨ Why Choose Æon?

  • Deterministic Safety: Stop begging the model to be safe. Enforce safety at the runtime level with Axioms.
  • Neuro-Symbolic Core: The perfect balance between LLM intuition and hard-coded rules.
  • Protocol-First: Native support for A2A (Agent-to-Agent) and MCP (Model Context Protocol).
  • Enterprise Ready: Built with observability, economics (cost tracking), and health monitoring from the ground up.
  • Local-First & Private: Run entirely on your hardware with Ollama or connect to premium cloud providers.
  • Stark Visual Feedback: Terminal-native UI components for monitoring agent execution in real time.

📦 Installation

Using UV (Recommended)

UV is the fastest way to manage Æon dependencies:

# Clone the repository
git clone https://github.com/richardsonlima/aeon-core.git
cd aeon-core

# Create environment and install
uv sync

Using pip

pip install aeon-core

🚀 Quick Start Examples

1. Developer Workflow (CLI)

From zero to agent in three commands:

# Initialize a new project
aeon init my-safety-agent

# Configure your model in aeon.yaml
# (Default: google/gemini-2.0-flash-001)

# Run a task interactively
aeon run "Check reactor thermal status"

# Start the production gateway
aeon serve --port 8000

2. Create a Safety-Native Agent (Code)

from aeon import Agent
from aeon.protocols import A2A, MCP

# Initialize the agent with the Trust Stack
agent = Agent(
    name="Sentinel",
    model="google/gemini-2.0-flash-001",
    protocols=[A2A(port=8000), MCP(servers=["industrial_tools.py"])]
)

# Define an Unbreakable Axiom (System 2)
@agent.axiom(on_violation="OVERRIDE")
def safety_limit(command: dict) -> bool | dict:
    """SAFETY RULE: Power output cannot exceed 100%."""
    if command.get("power", 0) > 100:
        return {"power": 100, "warning": "AXIOM_LIMIT_REACHED"}
    return True

if __name__ == "__main__":
    agent.start()
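
For intuition, the enforcement behind an axiom can be reduced to a self-contained sketch: every command passes through each registered check before execution, and a check may veto (`REJECT`) or rewrite (`OVERRIDE`) it. This mirrors the semantics shown above but is a toy stand-in, not Æon's Executive internals:

```python
class AxiomRegistry:
    """Toy stand-in for an axiom executive: runs every check before a command executes."""

    def __init__(self):
        self._axioms = []

    def axiom(self, fn):
        self._axioms.append(fn)
        return fn

    def enforce(self, command: dict):
        for check in self._axioms:
            verdict = check(command)
            if verdict is False:
                return None          # REJECT: the command is dropped
            if isinstance(verdict, dict):
                command = verdict    # OVERRIDE: the command is rewritten
        return command               # True means "pass through unchanged"

runtime = AxiomRegistry()

@runtime.axiom
def safety_limit(command: dict):
    if command.get("power", 0) > 100:
        return {"power": 100, "warning": "AXIOM_LIMIT_REACHED"}
    return True

print(runtime.enforce({"power": 150}))  # {'power': 100, 'warning': 'AXIOM_LIMIT_REACHED'}
print(runtime.enforce({"power": 80}))   # {'power': 80}
```

The key property is that the check runs in plain code, deterministically, regardless of what the model proposed.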

3. Autonomous Browser Workflow

import asyncio

from aeon import Agent
from aeon.core.config import TrustLevel

agent = Agent(name="Researcher", trust_level=TrustLevel.FULL)

async def main():
    # Agent can autonomously browse and remember
    response = await agent.run("Find the latest paper on SIL-4 safety and save the summary.")
    print(f"Agent Action: {response.last_thought}")

if __name__ == "__main__":
    asyncio.run(main())

# Alternatively, run via CLI: aeon run ...

🔌 Enhanced MCP (Model Context Protocol) v2.0

Æon now features a completely redesigned MCP implementation that provides robust, production-ready integration with external tools:

  • Synapse Layer: Unified tool discovery and invocation.
  • Standard Support: Full compliance with the latest MCP specification.
  • Multi-Server: Connect to multiple MCP servers simultaneously (Stdio, SSE).
  • Type Safety: Automatic parameter validation for tool calls.
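
"Type Safety" here means tool arguments are checked against the tool's declared schema before the call leaves the agent. MCP tools describe their inputs with JSON-Schema-style declarations, so a minimal, framework-agnostic version of that check looks like this (illustrative only; not Æon's validator):

```python
def validate_args(schema: dict, args: dict) -> list:
    """Return a list of validation errors; an empty list means the call is valid."""
    errors = []
    properties = schema.get("properties", {})
    # Required fields must be present.
    for name in schema.get("required", []):
        if name not in args:
            errors.append(f"missing required argument: {name}")
    # Present fields must match their declared JSON type.
    type_map = {"string": str, "number": (int, float), "integer": int, "boolean": bool}
    for name, value in args.items():
        expected = properties.get(name, {}).get("type")
        if expected in type_map and not isinstance(value, type_map[expected]):
            errors.append(f"{name}: expected {expected}, got {type(value).__name__}")
    return errors

schema = {
    "required": ["sensor_id"],
    "properties": {"sensor_id": {"type": "string"}, "samples": {"type": "integer"}},
}
print(validate_args(schema, {"sensor_id": "T-01", "samples": 5}))  # []
print(validate_args(schema, {"samples": "ten"}))
# ['missing required argument: sensor_id', 'samples: expected integer, got str']
```

Catching a malformed call before it reaches an external tool server is what turns a confusing downstream failure into an immediate, explainable error.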

📖 Architecture: The 16 Subsystems

Æon is organized into 4 distinct layers, each providing critical functionality for advanced agents:

1. CORE (System 1 & 2)

  • Cortex: Neuro-reasoning via LLMs.
  • Executive: Deterministic control via Axioms.
  • Hive: Standardized communication (A2A).
  • Synapse: Tool integration (MCP).

2. INTEGRATION

  • Integrations: Multi-platform connectivity (Telegram, Discord, Slack).
  • Extensions: Dynamic capability loading.
  • Dialogue: Persistent, event-sourced conversation history.
  • Dispatcher: Event-driven pub/sub architecture.
  • Automation: Temporal task scheduling (Cron/Interval).
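
The Dispatcher's event-driven pub/sub architecture, reduced to its essence: handlers subscribe to a topic, and publishing an event fans it out to every subscriber of that topic. A minimal sketch (conceptual only, not the Dispatcher subsystem's API):

```python
from collections import defaultdict

class Dispatcher:
    """Minimal pub/sub bus: handlers register per topic and receive published events."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict):
        for handler in self._subscribers[topic]:
            handler(event)

received = []
bus = Dispatcher()
bus.subscribe("sensor.alert", received.append)
bus.publish("sensor.alert", {"sensor": "thermal", "level": "HIGH"})
bus.publish("sensor.ok", {"sensor": "thermal"})  # no subscriber, silently dropped
print(received)  # [{'sensor': 'thermal', 'level': 'HIGH'}]
```

Decoupling publishers from subscribers this way is what lets subsystems such as Observability or Automation react to agent events without the agent knowing about them.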

3. ADVANCED

  • Observability: Life-cycle hooks and audit trails.
  • Economics: Real-time token tracking and cost calculation.
  • CLI: Premium developer interface.
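
Real-time cost calculation, at its core, is usage multiplied by a per-token price. A sketch with hypothetical prices (the rates below are placeholders, not any provider's actual pricing):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  price_in_per_m: float, price_out_per_m: float) -> float:
    """Cost in dollars, given per-million-token input and output prices."""
    return (prompt_tokens * price_in_per_m
            + completion_tokens * price_out_per_m) / 1_000_000

# Hypothetical rates: $0.10 per 1M input tokens, $0.40 per 1M output tokens.
cost = estimate_cost(12_000, 3_000, 0.10, 0.40)
print(f"${cost:.6f}")  # $0.002400
```

An economics subsystem accumulates these per-call figures across an agent's lifetime so budgets can be enforced before a run, not discovered on the invoice.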

4. ULTRA (Enterprise)

  • Routing: High-performance message distribution.
  • Gateway: Centralized session and transport management.
  • Security: Authentication, authorization, and encryption.
  • Health: System diagnostics and metrics.

🧪 Hello World: Industrial Overseer

from aeon import Agent
from aeon.protocols import A2A, MCP

controller = Agent(
    name="Reactor_Overseer_01",
    role="Industrial Automation Monitor",
    model="gemini-1.5-flash",
    protocols=[
        A2A(port=8000),
        MCP(servers=["mcp-server-industrial"])
    ]
)

@controller.axiom(on_violation="REJECT")
def enforce_safety(command: dict) -> bool:
    # Any command attempting to disable cooling is rejected
    if command.get("action") == "DISABLE_COOLING":
        return False
    return True

if __name__ == "__main__":
    controller.start()

🖥 Terminal Output (Visual Feedback)

🚀 Æon Core v0.4.0-ULTRA initialized
├── 📡 A2A Server: Online at http://0.0.0.0:8000 (Unified Standard)
├── 🔌 MCP Client: Connected (4 tools loaded: read_sensor, adjust_valve...)
├── 🛡️ Axioms: 2 Active (enforce_safety, thermal_limit)
└── 🧠 Brain: Gemini-2.0-Flash (Ready)

🤝 Community & Support

📝 Citing this Project

If you use Æon in your research, please cite it as:

@software{richardsonlima-aeon-framework,
  author = {LIMA, Richardson Edson de},
  title = {Aeon Framework: The Neuro-Symbolic Runtime for Deterministic AI Agents},
  url = {https://github.com/richardsonlima/aeon-core},
  version = {0.4.0-ULTRA},
  year = {2026},
}

👨‍💻 Author

**Richardson Lima (Rick)**

📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


Made with ❤️ for AI Safety by Richardson Lima.
