Governance-first framework for AI agent systems — structured boardroom meetings, rule engine, and mandatory red team review

English | 日本語

AEGIS

Governance-First Framework for AI Agent Systems

Your AI agents need adult supervision.
Structured boardroom debates. Mandatory red team. Governance guardrails that actually enforce.


Quick Start · Why AEGIS? · CLI · GitHub Action · API · Contributing


60-Second Demo

pip install aegis-gov
export ANTHROPIC_API_KEY=sk-...

# Run a governance review on any decision
aegis convene "Should we mass-email all users about the new feature?" --category TACTICAL

# Check an action against governance rules (no LLM needed)
aegis check DevOps deploy --context environment=production tests_passed=true review_approved=false
# → ESCALATE_TO_HUMAN: Production deployment requires passing tests and review approval

Or the same from Python:

from aegis_gov import Boardroom

boardroom = Boardroom()
result = boardroom.convene(
    topic="Should we migrate to microservices?",
    category="STRATEGIC",
    context={"team_size": 5, "current_arch": "monolith"},
)

print(result.synthesis)       # CEO's final decision
print(result.vote_summary)    # {"approve": 7, "conditional": 2, "reject": 0, "abstain": 0}
print(result.confidence)      # 0.85
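For illustration, the vote_summary dict above can drive a simple approval gate on the caller's side. The approval_ratio helper below is a hypothetical sketch, not part of the aegis_gov API; it treats "conditional" votes as approvals and ignores abstentions:

```python
def approval_ratio(vote_summary: dict) -> float:
    """Fraction of non-abstaining votes that approve (fully or conditionally)."""
    cast = sum(v for k, v in vote_summary.items() if k != "abstain")
    if cast == 0:
        return 0.0
    return (vote_summary.get("approve", 0) + vote_summary.get("conditional", 0)) / cast

votes = {"approve": 7, "conditional": 2, "reject": 0, "abstain": 0}
print(approval_ratio(votes))  # 1.0
```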

Why AEGIS?

Every other multi-agent framework helps AI agents do things. AEGIS makes sure they should.

| Capability | AEGIS | CrewAI | AutoGen | LangGraph | MetaGPT |
|---|---|---|---|---|---|
| Governance rule engine | Yes | No | No | No | No |
| Mandatory red team review | Yes | No | No | No | No |
| Constitutional manifesto | Yes | No | No | No | No |
| Decision audit trail | Yes | Partial | No | No | Partial |
| Verdict enforcement (BLOCK/HALT) | Yes | No | No | No | No |
| Human escalation gates | Yes | Manual | Manual | Manual | Manual |
| LLM-agnostic | Yes | Yes | Yes | Yes | No |

AEGIS is not a replacement for these frameworks. It's the governance layer you add on top.

Who is this for?

  • Teams deploying AI agents who need accountability and audit trails
  • Compliance-conscious orgs preparing for EU AI Act, NIST AI RMF, ISO 42001
  • Anyone who doesn't want their AI agents making irreversible decisions unsupervised

Features

Boardroom Meetings (6 phases)

17 AI agents with distinct roles debate every decision:

| Phase | What happens |
|---|---|
| 1. CEO Opening | Classify topic (CRITICAL/STRATEGIC/TACTICAL/OPERATIONAL), set format |
| 2. Executive Council | 7 C-level perspectives (CEO, CTO, CFO, CRO, CMO, CPO, CDO) |
| 3. Advisory Input | 8 specialists contribute domain expertise |
| 4. Critical Review | Red Team + reviewers challenge consensus |
| 5. Open Debate | Cross-agent discussion |
| 6. CEO Synthesis | Final decision with vote tally, confidence score, and action items |

Red Team (Non-Optional)

Every decision is stress-tested. The red team cannot be disabled in the default configuration.

  • DevilsAdvocate -- Challenges assumptions, demands evidence, finds hidden risks
  • Skeptic -- Explores alternatives, runs pre-mortem analysis, detects groupthink

Rule Engine (5 built-in rules)

Governance guardrails that enforce, not advise:

from aegis_gov import RuleEngine

engine = RuleEngine()

# Self-review → BLOCK (agents can't review their own work)
engine.evaluate("Agent", "review", {"author": "Agent"})

# Low confidence → FLAG
engine.evaluate("CTO", "approve", {"confidence": 0.3})

# Production deploy without review → ESCALATE_TO_HUMAN
engine.evaluate("DevOps", "deploy", {
    "environment": "production",
    "tests_passed": True,
    "review_approved": False,
})

| Verdict | Action |
|---|---|
| PASS | Execute normally |
| FLAG | Proceed with caution, log warning |
| BLOCK | Prevent action entirely |
| ESCALATE_TO_HUMAN | Requires human approval |
| HALT | Stop all processes immediately |
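Calling code might branch on a verdict like this. The dispatch function below is a hypothetical consumer-side sketch; only the five verdict strings come from AEGIS:

```python
import logging

def dispatch(verdict: str) -> str:
    """Map a governance verdict to a caller-side action (illustrative only)."""
    if verdict == "PASS":
        return "execute"
    if verdict == "FLAG":
        logging.warning("Proceeding with a flagged action")
        return "execute"
    if verdict == "BLOCK":
        return "skip"
    if verdict == "ESCALATE_TO_HUMAN":
        return "await_approval"
    if verdict == "HALT":
        return "shutdown"
    raise ValueError(f"Unknown verdict: {verdict}")

print(dispatch("ESCALATE_TO_HUMAN"))  # await_approval
```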

Governance Manifesto

A constitutional framework (version-controlled, auditable):

  • Human sovereignty -- humans always have final authority
  • Decision categories with TTL and review requirements
  • Role separation -- decision-makers, implementers, and reviewers are distinct
  • Confidence scoring mandatory for all decisions

Quick Start

Option 1: pip install (recommended)

pip install aegis-gov[anthropic]   # or aegis-gov[openai] or aegis-gov[all]
export ANTHROPIC_API_KEY=sk-...

# Generate a starter config (customizable rules + agents)
aegis init

# Run your first boardroom meeting
aegis convene "Should we mass-email all users?" --category TACTICAL

Option 2: Docker

git clone https://github.com/pyonkichi369/aegis-oss.git
cd aegis-oss
cp .env.example .env  # Add your ANTHROPIC_API_KEY
docker compose up
# API at http://localhost:8000/docs

Option 3: From source

git clone https://github.com/pyonkichi369/aegis-oss.git
cd aegis-oss
pip install -e ".[dev]"
aegis convene "Test topic" --category OPERATIONAL

CLI

aegis convene "topic"    Run a full boardroom meeting
aegis review "artifact"  Standalone red team review
aegis check AGENT ACTION Evaluate action against rules
aegis agents             List the agent roster
aegis rules              List governance rules
aegis init               Create starter config
aegis version            Print version

Options for convene:

--category    OPERATIONAL | TACTICAL | STRATEGIC | CRITICAL (default: TACTICAL)
--model       LLM model (default: claude-sonnet-4-6)
--provider    anthropic | openai | ollama (default: anthropic)
--rounds      Debate rounds (default: 2)
--output      json | text (default: text)

GitHub Action

Add AI governance review to your pull requests:

# .github/workflows/aegis-review.yml
name: AEGIS Governance Review
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: pyonkichi369/aegis-oss@v1
        with:
          api-key: ${{ secrets.ANTHROPIC_API_KEY }}
          category: TACTICAL
          fail-on: BLOCK  # BLOCK | ESCALATE | FLAG | never

The action runs a boardroom review on the PR diff and posts the verdict as a check result.

API

Start the server: uvicorn aegis_gov.api:app --reload

| Endpoint | Method | Description |
|---|---|---|
| /health | GET | Health check (public) |
| /api/v1/boardroom | POST | Run a boardroom meeting |
| /api/v1/review | POST | Standalone red team review |
| /api/v1/rules/check | POST | Evaluate action against rules |
| /api/v1/rules | GET | List active governance rules |
| /api/v1/agents | GET | List council agents |

Authentication: X-API-Key header (set AEGIS_API_KEY env var). Dev mode allows unauthenticated access.

Full docs: http://localhost:8000/docs

Customization

Add Domain-Specific Agents

from aegis_gov import Boardroom, BoardroomConfig, AgentRole

config = BoardroomConfig(
    custom_agents=[
        AgentRole("HIPAAOfficer", "Compliance", "HIPAA, PHI, healthcare data", "reviewer"),
        AgentRole("MLEngineer", "ML Systems", "Model deployment, A/B testing", "specialist"),
    ],
)
boardroom = Boardroom(config)

Add Custom Rules (Python)

Rules use condition expressions evaluated with agent, action, context, and rule variables:

from aegis_gov import RuleEngine

engine = RuleEngine()
engine.add_rule("budget_gate", {
    "name": "Budget Approval",
    "condition": "context.get('amount', 0) > 10000",
    "verdict": "ESCALATE_TO_HUMAN",
    "message": "Spending over $10K needs CFO approval",
})

# Now this triggers the custom rule
result = engine.evaluate("Agent", "purchase", {"amount": 50000})
print(result.final_verdict)  # ESCALATE_TO_HUMAN

Custom Rules from YAML

# my_rules.yaml
rules:
  - id: pii_gate
    name: PII Access Gate
    condition: "context.get('data_type') == 'PII'"
    verdict: ESCALATE_TO_HUMAN
    message: Accessing PII requires privacy review

  - id: after_hours_block
    name: After Hours Deploy Block
    condition: "action == 'deploy' and context.get('hour', 12) >= 22"
    verdict: BLOCK
    message: No deployments after 10pm

Then load it from Python:

engine = RuleEngine(rules_path="my_rules.yaml")
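Condition strings like those above are plain Python expressions. The check_condition function below is a minimal sketch of how such a string can be evaluated against a restricted namespace; it is illustrative only, not AEGIS's actual evaluator, which also exposes a rule variable and may sandbox differently:

```python
def check_condition(condition: str, agent: str, action: str, context: dict) -> bool:
    """Evaluate a rule condition string with only the documented variables in scope."""
    namespace = {"agent": agent, "action": action, "context": context,
                 "__builtins__": {}}  # strip builtins to limit what the expression can do
    return bool(eval(condition, namespace))

cond = "action == 'deploy' and context.get('hour', 12) >= 22"
print(check_condition(cond, "DevOps", "deploy", {"hour": 23}))  # True
print(check_condition(cond, "DevOps", "deploy", {}))            # False
```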

Quick Setup with aegis init

aegis init                    # Creates aegis.yaml with examples
aegis init --output custom.yaml  # Custom output path
# Edit the generated file, then use it:
# engine = RuleEngine(rules_path="aegis.yaml")

Use with Any LLM

# OpenAI
boardroom = Boardroom(BoardroomConfig(provider="openai", model="gpt-4o"))

# Ollama (local — no API key needed)
boardroom = Boardroom(BoardroomConfig(
    provider="ollama",
    model="llama3",  # any model installed in Ollama
))

Architecture

aegis-oss/
├── aegis_gov/
│   ├── council/
│   │   ├── boardroom.py      # 6-phase meeting engine
│   │   ├── rule_engine.py    # 5-verdict governance rules
│   │   ├── schemas.py        # Type-safe data models
│   │   ├── agents.py         # 9 default + 8 specialist agents
│   │   ├── security.py       # Input sanitization & prompt injection defense
│   │   └── prompts/          # Agent system prompts + manifesto
│   ├── api.py                # FastAPI REST (auth, CORS)
│   └── cli.py                # CLI tool (aegis command)
├── action.yml                # GitHub Action definition
├── examples/                 # quick_start, custom_agents, rule_engine_demo
├── tests/                    # 44 tests
├── pyproject.toml            # Package config (aegis-gov)
└── docker/                   # Container setup

Examples

| Example | What it shows |
|---|---|
| quick_start.py | First boardroom meeting in 10 lines |
| custom_agents.py | Adding healthcare compliance agents |
| rule_engine_demo.py | 4 governance scenarios |

Compliance & Standards

AEGIS provides tooling support for:

  • EU AI Act (Article 14: Human oversight of high-risk AI)
  • NIST AI Risk Management Framework (AI RMF 1.0)
  • ISO/IEC 42001 (AI Management Systems)

The audit trail, decision categorization, and human escalation gates map directly to these standards' requirements.

Contributing

We welcome contributions! See CONTRIBUTING.md.

Good first issues:

  • Add agent prompts for new domains (finance, healthcare, legal)
  • Add governance rules for specific compliance frameworks
  • Improve test coverage

License

Apache 2.0 -- see LICENSE
