
Aegis

Runtime control for AI systems.

Aegis sits on top of your AI pipeline and returns structured control decisions that stabilize behavior at runtime without replacing your model, agent, or retrieval system.


Why Aegis

Modern AI systems often fail in subtle but costly ways:

  • inconsistent outputs across similar inputs
  • unstable multi-step reasoning
  • retrieval drift in RAG systems
  • fragile workflow and agent execution

Aegis addresses these problems with runtime control, not retraining, fine-tuning, or model swapping.


Core Idea

Aegis is a control layer, not an execution layer.

from aegis import AegisClient

client = AegisClient(api_key="YOUR_API_KEY")

result = client.auto().llm(...)

Aegis will:

  • detect instability signals
  • select minimal corrective actions
  • return runtime controls and observability data

Aegis does not execute the downstream LLM call for you. It is not a model, not a full execution engine, and not a replacement for LangChain, LangGraph, or your tool stack.
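
The division of labor can be sketched with a stand-in decision payload. The dict below stands in for a real Aegis response, and the `prompt_shaping` action shape is hypothetical, not the documented schema; the point is that Aegis returns controls and your code applies them:

```python
# Stand-in for an Aegis control decision (action shape is illustrative only).
decision = {
    "actions": [{"type": "prompt_shaping", "prefix": "Answer consistently."}],
    "explanation": "Detected inconsistent outputs; shaping the prompt.",
}

def apply_controls(base_prompt: str, decision: dict) -> str:
    """Fold Aegis-style prompt controls into the prompt you will send yourself."""
    prompt = base_prompt
    for action in decision.get("actions", []):
        if action.get("type") == "prompt_shaping":
            prompt = action["prefix"] + "\n" + prompt
    return prompt

final_prompt = apply_controls("You are a careful assistant.", decision)
# final_prompt now goes to your own LLM client; Aegis never makes that call.
```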


Installation

pip install scelabs-aegis

Get an API Key

curl -X POST https://aegis-backend-production-4b47.up.railway.app/v1/onboard \
  -H "Content-Type: application/json" \
  -d '{"email":"you@example.com"}'

This returns:

  • api_key
  • auto scope URLs (including auto_llm_url, auto_rag_url, auto_step_url, auto_context_url, and auto_agent_url on current backends)
  • example usage

Set Environment

export AEGIS_API_KEY=your_key_here
export AEGIS_BASE_URL=https://aegis-backend-production-4b47.up.railway.app

First Call

from aegis import AegisClient, AegisConfig

client = AegisClient(
    config=AegisConfig(mode="balanced"),
)

result = client.auto().llm(
    base_prompt="You are a careful assistant.",
    input={"user_query": "Explain recursion simply."},
    symptoms=["inconsistent_outputs"],
    severity="medium",
)

print(result.actions)
print(result.explanation)
print(result.scope_data)

Scope-First API

Aegis uses a scope-first runtime interface:

client.auto().llm(...)
client.auto().rag(...)
client.auto().step(...)
client.auto().context(...)
client.auto().agent(...)

These calls map to first-class public backend routes:

  • POST /v1/auto/llm
  • POST /v1/auto/rag
  • POST /v1/auto/step
  • POST /v1/auto/context
  • POST /v1/auto/agent
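
For non-Python clients, the same routes can be hit directly over HTTP. A minimal standard-library sketch, assuming a JSON body and a Bearer-token `Authorization` header (verify the exact auth scheme against your onboarding response):

```python
import json
import urllib.request

BASE_URL = "https://aegis-backend-production-4b47.up.railway.app"

# Build (but do not send) a request against the public LLM scope route.
payload = {
    "base_prompt": "You are a careful assistant.",
    "input": {"user_query": "Explain recursion simply."},
    "symptoms": ["inconsistent_outputs"],
    "severity": "medium",
}
req = urllib.request.Request(
    BASE_URL + "/v1/auto/llm",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",  # assumed header; confirm after onboarding
    },
    method="POST",
)
# urllib.request.urlopen(req) would perform the call.
```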

Scopes

LLM

Use llm when you need stabilization around a direct model call.

result = client.auto().llm(
    base_prompt="You are a careful assistant.",
    input={"user_query": "Explain recursion simply."},
    symptoms=["inconsistent_outputs"],
    severity="medium",
)

RAG

Use rag when instability appears in the retrieval-plus-generation path.

result = client.auto().rag(
    query="What changed in the policy?",
    retrieved_context=[
        "Policy updated last week.",
        "Refund window reduced to 14 days."
    ],
    symptoms=["retrieval_drift"],
    severity="medium",
)

What changed in RAG

Aegis no longer treats retrieval as a fixed input; it controls retrieval behavior at runtime.

The RAG scope now:

  • enforces typed evidence coverage (source, test, support)
  • applies relevant-file protection (never drops critical context)
  • performs selective expansion (not always-on)
  • removes noise without losing required files
  • uses staged retrieval only when ambiguity or gaps are detected
  • applies guided retrieval (intent + plan) only when justified

This is not just ranking or filtering.

Aegis:

  • evaluates the retrieved set
  • diagnoses issues (missing support, ambiguity, distractors)
  • applies minimal corrective actions
  • returns a controlled context for downstream use

How RAG control works

At runtime:

  1. You pass query + retrieved context

  2. Aegis evaluates:

    • missing required evidence
    • role imbalance (source/test/support)
    • distractor pressure
    • ambiguity / multi-branch cases
  3. It decides whether to:

    • keep as-is
    • prune noise
    • expand retrieval
    • run a staged second pass
    • guide retrieval when needed
  4. It enforces relevant-file protection before final selection

Everything is gated and minimal.

No always-on expansion. No blind pruning.
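
A rough illustration of that gating — not the real Aegis policy, which lives server-side, just its shape: corrective actions fire only when a signal crosses a threshold, and the default is to keep the retrieved set as-is. Signal names and thresholds below are invented for illustration:

```python
# Toy sketch of gated, minimal RAG control.
def decide(signals: dict) -> list[str]:
    actions = []
    if signals.get("distractor_pressure", 0.0) > 0.6:
        actions.append("prune_noise")
    if signals.get("missing_support"):
        actions.append("expand_retrieval")
        if signals.get("ambiguous"):
            actions.append("staged_second_pass")
    return actions or ["keep_as_is"]

print(decide({"distractor_pressure": 0.8}))  # ['prune_noise']
print(decide({}))                            # ['keep_as_is']
```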


Works with Agentic RAG

Yes.

Aegis sits above your agent system and stabilizes retrieval behavior.

It can:

  • prevent agents from drifting due to poor context
  • enforce evidence requirements before execution
  • reduce retries and replans
  • stabilize multi-step retrieval chains

Aegis does not replace your agents — it makes them more reliable.


Step

Use step when you need stabilization for a workflow or agent step.

result = client.auto().step(
    step_name="coordinator",
    step_input={"task": "resolve ticket"},
    symptoms=["unstable_workflow"],
    severity="medium",
)

Context

Use context to control information state before the next model or workflow action.

result = client.auto().context(
    objective="Prepare the next response context.",
    messages=[
        {"role": "user", "content": "Summarize blockers from this thread."},
        {"role": "assistant", "content": "Draft summary goes here."},
    ],
    tool_results=[
        {"tool": "ticket_lookup", "ok": True, "data": {"id": "T-42", "status": "open"}},
    ],
    constraints=["keep it concise", "cite ticket IDs"],
    severity="medium",
)

context can clean and prioritize messages and tool results so your downstream call receives better state.

Agent

Use agent to control multi-step workflow loops on top of your existing AI pipeline.

result = client.auto().agent(
    goal="Resolve the support ticket safely.",
    steps=[
        {"name": "triage", "input": {"ticket_id": "T-42"}},
        {"name": "propose_resolution", "input": {"channel": "email"}},
    ],
    max_steps=4,
    severity="medium",
)

agent can control multi-step execution, tool-result integration, carry-forward context, and stop/retry/escalation decisions.


What Aegis Returns

Every call returns an AegisResult.

result = client.auto().llm(...)

Key fields

  • actions — interventions Aegis selected
  • trace — structured control trace
  • metrics — runtime signals
  • used_fallback — whether fallback behavior was used
  • explanation — concise rationale
  • scope — llm, rag, step, context, or agent
  • scope_data — scope-specific runtime data
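
A common consumption pattern is to branch on used_fallback before trusting scope-specific output. The dict below is a stand-in for a real AegisResult, using the field names listed above:

```python
# Stand-in for an AegisResult (illustrative values).
result = {
    "actions": ["reduce_context"],
    "used_fallback": False,
    "scope": "rag",
    "scope_data": {"retrieved_context": ["Refund window reduced to 14 days."]},
}

if result["used_fallback"]:
    controlled_context = None  # keep your original retrieval untouched
else:
    controlled_context = result["scope_data"].get("retrieved_context")
```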

RAG Observability (new)

RAG responses now include richer runtime signals:

Inside scope_data:

  • public_rag_runtime — high-level runtime info
  • retrieval_intent — if guided retrieval was used
  • retrieval_plan — structured retrieval guidance (when triggered)
  • initial_retrieved_chunks — stage 1 candidates
  • stage2_retrieved_chunks — staged retrieval results (if used)
  • before_after_metrics — context quality changes

Inside trace:

  • decision.policy_path includes:

    • expansion score / threshold
    • staged retrieval activation
    • intent / plan activation
  • changes includes:

    • protected chunk IDs
    • relevant-file protection indicators

These are optional but useful for debugging pipeline behavior.


Typical RAG Integration Pattern

result = client.auto().rag(
    query="Why is retry failing?",
    retrieved_context=raw_context,
    symptoms=["retrieval_drift"],
    severity="medium",
)

controlled_context = result.scope_data.get("retrieved_context")
trace = result.trace

print(controlled_context)
print(result.actions)
print(trace)

You apply the returned controlled context in your downstream system.
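
Applying the controlled context is plain string assembly on your side. A sketch, with `scope_data` standing in for `result.scope_data` and illustrative chunk text:

```python
# Stand-in for result.scope_data from the RAG call above.
scope_data = {
    "retrieved_context": [
        "Retry uses exponential backoff.",
        "Backoff cap was lowered in v2.1.",
    ]
}

controlled_context = scope_data.get("retrieved_context") or []
prompt = (
    "Answer using only this context:\n"
    + "\n".join(f"- {chunk}" for chunk in controlled_context)
    + "\n\nQuestion: Why is retry failing?"
)
# prompt now feeds your own generation call.
```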


Example Result Shape

{
  "actions": [...],
  "trace": [...],
  "scope": "rag",
  "scope_data": {
    "retrieved_context": [...],
    "public_rag_runtime": {...},
    "before_after_metrics": {...}
  }
}

Debugging

print(result.debug_summary())
print(result.to_dict())

Useful fields:

print(result.actions)
print(result.explanation)
print(result.trace)
print(result.scope_data)

Configuration

from aegis import AegisConfig

config = AegisConfig(
    mode="balanced",
    max_interventions=3,
    allow_retries=True,
    allow_retrieval_expansion=True,
    allow_context_reduction=True,
    allow_prompt_shaping=True,
    fallback="baseline",
    explain=False,
    emit_trace=False,
    policy=None,
    timeout_ms=30000,
)

Required Request Inputs

For scope calls, severity should be one of: low, medium, high.

Symptoms behavior:

  • llm, rag, and step require explicit symptoms and severity
  • context and agent provide safe defaults for symptoms and severity when omitted

Example:

result = client.auto().llm(
    base_prompt="You are a careful assistant.",
    symptoms=["inconsistent_outputs"],
    severity="medium",
)

Design Principles

  • runtime control over training
  • minimal intervention
  • observable behavior through trace and actions
  • model-agnostic integration

Documentation

Docs in /docs explain:

  • architecture
  • scopes
  • request shapes
  • result behavior
  • integration guidance
  • migration and usage patterns

Status

  • Stable SDK surface
  • Active scopes: llm, rag, step, context, agent
  • Public backend routes aligned to the scope-first contract

License

MIT
