Runtime control layer for stabilizing AI systems and improving behavior without retraining
Aegis Client
Runtime control for AI systems.
Aegis sits on top of your AI pipeline and returns structured control decisions that stabilize behavior at runtime without replacing your model, agent, or retrieval system.
Why Aegis
Modern AI systems often fail in subtle but costly ways:
- inconsistent outputs across similar inputs
- unstable multi-step reasoning
- retrieval drift in RAG systems
- fragile workflow and agent execution
Aegis addresses these problems with runtime stabilization, not retraining, fine-tuning, or model swapping.
Core Idea
Aegis is a control layer, not an execution layer.
from aegis import AegisClient
client = AegisClient(api_key="YOUR_API_KEY")
result = client.auto().llm(...)
Aegis will:
- detect instability signals
- select corrective actions
- return runtime controls and observability data
Aegis does not execute the downstream LLM call for you.
Installation
pip install scelabs-aegis
Hosted or Self-Hosted
You can use Aegis through the hosted API or against your own backend deployment.
Get an API Key
Hosted
curl -X POST https://aegis-backend-production-4b47.up.railway.app/v1/onboard \
-H "Content-Type: application/json" \
-d '{"email":"you@example.com"}'
Local
curl -X POST http://localhost:8000/v1/onboard \
-H "Content-Type: application/json" \
-d '{"email":"you@example.com"}'
This returns:
- api_key
- auto_llm_url
- auto_rag_url
- auto_step_url
- example usage
Set Environment
Hosted
export AEGIS_API_KEY=your_key_here
export AEGIS_BASE_URL=https://aegis-backend-production-4b47.up.railway.app
Local
export AEGIS_API_KEY=your_key_here
export AEGIS_BASE_URL=http://localhost:8000
First Call
from aegis import AegisClient, AegisConfig
client = AegisClient(
config=AegisConfig(mode="balanced"),
)
result = client.auto().llm(
base_prompt="You are a careful assistant.",
input={"user_query": "Explain recursion simply."},
symptoms=["inconsistent_outputs"],
severity="medium",
)
print(result.actions)
print(result.explanation)
print(result.scope_data)
Scope-First API
Aegis uses a scope-first runtime interface:
client.auto().llm(...)
client.auto().rag(...)
client.auto().step(...)
These calls map to first-class public backend routes:
- POST /v1/auto/llm
- POST /v1/auto/rag
- POST /v1/auto/step
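Because the scopes map directly to HTTP routes, you can also call the backend without the SDK. The sketch below builds such a request; the `Authorization: Bearer` auth scheme and the `build_auto_request` helper are assumptions for illustration, not part of the SDK, so check your deployment before relying on them.

```python
import os

def build_auto_request(base_url: str, scope: str, payload: dict) -> dict:
    """Return URL, headers, and JSON body for a POST /v1/auto/<scope> call."""
    if scope not in ("llm", "rag", "step"):
        raise ValueError(f"unknown scope: {scope}")
    return {
        "url": f"{base_url.rstrip('/')}/v1/auto/{scope}",
        "headers": {
            "Content-Type": "application/json",
            # Assumed auth scheme; adjust to match your backend.
            "Authorization": f"Bearer {os.environ.get('AEGIS_API_KEY', '')}",
        },
        "json": payload,
    }

req = build_auto_request(
    "http://localhost:8000",
    "llm",
    {"symptoms": ["inconsistent_outputs"], "severity": "medium"},
)
print(req["url"])  # http://localhost:8000/v1/auto/llm
```

You can pass the returned dict to any HTTP client (for example, `requests.post(req["url"], headers=req["headers"], json=req["json"])`).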
Scopes
LLM
Use llm when you need stabilization around a direct model call.
result = client.auto().llm(
base_prompt="You are a careful assistant.",
input={"user_query": "Explain recursion simply."},
symptoms=["inconsistent_outputs"],
severity="medium",
)
RAG
Use rag when instability appears in retrieval plus generation.
result = client.auto().rag(
query="What changed in the policy?",
retrieved_context=[
"Policy updated last week.",
"Refund window reduced to 14 days."
],
symptoms=["retrieval_drift"],
severity="medium",
)
Step
Use step when you need stabilization for a workflow or agent step.
result = client.auto().step(
step_name="coordinator",
step_input={"task": "resolve ticket"},
symptoms=["unstable_workflow"],
severity="medium",
)
What Aegis Returns
Every call returns an AegisResult.
result = client.auto().llm(...)
Key fields
- actions — interventions Aegis selected
- trace — list-based control trace
- metrics — runtime signals
- used_fallback — whether fallback behavior was used
- explanation — concise rationale
- scope — llm, rag, or step
- scope_data — scope-specific runtime data
Important
Aegis is a control layer.
That means:
- final_answer may be None
- output may be None
Aegis does not generate the final model answer itself. It returns the control decisions and runtime shaping you apply to your own model or system.
Typical LLM Integration Pattern
result = client.auto().llm(
base_prompt="You are a helpful assistant.",
input={"user_query": "Explain black holes simply."},
symptoms=["inconsistent_outputs"],
severity="medium",
)
runtime_config = result.scope_data.get("runtime_config", {})
controlled_prompt = result.scope_data.get("controlled_prompt")
print(runtime_config)
print(controlled_prompt)
print(result.actions)
You then apply the returned controls in your own downstream model call.
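One way to do that is to overlay the returned controls onto the keyword arguments of your own model call. The `apply_controls` helper below is a hypothetical sketch, not part of the Aegis SDK; it assumes `runtime_config` holds sampling parameters (such as the `temperature` and `top_p` shown in the example result shape) and that `controlled_prompt`, when present, should replace your prompt.

```python
def apply_controls(model_kwargs: dict, scope_data: dict) -> dict:
    """Overlay Aegis runtime_config and controlled_prompt onto model kwargs."""
    merged = dict(model_kwargs)
    # Runtime config (e.g. temperature, top_p) overrides your defaults.
    merged.update(scope_data.get("runtime_config", {}))
    # If Aegis shaped the prompt, prefer the controlled version.
    controlled = scope_data.get("controlled_prompt")
    if controlled:
        merged["prompt"] = controlled
    return merged

kwargs = apply_controls(
    {"model": "my-model", "temperature": 0.9, "prompt": "original prompt"},
    {"runtime_config": {"temperature": 0.2, "top_p": 0.8},
     "controlled_prompt": "You are a helpful assistant. ..."},
)
print(kwargs["temperature"])  # 0.2
```

The merged kwargs then go straight into whatever model client you already use.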
Example Result Shape
{
"output": null,
"final_answer": null,
"metrics": {
"action_count": 2
},
"actions": [
{
"type": "reduce_variability",
"intensity": "medium",
"label": "Reduce output variability"
}
],
"trace": [
{
"scope": "llm",
"observation": {},
"decision": {},
"actions": [],
"fallback": {
"used_fallback": false
},
"changes": {},
"upstream": {}
}
],
"used_fallback": false,
"explanation": "Selected because it achieved the highest overall score.",
"scope": "llm",
"scope_data": {
"runtime_config": {
"temperature": 0.2,
"top_p": 0.8
},
"controlled_prompt": "You are a helpful assistant. ..."
}
}
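A result in this shape (for example, from `result.to_dict()`) can be inspected with ordinary dict access. This sketch pulls out the fields most useful for routing logic; the dict literal is a trimmed copy of the example above, not live output.

```python
# Trimmed copy of the example result shape above.
result = {
    "actions": [{"type": "reduce_variability", "intensity": "medium",
                 "label": "Reduce output variability"}],
    "used_fallback": False,
    "scope": "llm",
    "scope_data": {"runtime_config": {"temperature": 0.2, "top_p": 0.8}},
}

# Which interventions were selected, and whether fallback behavior kicked in.
action_types = [a["type"] for a in result["actions"]]
print(action_types)             # ['reduce_variability']
print(result["used_fallback"])  # False
```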
Debugging
print(result.debug_summary())
print(result.to_dict())
Useful fields to inspect first:
print(result.actions)
print(result.explanation)
print(result.trace)
print(result.scope_data)
Configuration
from aegis import AegisConfig
config = AegisConfig(
mode="balanced",
max_interventions=3,
allow_retries=True,
allow_retrieval_expansion=True,
allow_context_reduction=True,
allow_prompt_shaping=True,
fallback="baseline",
explain=False,
emit_trace=False,
policy=None,
timeout_ms=30000,
)
Required Request Inputs
For scope calls, provide:
- symptoms — required, non-empty list
- severity — required, one of: low, medium, high
Example:
result = client.auto().llm(
base_prompt="You are a careful assistant.",
symptoms=["inconsistent_outputs"],
severity="medium",
)
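Since the backend rejects calls without these inputs, it can help to validate them client-side before making the request. `validate_inputs` below is a hypothetical helper, not part of the SDK; it simply encodes the two requirements stated above.

```python
VALID_SEVERITIES = {"low", "medium", "high"}

def validate_inputs(symptoms: list, severity: str) -> None:
    """Raise ValueError if the required scope-call inputs are invalid."""
    if not symptoms:
        raise ValueError("symptoms must be a non-empty list")
    if severity not in VALID_SEVERITIES:
        raise ValueError(f"severity must be one of {sorted(VALID_SEVERITIES)}")

validate_inputs(["inconsistent_outputs"], "medium")  # passes silently

try:
    validate_inputs([], "medium")
except ValueError as e:
    print(e)  # symptoms must be a non-empty list
```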
Design Principles
- runtime control over training
- minimal intervention
- observable behavior through trace and actions
- model-agnostic integration
Documentation
Docs in /docs explain:
- architecture
- scopes
- result behavior
- integration guidance
- migration and usage patterns
Status
- Stable SDK surface
- Active scopes: llm, rag, step
- Public backend routes aligned to the scope-first contract
License
MIT
File details
Details for the file scelabs_aegis-0.3.1.tar.gz.
File metadata
- Download URL: scelabs_aegis-0.3.1.tar.gz
- Upload date:
- Size: 13.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.4
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 2c101f6358dcf55added68cc8ed33a5b09862aacb20063cafcb7360a1b252874 |
| MD5 | 20e26dae462d2d95d87acb33a6b2e44b |
| BLAKE2b-256 | 2203c48ea938cb76f52521d752e2276a09dd16e36c1e94713f3dd791f418518c |
File details
Details for the file scelabs_aegis-0.3.1-py3-none-any.whl.
File metadata
- Download URL: scelabs_aegis-0.3.1-py3-none-any.whl
- Upload date:
- Size: 12.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.4
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e906e11c7e3bfc74b81633548fd5bc5d995c0794f3cf337b8dca7255aae44d1c |
| MD5 | 3cbf9d4ab86d26b06a36d5e445761a0f |
| BLAKE2b-256 | 978a50846bfd69c4928008059985831edb8c572245c488e6898dd53a3f5be6cb |