InfraRely — Reliable Agent Infrastructure. Production-grade agent framework with zero boilerplate.
Project description
InfraRely
Reliable Agent Infrastructure — Production-grade AI agent framework with zero boilerplate.
Why InfraRely?
Most AI agents today are unreliable when they move from demos to production.
Common Problems
- Non-deterministic execution — behavior changes between runs, making debugging and incident response difficult
- Hallucination (tool + response) — models invent tool names, parameters, outputs, or unsupported claims
- Poor observability — limited traces make it hard to explain failures and regressions
- Fragile multi-agent coordination — delegation and message passing break under real workload pressure
- Weak trust and accountability — decisions are hard to audit, attribute, and defend in production
- Identity breakdown — unclear agent identity/permissions lead to unsafe cross-agent actions
- Memory problems — stale, conflicting, or ungrounded memory corrupts downstream decisions
InfraRely addresses these failures with infrastructure-first primitives:
- Deterministic execution contracts — router-first control flow with frozen plans and explicit fallbacks
- Capability graphs — dependency-aware workflows that compile and execute predictably
- Verification layers — structural, logical, knowledge, and policy checks on every result
- Multi-agent runtime — scheduler, message bus, shared memory, isolation, and deadlock-aware coordination
- Identity and memory controls — runtime identity/permissions plus scoped memory discipline for safer coordination
The result is an AI agent framework designed for reliability, auditability, and safe production deployment.
Quick Start
import infrarely
infrarely.configure(llm_provider="openai", api_key="sk-...")
agent = infrarely.agent("helper")
result = agent.run("What is 2+2?")
print(result.output) # 4 (no LLM call — deterministic math)
Install
pip install infrarely
# With LLM provider extras:
pip install infrarely[openai]
pip install infrarely[anthropic]
pip install infrarely[all-providers]
Features
Core Framework
- 3-line start —
import infrarely→agent()→run() - Errors-as-data —
Resultobjects with.error, never bare exceptions - LLM-as-last-resort — Knowledge → Math → Tools → Capabilities → LLM
- Observable by default — traces, metrics, health checks on every agent
7-Layer Architecture
| Layer | Name | Description |
|---|---|---|
| 1 | Execution Contracts | Deterministic routing, frozen execution plans, three-gate LLM isolation |
| 2 | Capability Graphs | Multi-step workflows with dependency resolution |
| 3 | Infrastructure | Execution depth guard, permissions, tool validation, sandboxing |
| 4 | Verification | Structural/logical/knowledge/policy checks on every result |
| 5 | Adaptive Intelligence | Self-optimizing routing, failure analysis, token optimization |
| 6 | Multi-Agent Runtime | OS-like kernel — scheduler, IPC, shared memory, RBAC, deadlock detection |
| 7 | Autonomous Evolution | Performance analysis, A/B testing, architecture proposals with policy guards |
Tools & Knowledge
@infrarely.tool
def weather(city: str) -> str:
return f"Sunny in {city}"
agent = infrarely.agent("bot", tools=[weather])
result = agent.run("Weather in NYC?")
agent = infrarely.agent("tutor")
agent.knowledge.add_documents("./notes/")
result = agent.run("Explain photosynthesis")
# LLM bypassed if knowledge confidence >= 85%
Multi-Agent
researcher = infrarely.agent("researcher")
writer = infrarely.agent("writer")
facts = researcher.run("Find facts about Mars")
article = writer.run("Write article", context=facts)
Workflows (DAG)
wf = infrarely.workflow("pipeline", steps=[
infrarely.step("fetch", fetch_data),
infrarely.step("process", process, depends_on=["fetch"]),
infrarely.step("report", generate_report, depends_on=["process"]),
])
results = wf.execute()
Streaming
for chunk in agent.stream("Write a poem"):
print(chunk.text, end="", flush=True)
Security
- Prompt injection defense (7 injection types)
- Input sanitization (always-on)
- API key rotation
- Tool execution sandboxing
- Compliance audit logging
Human-in-the-Loop
agent.require_approval_for("send_email", auto_approve_after=300)
result = agent.run("Send welcome email")
# Pauses for human approval
CLI
infrarely run "What is 2+2?"
infrarely health
infrarely metrics
infrarely deploy
infrarely verify
InfraRely Architecture
Applications
│
│
AI Agents Layer
(Custom Agents Built by Developers)
│
│
InfraRely Agent Control Plane
┌─────────────────────────────────────────┐
│ │
│ Agent Pipeline │
│ • Planning Engine │
│ • Capability Graph │
│ • Tool Router │
│ • Verification Layer │
│ │
│ Platform Services │
│ • Memory System │
│ • Knowledge Engine │
│ • Workflow DAG Engine │
│ • Capability Registry │
│ │
│ Reliability Systems │
│ • Retry & Circuit Breakers │
│ • Token Optimization │
│ • Failure Recovery │
│ • Self-Healing Execution │
│ │
│ Observability │
│ • Execution Traces │
│ • Metrics & Telemetry │
│ • Token Budget Monitoring │
│ │
│ Security │
│ • Input Sanitization │
│ • Tool Sandbox │
│ • Permission Policies │
│ • Compliance Logging │
└─────────────────────────────────────────┘
│
│
InfraRely Runtime
(Scheduling, Isolation, State, Scaling)
│
│
External Systems / APIs
Databases • SaaS APIs • Filesystems • LLM Providers
Architecture
InfraRely is structured as a layered Agent Operating System.
-
Applications
- Developer-built AI applications.
-
Agents
- Logical workers that execute tasks and coordinate tools.
-
InfraRely Control Plane
- Planning, routing, verification, and reliability systems.
-
Runtime
- Execution environment responsible for scheduling, isolation, and scalability.
-
External Systems
- APIs, databases, and LLM providers used by agents.
Project Structure
infrarely/
├── core/ # Agent, Result, Config, Events, Decorators, Streaming
├── runtime/ # Workflow DAG, async runner, sandbox, scaling, multi-agent kernel
├── router/ # Rule-based intent classification, tool routing
├── agent/ # Execution pipeline, state machine, planning, verification
├── memory/ # Agent memory, knowledge engine, working/structured/long-term
├── security/ # Prompt injection defense, compliance, input sanitization
├── observability/ # Metrics, traces, logging, dashboard
├── optimization/ # Self-optimizing routing, failure analysis, token optimization
├── learning/ # A/B testing, architecture proposals, policy guards
├── platform/ # HITL, evaluation, versioning, marketplace, multitenancy, ACP
├── tools/ # Tool base classes, registry
├── capabilities/ # Multi-step capability definitions
├── integrations/ # GitHub, Gmail, Slack, Postgres, Notion, Webhooks, REST
├── internal/ # Execution engine bridges (private)
└── cli/ # CLI interface
LLM Providers
| Provider | Model | Setup |
|---|---|---|
| OpenAI | gpt-4o, gpt-4o-mini | infrarely.configure(llm_provider="openai", api_key="sk-...") |
| Anthropic | claude-sonnet-4-20250514 | infrarely.configure(llm_provider="anthropic", api_key="...") |
| Groq | llama-3.1-8b-instant | infrarely.configure(llm_provider="groq", api_key="...") |
| Google Gemini | gemini-1.5-flash | infrarely.configure(llm_provider="gemini", api_key="...") |
| Ollama | llama3.2 (local) | infrarely.configure(llm_provider="ollama") |
Configuration
infrarely.configure(
llm_provider="openai",
api_key="sk-...",
llm_model="gpt-4o",
knowledge_threshold=0.85,
token_budget=10_000,
log_level="INFO",
max_agents=50,
)
Or via environment variables:
export INFRARELY_LLM_PROVIDER=openai
export INFRARELY_API_KEY=sk-...
Documentation
- Quickstart
- Core Concepts
- Architecture
- API Reference
- Vision
- Runtime
- Security Model
- Multi-Agent Runtime
- Verification
- Observability
License
MIT License — see LICENSE for details.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
Filter files by name, interpreter, ABI, and platform.
If you're not sure about the file name format, learn more about wheel file names.
Copy a direct link to the current filters
File details
Details for the file infrarely-0.1.0.tar.gz.
File metadata
- Download URL: infrarely-0.1.0.tar.gz
- Upload date:
- Size: 393.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
49f492e892747855d0817af4c82cc4d6c200f939c580227d193e0af913d5ec8a
|
|
| MD5 |
0984d95dbe63cd908fb0569a84a7a8e2
|
|
| BLAKE2b-256 |
601a965f9f11d5d4c1cb0d4bd1b2ffe5aae42ae07ad6b4c7c77e05f28a20d2f4
|
File details
Details for the file infrarely-0.1.0-py3-none-any.whl.
File metadata
- Download URL: infrarely-0.1.0-py3-none-any.whl
- Upload date:
- Size: 456.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
ecb02778dc7e53774d9b3eb1b21c0f0a79003572b305fe5b20bf76be1eab0b77
|
|
| MD5 |
28f00a468445659db2479f0f5e37ae5e
|
|
| BLAKE2b-256 |
73ef567fdb99ac6bcca9786a8a4898f024fb7426061c303fc8ef6df4ab908885
|