AI-Powered Infrastructure Copilot: The Self-Healing SRE.
ResponseIQ
"Your 3am alert just fixed itself and opened a PR before you woke up."
ResponseIQ is an AI-Native Self-Healing Infrastructure Copilot. Unlike traditional parsers that match regex strings, ResponseIQ reads your application logs, loads your actual source code into an LLM context, and generates surgical, context-aware remediation patches for incidents — with a full audit trail your post-mortem author can paste directly into the incident report.
Try it right now (zero config required):
pip install responseiq && responseiq demo
Or open an instant playground in your browser →
📸 See It In Action
🎬 Animated terminal demo: run vhs demo.tape (VHS required) to regenerate demo.gif.
Real demo — no mocks. The output below was captured live against a real bug injected into the httpie/cli open-source repo, analysed entirely by a local Ollama llama3.2 model. No API key, no cloud, no staging environment.
Step 1 — The crash (http --debug --timeout 30 GET http://httpbin.org/get)
Traceback (most recent call last):
File ".venv/bin/http", line 10, in <module>
sys.exit(main())
File "httpie/core.py", line 140, in raw_main
exit_status = main_program(
File "httpie/core.py", line 213, in program
for message in messages:
File "httpie/client.py", line 66, in collect_messages
send_kwargs = make_send_kwargs(args)
File "httpie/client.py", line 283, in make_send_kwargs
timeout = args.timeout['connect'] if args.timeout else None
~~~~~~~~~~~~^^^^^^^^^^^
TypeError: 'float' object is not subscriptable
Step 2 — Scan (--mode scan)
$ responseiq --mode scan --target ./httpie_crash.log
------------------------------------------------------------
ResponseIQ Scan Report
Target : httpie_crash.log
Status : SUCCESS
------------------------------------------------------------
Scanned : 25 message(s)
Incidents: 25 found
------------------------------------------------------------
1. [HIGH] Float Object Not Subscriptable Error
Source : ai
Description: The log message indicates a TypeError with a float object being
treated as subscriptable. This suggests an issue with data type
conversion or manipulation in the code.
2. [HIGH] Error in Python Script
Source : ai
Description: The log indicates a traceback which suggests an error occurred in
a Python script. Further investigation is required to determine
the root cause.
3. [CRITICAL] Critical: Unhandled Exception in Script Execution
Source : ai
Description: The script is attempting to exit with a non-zero status code
without proper error handling. This could lead to unexpected
behavior or crashes.
4. [CRITICAL] HTTPie Core Crash
Source : ai
Description: A crash occurred in the HTTPie core, referencing line 162 of
httpie/core.py. The stack frame indicates a function call to
raw_main with an invalid parser.
------------------------------------------------------------
Tip: run with --mode fix to apply safe remediations.
------------------------------------------------------------
Step 3 — Fix (--mode fix)
$ responseiq --mode fix --target ./httpie_crash.log
------------------------------------------------------------
ResponseIQ Fix Report
Target : httpie_crash.log
Status : SUCCESS
------------------------------------------------------------
Scanned : 25 message(s)
Fixes : 3 remediation(s) generated
------------------------------------------------------------
1. [CRITICAL] HTTP Server Crash
Allowed : YES
Confidence : 60%
Impact Score : 79.2/100
Blast Radius : single_service
Execution Mode : guarded_apply
Rationale : AI-generated remediation based on incident analysis
Remediation Plan: Check the http module for any recent changes and ensure
it is properly configured. If necessary, revert to a
previous working version.
Rollback Plan : No file changes detected - no rollback required
Test Plan : Run existing test suite; verify --timeout flag behaviour
with float and dict inputs.
Checks Passed : tests, security_scan, syntax_check
Next Step : Remediation approved for automatic execution
Next Step : Monitor system health during application
Next Step : Verify resolution using test plan
2. [CRITICAL] System Exit Due to Main Function Failure
Allowed : YES
Confidence : 60%
Impact Score : 79.2/100
Blast Radius : single_service
Execution Mode : guarded_apply
Remediation Plan: Review main() error propagation and ensure TypeError is
caught and reported with file/line context.
Checks Passed : tests, security_scan, syntax_check
3. [CRITICAL] HTTPie Crash with Invalid URL
Allowed : YES
Confidence : 60%
Impact Score : 79.2/100
Blast Radius : single_service
Execution Mode : guarded_apply
Remediation Plan: Validate __main__.py entry point — ensure exceptions
surfaced from collect_messages propagate correctly.
Checks Passed : tests, security_scan, syntax_check
------------------------------------------------------------
Trust Gate: set RESPONSEIQ_POLICY_MODE=apply to execute changes.
------------------------------------------------------------
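For reference, the root cause in Step 1 is httpie treating a plain float from --timeout as a dict. A minimal sketch of the kind of patch this pipeline is driving at (illustrative only — normalize_timeout is a hypothetical helper, not httpie's actual code):

```python
def normalize_timeout(timeout):
    """Accept either a plain float (what argparse delivers for --timeout 30)
    or a {'connect': ..., 'read': ...} dict, avoiding the
    "'float' object is not subscriptable" TypeError from Step 1."""
    if timeout is None:
        return None
    if isinstance(timeout, dict):
        return timeout.get("connect")
    return float(timeout)
```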
What happened behind the scenes
| Stage | Detail |
|---|---|
| Noise filter | Stripped 42 verbose debug lines (version headers, env repr blocks) → 25 signal lines |
| Concurrent scan | All 25 lines analysed in parallel via asyncio.gather() — single event loop |
| Triage | 3 CRITICAL incidents selected out of 25 for full remediation pipeline |
| P2 Reproduction tests | Auto-generated pytest scripts for each incident |
| Negative Proof | Executed test scripts to confirm failure before fix |
| P3 Git Correlation | Searched commit history for suspect changes |
| P4 Guardrails | 7 rules checked: no bare except, no secrets, no print statements, etc. |
| Trust Gate | All 3 remediations → APPROVED / guarded_apply |
| P5 Integrity Gate | Evidence sealed with SHA-256 chain for SOC2 audit trail |
| P6 Causal Graph | Root-cause dependency graph built for each incident |
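Two of the P4 guardrail rules above ("no bare except", "no print statements") can be sketched with Python's ast module. This is an illustrative toy, not ResponseIQ's actual rule engine:

```python
import ast

def check_guardrails(patch_source: str) -> list[str]:
    """Return the names of violated guardrail rules for a patch.
    Rule names here are illustrative, not ResponseIQ's identifiers."""
    violations = []
    for node in ast.walk(ast.parse(patch_source)):
        # Rule: no bare `except:` clauses in generated patches
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            violations.append("bare_except")
        # Rule: no print statements in generated patches
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "print"):
            violations.append("print_statement")
    return violations
```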
✨ Key Features
- 🧠 AI-Native Analysis: Uses generative AI reasoning instead of fragile regex parsing rules.
- 👁️ Context-Aware: Reads the local source files referenced in logs to understand why the crash happened.
- ⚡ Self-Healing: Can generate Pull Requests or apply patches directly (CLI mode).
- 🛡️ Battle-Tested: Includes "Sandbox Mode" to safely test remediation logic.
🏗️ Architecture
flowchart TD
A([📄 Log Input]) --> B[Noise Filter]
B --> C[⚡ Concurrent Scan\nasyncio.gather]
C --> D{🤖 AI Classifier}
D -->|HIGH / CRITICAL| E[🌲 Context Extractor\nTree-sitter AST]
D -->|LOW / INFO| H
E --> F[🧠 LLM Reasoning\nOllama · OpenAI]
F --> G{🛡️ Trust Gate\n7 guardrails}
G -->|✅ Approved| H[📦 ProofBundle\nSHA-256 sealed]
G -->|🚫 Blocked| I([👤 Human Review])
H --> J[🐙 GitHub PR\ngithubkit]
J --> K[🤖 PR Bot\n/responseiq approve]
⚡ Try it in 60 seconds (no API key needed)
A broken service and a pre-recorded crash log are included in the repo. One command, zero config:
pip install responseiq
git clone https://github.com/infoyouth/responseiq.git && cd responseiq
# Full demo — scan + fix + REASONING.md audit log
./samples/demo.sh --explain
The demo script:
- Shows the 3 real injected bugs in samples/buggy_service.py
- Runs --mode scan and prints the incident report
- Runs --mode fix with Trust Gate evaluation
- Writes a REASONING.md audit log explaining every decision
- Needs no LLM key (rule-engine fallback is active by default)
Want more control? The demo script accepts flags:
./samples/demo.sh # scan only
./samples/demo.sh --fix # scan + fix
./samples/demo.sh --explain # scan + fix + REASONING.md audit log
Expected scan output:
------------------------------------------------------------
ResponseIQ Scan Report
Target : samples/crash.log
Status : SUCCESS
------------------------------------------------------------
Scanned : 3 message(s)
Incidents: 3 found
------------------------------------------------------------
1. [HIGH] KeyError: 'email' in process_user_request
2. [CRITICAL] Memory leak — _request_log unbounded growth
3. [HIGH] ZeroDivisionError: division by zero (reset race)
------------------------------------------------------------
Tip: run with --mode fix to apply safe remediations.
------------------------------------------------------------
See samples/README.md for full details on the embedded bugs and how to reproduce them.
🚀 Quick Start (CLI Tool)
For developers who want to fix bugs in their local environment or CI pipeline.
0. One-liner (see it work immediately)
pip install responseiq && responseiq demo
No config, no API key, no database. responseiq demo runs a live scan + fix cycle against a synthetic incident and shows you a REASONING.md audit trail — all in ~10 seconds.
1. Install
pip install responseiq
2. Configure (30-second wizard)
responseiq init
The wizard asks three questions:
- LLM provider — Ollama (local, free), OpenAI, or none (rule-engine fallback)
- Trust policy — suggest_only, pr_only, or guarded_apply
- GitHub token — optional, for PR bot mode
It writes a .env file and runs a smoke test. Done.
Prefer manual config? Set env vars directly:
# Ollama (free, fully local — recommended)
echo "LLM_BASE_URL=http://localhost:11434/v1" >> .env
echo "LLM_ANALYSIS_MODEL=llama3.2" >> .env
# OpenAI
echo "OPENAI_API_KEY=sk-..." >> .env
# No config — rule-engine fallback, always available
3. Scan Your Logs
# Included sample — fastest path, no setup needed
responseiq --mode scan --target ./samples/crash.log
# Single file (JSON, .log, or .txt)
responseiq --mode scan --target ./logs/error.log
# Whole directory
responseiq --mode scan --target ./var/log/app/
📥 Zero-Config JSON / NDJSON Pipe (no OTel collector needed)
If you don’t have a live OTel collector, pipe logs directly from any source using --target -:
# Plain text lines
cat ./logs/error.log | responseiq --mode scan --target -
# NDJSON (one JSON object per line — Docker, Kubernetes, structured logging)
docker logs my-container 2>&1 | responseiq --mode scan --target -
kubectl logs -l app=api | responseiq --mode fix --target - --explain
# JSON array (e.g. from a log aggregator export)
curl -s 'https://logstore/export?format=json' | responseiq --mode scan --target -
# Datadog / OpenTelemetry structured events
echo '{"level":"ERROR","message":"KeyError: email","service":"api"}' \
| responseiq --mode scan --target -
All three wire formats are auto-detected: NDJSON, JSON array, and plain text.
Example output:
------------------------------------------------------------
ResponseIQ Scan Report
Target : logs/error.log
Status : SUCCESS
------------------------------------------------------------
Scanned : 1 message(s)
Incidents: 1 found
------------------------------------------------------------
1. [CRITICAL] Out of Memory Error
Source : ai
Description: The system is experiencing a critical error due to an out of
memory condition caused by a resource leak or excessive allocation.
------------------------------------------------------------
Tip: run with --mode fix to apply safe remediations.
------------------------------------------------------------
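The auto-detection of the three wire formats could work roughly like this — a sketch under the assumption that a JSON array starts with [ and that NDJSON lines each parse to a JSON object, not ResponseIQ's actual parser:

```python
import json

def _is_json_object(line: str) -> bool:
    try:
        return isinstance(json.loads(line), dict)
    except json.JSONDecodeError:
        return False

def detect_wire_format(text: str) -> str:
    """Classify piped input as 'json_array', 'ndjson', or 'plain_text'."""
    stripped = text.strip()
    if stripped.startswith("["):
        try:
            json.loads(stripped)
            return "json_array"
        except json.JSONDecodeError:
            pass
    lines = [ln for ln in stripped.splitlines() if ln.strip()]
    if lines and all(_is_json_object(ln) for ln in lines):
        return "ndjson"
    return "plain_text"
```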
4. Fix with Explainability
Add --explain to any --mode fix run to produce a REASONING.md audit log:
responseiq --mode fix --target ./samples/crash.log --explain
REASONING.md contains the full decision trace for every incident:
- Why the LLM chose this fix
- Which AST nodes were loaded (Tree-sitter context)
- What the Trust Gate decided and why
- The causal graph JSON
- Rollback plan
- Suspect commit (Git correlation)
Commit REASONING.md alongside the patch for SOC2 / post-incident review.
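The "evidence sealed with SHA-256 chain" idea behind this audit trail can be illustrated with a minimal hash chain. The scheme below is an assumption for illustration — ResponseIQ's actual ProofBundle format may differ:

```python
import hashlib
import json

def seal_evidence(events: list[dict]) -> list[dict]:
    """Chain each evidence record to the previous one: every record embeds
    the SHA-256 digest of its predecessor, so tampering with any entry
    invalidates every later digest."""
    sealed, prev = [], "0" * 64
    for event in events:
        record = {"event": event, "prev": prev}
        prev = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["digest"] = prev
        sealed.append(record)
    return sealed
```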
5. Shadow Mode — Autonomous Triage (Zero Risk, Zero Config)
New to AI-driven remediation? Start here. Shadow Mode is the safest entry point: ResponseIQ never touches your code or infrastructure. It triages incidents, builds the causal graph, classifies severity, and projects what it would fix. Your team gets the signal. You stay in control.
# Try it on the included samples first
responseiq --mode shadow --target ./samples/ --shadow-report
# Or point at your own logs — nothing will be changed
responseiq --mode shadow --target ./logs/ --shadow-report
What you get:
- Incident triage 5x faster than your senior on-call
- Projected MTTR savings over the past 7 days
- Executive-ready markdown report (paste into your next sprint review)
- Full causal graph per incident — no LLM hallucination can trigger a deployment
Once you trust the output, enable pr_only mode to let ResponseIQ open draft PRs — your engineers review, they merge. See docs/SECURITY_TRUST.md for the full trust model.
🏢 Platform Server (Self-Hosted)
For Platform Engineers who want a centralized incident response API (webhooks for Datadog, PagerDuty, Sentry etc.).
Prerequisites
- Docker & Docker Compose
- LLM configured via .env (Ollama or OpenAI — see Quick Start above)
Running with Docker
# 1. Start the API and Database
docker-compose up -d
# 2. The API is now available at http://localhost:8000
curl http://localhost:8000/health
Development Setup (Local)
We use uv for lightning-fast dependency management.
# Install dependencies
uv sync
# Run the API server with hot-reload
uv run uvicorn src.app:app --reload
🧪 Benchmark: SWE-bench Verified
ResponseIQ is evaluated against SWE-bench Verified — the 500-sample human-validated subset used to rank autonomous coding agents (SWE-agent, Devin, OpenHands, etc.).
# Quick smoke run — 5 samples, no LLM key (dry-run)
uv run python scripts/swe_bench_eval.py --samples 5 --dry-run
# Full benchmark run (500 samples, real LLM)
uv run python scripts/swe_bench_eval.py --samples 500
# Filter by repo
uv run python scripts/swe_bench_eval.py --repo sympy/sympy --samples 50
Outputs:
- reports/swe_bench_eval.md — per-repo pass@1 table
- reports/swe_bench_eval.json — machine-readable results per instance
- reports/predictions.jsonl — compatible with the official swebench harness for gold-standard eval
The built-in heuristic pass@1 (non-empty patch + Trust Gate approved + causal symbol overlap) is a fast CI proxy. Feed predictions.jsonl to the official harness for the Docker-based gold-standard evaluation.
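The heuristic proxy reduces to three boolean criteria, roughly as follows (my paraphrase of the description above; the real scorer lives in scripts/swe_bench_eval.py):

```python
def heuristic_pass_at_1(patch: str, trust_gate_approved: bool,
                        causal_symbols: set[str],
                        patch_symbols: set[str]) -> bool:
    """A sample passes iff the patch is non-empty, the Trust Gate approved
    it, and the patch touches at least one symbol from the causal graph."""
    return (bool(patch.strip())
            and trust_gate_approved
            and bool(causal_symbols & patch_symbols))
```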
🔌 Compatible With
ResponseIQ's webhook API is designed to receive alert payloads from the tools your team already uses. Point your existing alert routing at POST /api/v1/incidents/ingest — no agents or plugins required.
| Platform | How to connect |
|---|---|
| Datadog | Webhook integration → POST /api/v1/incidents/ingest |
| PagerDuty | Event Orchestration webhook → same endpoint |
| Sentry | Internal Integrations → Webhook URL |
| GitHub Actions | curl step in your CI workflow (see docs/ARCHITECTURE.md) |
| Alertmanager | Webhook receiver in alertmanager.yml |
All integrations use standard HTTP webhooks — no vendor-specific SDK required.
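A minimal ingest call from any webhook-capable tool might look like this. The payload fields below are illustrative guesses, not the documented schema — check docs/ARCHITECTURE.md for the actual ingest contract:

```shell
# Illustrative payload; field names are assumptions, not the documented schema.
curl -s -X POST http://localhost:8000/api/v1/incidents/ingest \
  -H 'Content-Type: application/json' \
  -d '{
        "source": "alertmanager",
        "severity": "critical",
        "service": "api",
        "message": "KeyError: email in process_user_request"
      }'
```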
🧪 Development & Contributing
Workflow
- Linting: make lint
- Testing: make test
- Formatting: make format
Project Structure
- src/responseiq/cli.py: Entry point for the CLI tool.
- src/responseiq/app.py: Entry point for the API server.
- src/responseiq/services/remediation_service.py: The core "brain" that interfaces with the LLM.
License
MIT
⚠️ Disclaimer & Liability
This tool uses Generative AI to suggest infrastructure and code fixes. By using ResponseIQ, you acknowledge that:
- AI Can Hallucinate: The suggestions provided may be syntactically correct but functionally wrong or insecure.
- Human Review is Mandatory: You must strictly review all Pull Requests or patches generated by this tool before deploying them.
- No Warranty: As per the MIT License, the authors assume no liability for system outages, data loss, or security vulnerabilities resulting from the use of this software.
For security reporting instructions, please see SECURITY.md.