# ResponseIQ

AI-Powered Infrastructure Copilot: The Self-Healing SRE

> "Don't just debug. Fix."
ResponseIQ is an AI-Native Self-Healing Infrastructure Copilot. Unlike traditional parsers that match regex strings, ResponseIQ reads your application logs, loads your actual source code into an LLM context, and generates surgical, context-aware remediation patches for incidents.
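The log-to-source-to-patch flow described above can be sketched in outline. This is a hedged illustration only, not ResponseIQ's actual internals; the function names and the stack-trace format it matches are assumptions:

```python
import re

def extract_source_refs(log_text: str) -> list[str]:
    """Pull file paths out of Python-style stack-trace lines,
    e.g.: File "service.py", line 42, in handler"""
    return re.findall(r'File "([^"]+)", line \d+', log_text)

def build_llm_context(log_text: str, read_file) -> str:
    """Combine the crash log with the source files it references,
    producing the kind of context an LLM would reason over."""
    parts = [f"## Crash log\n{log_text}"]
    for path in extract_source_refs(log_text):
        parts.append(f"## Source: {path}\n{read_file(path)}")
    return "\n\n".join(parts)
```

The point is the ordering: the log identifies *which* files matter, and only those files are loaded into the model's context, rather than matching the log text against a fixed rule set.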
## 📸 See It In Action

*Above: ResponseIQ scanning a crash log, reading the `service.py` file mentioned in the stack trace, and proposing a specific code patch.*
## ✨ Key Features

- 🧠 **AI-Native Analysis**: Uses generative AI reasoning instead of fragile regex parsing rules.
- 👁️ **Context-Aware**: Reads the local source files referenced in logs to understand why the crash happened.
- ⚡ **Self-Healing**: Can generate Pull Requests or apply patches directly (CLI mode).
- 🛡️ **Battle-Tested**: Includes "Sandbox Mode" to safely test remediation logic.
## 🚀 Quick Start (CLI Tool)

For developers who want to fix bugs in their local environment or CI pipeline.

### 1. Install

```bash
pip install responseiq
```
### 2. Configure an LLM

Choose one option:

**Option A: Ollama (free, fully local — recommended)**

```bash
# Install Ollama: https://ollama.com
ollama serve &
ollama pull llama3.2

# Add to .env in your project root:
echo "LLM_BASE_URL=http://localhost:11434/v1" >> .env
echo "LLM_ANALYSIS_MODEL=llama3.2" >> .env
```

**Option B: OpenAI**

```bash
echo "OPENAI_API_KEY=sk-..." >> .env
```

**Option C: No config (rule-engine fallback)**

Works out of the box with no API key — uses a local heuristic parser.
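The three options above form a fallback chain: a configured base URL wins, then a hosted API key, then the built-in rule engine. A toy sketch of that resolution order (function and return values are illustrative, not ResponseIQ's real config code):

```python
def resolve_backend(env: dict) -> str:
    """Pick an LLM backend from environment-style config,
    mirroring Options A, B, and C in order of precedence."""
    if env.get("LLM_BASE_URL"):      # Option A: local/OpenAI-compatible endpoint
        return "ollama"
    if env.get("OPENAI_API_KEY"):    # Option B: hosted OpenAI
        return "openai"
    return "rules"                   # Option C: heuristic parser, no key needed
```

Because the last branch always succeeds, the tool can run with no configuration at all.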
### 3. Scan Your Logs

```bash
# Single file (JSON, .log, or .txt)
responseiq --mode scan --target ./logs/error.log

# Whole directory
responseiq --mode scan --target ./var/log/app/
```
Example output:

```text
------------------------------------------------------------
ResponseIQ Scan Report
Target : logs/error.log
Status : SUCCESS
------------------------------------------------------------
Scanned  : 1 message(s)
Incidents: 1 found
------------------------------------------------------------
1. [CRITICAL] Out of Memory Error
   Source     : ai
   Description: The system is experiencing a critical error due to an out of
                memory condition caused by a resource leak or excessive allocation.
------------------------------------------------------------
Tip: run with --mode fix to apply safe remediations.
------------------------------------------------------------
```
### 4. Shadow Mode (zero-risk demo)

Analyse all incidents and get a projected MTTR savings report — nothing is changed:

```bash
responseiq --mode shadow --target ./logs/ --shadow-report
```
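Conceptually, a projected MTTR savings figure is just the baseline resolution time minus the assisted time, summed over incidents. A toy illustration of that arithmetic (the field names and numbers are made up, not ResponseIQ's report schema):

```python
def projected_mttr_savings(incidents: list[dict]) -> int:
    """Sum (baseline - assisted) resolution minutes across incidents,
    counting an incident as zero savings if assistance would be slower."""
    return sum(
        max(0, i["baseline_min"] - i["assisted_min"])
        for i in incidents
    )

report = projected_mttr_savings([
    {"baseline_min": 45, "assisted_min": 10},  # saves 35 minutes
    {"baseline_min": 30, "assisted_min": 35},  # slower: clamped to 0
])
```

Clamping per-incident savings at zero keeps one pathological incident from masking real gains elsewhere in the report.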
## 🏢 Platform Server (Self-Hosted)

For Platform Engineers who want a centralized incident response API (webhooks for Datadog, PagerDuty, Sentry, etc.).

### Prerequisites

- Docker & Docker Compose
- LLM configured via `.env` (Ollama or OpenAI — see Quick Start above)

### Running with Docker

```bash
# 1. Start the API and Database
docker-compose up -d

# 2. The API is now available at http://localhost:8000
curl http://localhost:8000/health
```
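A webhook-driven server like this typically normalizes each provider's alert payload into one internal incident shape before analysis. The sketch below is purely illustrative; the field names are assumptions about Datadog/PagerDuty payloads, not ResponseIQ's actual schema:

```python
def normalize_alert(source: str, payload: dict) -> dict:
    """Map differently-shaped monitoring webhooks onto one internal
    incident record (hypothetical field names for illustration)."""
    if source == "datadog":
        return {"title": payload.get("title", ""),
                "severity": payload.get("priority", "unknown")}
    if source == "pagerduty":
        return {"title": payload.get("summary", ""),
                "severity": payload.get("urgency", "unknown")}
    # Unknown provider: keep the raw payload visible rather than drop it
    return {"title": str(payload), "severity": "unknown"}
```

Normalizing at the edge means the analysis pipeline downstream only ever sees one incident format, regardless of which monitoring tool fired the webhook.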
### Development Setup (Local)

We use `uv` for lightning-fast dependency management.

```bash
# Install dependencies
uv sync

# Run the API server with hot-reload
uv run uvicorn src.app:app --reload
```
## 🧪 Development & Contributing

### Workflow

- Linting: `make lint`
- Testing: `make test`
- Formatting: `make format`

### Project Structure

- `src/cli.py`: Entry point for the CLI tool.
- `src/app.py`: Entry point for the API server.
- `src/services/remediation_service.py`: The core "brain" that interfaces with the LLM.
## License

MIT
## ⚠️ Disclaimer & Liability

This tool uses generative AI to suggest infrastructure and code fixes. By using ResponseIQ, you acknowledge that:

- **AI Can Hallucinate**: The suggestions provided may be syntactically correct but functionally wrong or insecure.
- **Human Review is Mandatory**: You must strictly review all Pull Requests or patches generated by this tool before deploying them.
- **No Warranty**: As per the MIT License, the authors assume no liability for system outages, data loss, or security vulnerabilities resulting from the use of this software.

For security reporting instructions, please see SECURITY.md.
## File details

Details for the file responseiq-2.12.0.tar.gz.

### File metadata

- Download URL: responseiq-2.12.0.tar.gz
- Upload date:
- Size: 108.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 925fd2f702a70afdd68716c5ea8da916be4a47ae4e75e1ca0f21d6f4b9564d54 |
| MD5 | ec6ebe8a0c418dda5ad8c2f6e49a7d63 |
| BLAKE2b-256 | 842287430d47e6f9db651ebe7889d633733f0d20fb97e433db6ccde0131899e1 |
## File details

Details for the file responseiq-2.12.0-py3-none-any.whl.

### File metadata

- Download URL: responseiq-2.12.0-py3-none-any.whl
- Upload date:
- Size: 134.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 44bc1eb54845c7f2ad03ad0c93d01841100262ed22707fc671a92d498fe89a5b |
| MD5 | bb5c5cb18a12332cb4a2238ac8f4466c |
| BLAKE2b-256 | 1b2c014af49ebe7106307b519d36fe396ef5fb530b0b0d200453c6743562c482 |