Open-source self-healing runtime for Python: RAG + OpenAI-compatible LLM + unified diff patches
self-heal
Open-source self-healing helper for Python.
It indexes your repository (AST chunks + OpenAI-compatible embeddings), captures tracebacks (in-process hook / decorator or supervised subprocess), asks an OpenAI-compatible chat model for a unified diff, validates paths, optionally applies patches, and records an audit trail.
Highlights
- Python: >= 3.11
- Model protocol: OpenAI-compatible HTTP APIs (`base_url`, chat completions, embeddings)
- Safety first: no `exec` of model output, unified diffs only, allow/deny path checks, masked secrets in locals, API keys from env only
Install
```shell
pip install self-heal-runtime

# Optional: notifications integrations (Sentry SDK extra)
pip install "self-heal-runtime[notifications]"
```
Set `OPENAI_API_KEY` (or the env name from `[llm].api_key_env` in `.self-heal.toml`).
Quickstart
- Initialize config in your project root:

  ```shell
  self-heal init
  ```

- Index code (requires an embeddings endpoint):

  ```shell
  self-heal index
  ```

- Run in supervised mode (captures stderr tracebacks and can propose/heal):

  ```shell
  PYTHONPATH=. self-heal run -- python -m examples.broken_app.main
  ```

  Auto-apply variant (still validates paths; use with care):

  ```shell
  PYTHONPATH=. self-heal run --auto --no-dry-run -- python -m examples.broken_app.main
  ```

- Run an offline heal from a traceback file:

  ```shell
  python -m examples.broken_app.main 2> tb.txt  # or copy a traceback
  self-heal heal --tb tb.txt
  ```

- Use as a library (in-process):

  ```python
  from pathlib import Path

  from self_heal import install, self_heal

  install(project_root=Path(__file__).resolve().parents[1])

  @self_heal(mode="suggest")
  def risky():
      ...
  ```
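To make the in-process flow concrete, here is a minimal sketch of what a suggest-mode decorator could do internally: catch the exception, capture the traceback for analysis, and re-raise. This is a hypothetical illustration, not the library's actual implementation (`self_heal_sketch` and `last_traceback` are invented names for this example).

```python
import functools
import traceback


def self_heal_sketch(mode="suggest"):
    """Hypothetical sketch of a suggest-mode decorator: catch an
    exception, stash the formatted traceback, then re-raise.
    The real library would also sanitize locals and query the model."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                # Capture the traceback text that would be sent (sanitized)
                # to the model along with retrieved code context.
                wrapper.last_traceback = traceback.format_exc()
                raise
        return wrapper
    return decorator


@self_heal_sketch(mode="suggest")
def risky():
    return 1 / 0
```

Re-raising keeps the program's normal failure semantics intact: suggest mode observes errors without swallowing them.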
Configuration
See `.self-heal.example.toml` or run `self-heal init` (the template ships in `self_heal/templates/`).
Key sections: `[llm]`, `[index]`, `[heal]`, `[supervisor]`, `[notifications]`.
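As an orientation, a minimal config might look like the fragment below. Only keys shown elsewhere in this page are used; the authoritative list of keys lives in `.self-heal.example.toml`.

```toml
# Illustrative layout only; see .self-heal.example.toml for all keys.
[llm]
provider = "openai"
base_url = "https://api.openai.com/v1"
model = "gpt-4o-mini"
embedding_model = "text-embedding-3-small"
api_key_env = "OPENAI_API_KEY"

[notifications]
enabled = false
```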
Providers
The `[llm].provider` field selects how chat completions are issued. Embeddings always use an OpenAI-compatible endpoint (so OpenAI / HF / Ollama work out of the box for indexing).
- OpenAI (default):

  ```toml
  [llm]
  provider = "openai"
  base_url = "https://api.openai.com/v1"
  model = "gpt-4o-mini"
  embedding_model = "text-embedding-3-small"
  api_key_env = "OPENAI_API_KEY"
  ```

- Anthropic (native Messages API: `POST /v1/messages` with `x-api-key` and `anthropic-version` headers):

  ```toml
  [llm]
  provider = "anthropic"
  base_url = "https://api.anthropic.com/v1"
  model = "claude-3-5-sonnet-20241022"
  api_key_env = "ANTHROPIC_API_KEY"
  ```

  Anthropic does not provide an embeddings API, so `self-heal index` is rejected when `provider = "anthropic"`. To use RAG, run indexing with another provider (e.g. swap to `openai`/`huggingface`/`ollama` for `self-heal index`, then switch back to `anthropic` for healing).

- Hugging Face (Inference Providers router, OpenAI-compatible):

  ```toml
  [llm]
  provider = "huggingface"
  base_url = "https://router.huggingface.co/v1"
  model = "meta-llama/Llama-3.1-8B-Instruct"
  embedding_model = "intfloat/multilingual-e5-large"
  api_key_env = "HF_TOKEN"
  ```

- Ollama (local OpenAI-compatible endpoint, no auth):

  ```toml
  [llm]
  provider = "ollama"
  base_url = "http://localhost:11434/v1"
  model = "llama3.1"
  embedding_model = "nomic-embed-text"
  api_key_env = "OLLAMA_API_KEY"  # ignored by Ollama
  ```
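All of these providers share the same wire format for chat. A small sketch of the request an OpenAI-compatible client would send (field names follow the public OpenAI chat-completions schema; the helper function and prompts are illustrative, not part of this library):

```python
import json


def build_chat_request(base_url: str, model: str,
                       system_prompt: str, user_prompt: str):
    """Build the URL and JSON body for a POST to an OpenAI-compatible
    /chat/completions endpoint. Sketch only: real clients also set
    auth headers and handle streaming/retries."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }
    return url, json.dumps(body)


# Example against a local Ollama endpoint (no auth header needed):
url, body = build_chat_request(
    "http://localhost:11434/v1", "llama3.1",
    "Reply only with a unified diff.", "Fix this traceback: ...")
```

Because only `base_url`, the model name, and the auth header differ, swapping providers is purely a config change.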
Notifications
Optional outbound reports when an error is captured and when a heal is proposed or applied. Enable `[notifications].enabled = true`, then turn on individual channels under `[notifications.telegram]`, `[notifications.slack]`, `[notifications.webhook]`, or `[notifications.sentry]`. Secrets stay in environment variables (e.g. `TELEGRAM_BOT_TOKEN`, `SLACK_WEBHOOK_URL`, `SELF_HEAL_WEBHOOK_URL`, `SENTRY_DSN`).
- Payload: by default, diffs are omitted and tracebacks are truncated (`include_diff`, `include_traceback`, `max_traceback_lines`)
- Webhook: JSON POST with optional HMAC (`X-SelfHeal-Signature: sha256=…`, `X-SelfHeal-Timestamp`); only `https` targets are allowed unless `allow_insecure = true`
- Sentry: install extras with `pip install 'self-heal-runtime[notifications]'` (pulls in `sentry-sdk`)
Delivery is asynchronous; successes and failures are also written to `.self-heal/audit.jsonl` as `notification_sent` / `notification_failed`.
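On the receiving side, a webhook consumer can verify the HMAC header. The sketch below assumes the signature is an HMAC-SHA256 over the raw request body (the exact signed payload, and whether the timestamp is included in it, should be confirmed against the project's docs):

```python
import hashlib
import hmac


def verify_webhook(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Verify an 'X-SelfHeal-Signature: sha256=<hexdigest>' header.
    Assumes HMAC-SHA256 over the raw body; uses a constant-time
    comparison to avoid timing side channels."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)


# Example: a sender would compute the same digest over the JSON body.
secret = b"webhook-secret"
body = b'{"event": "heal_proposed"}'
header = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
```

`hmac.compare_digest` is the important detail: a plain `==` comparison can leak how many leading bytes matched.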
Security
- Model output is never executed as Python; only unified diffs are accepted.
- Patches are checked against `allowed_paths`/`forbidden_paths`; defaults block `.git/`, `.env*`, secret globs, `pyproject.toml`.
- Locals and tracebacks are sanitized before they are sent to the model.
- Prefer `dry-run` (the CLI default) until you trust the workflow; use `--no-dry-run` with `--auto` only when appropriate.
Limitations (MVP)
- Patch application prefers `git apply` inside a git repo; otherwise it tries the system `patch`, then a small Python hunk applier.
- Supervised mode expects Python-style tracebacks on stderr.
- The embedding dimension is assumed compatible with the vectors stored in Chroma (the default client setup targets OpenAI `text-embedding-3-small`-sized vectors).
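To illustrate what the last-resort "small Python hunk applier" fallback might involve, here is a sketch of applying a single pre-parsed hunk; this is not the library's code, and a real applier would also parse `@@` headers and handle multiple hunks with offsets:

```python
def apply_single_hunk(lines, old_start, removed, added):
    """Apply one unified-diff hunk to a list of lines.
    `old_start` is the 1-based line number from the '@@' header,
    `removed` the lines the hunk expects there (context plus '-' lines),
    `added` the replacement (context plus '+' lines).
    Raises ValueError if the context does not match. Sketch only."""
    i = old_start - 1
    if lines[i:i + len(removed)] != removed:
        raise ValueError("hunk context mismatch")
    return lines[:i] + added + lines[i + len(removed):]


# Replace line 2 ("b") with "B" in a three-line file:
patched = apply_single_hunk(["a", "b", "c"], 2, ["b"], ["B"])
```

Failing loudly on a context mismatch is the key safety property: a hunk must never be applied to lines it was not generated against.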
License
MIT — see LICENSE.
File details
Details for the file `self_heal_runtime-0.3.0.tar.gz`.
File metadata
- Download URL: self_heal_runtime-0.3.0.tar.gz
- Upload date:
- Size: 35.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.15
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f50bca9779995daad9f69ee1879062f59b3cbe40ebe481b3f874786286ac8087` |
| MD5 | `5cea899e82d62c67ff2b82085ba2c5a8` |
| BLAKE2b-256 | `228ef590b15be892542f9510b875e2ca47e39c9ad160d4539cb715163480eba5` |
File details
Details for the file `self_heal_runtime-0.3.0-py3-none-any.whl`.
File metadata
- Download URL: self_heal_runtime-0.3.0-py3-none-any.whl
- Upload date:
- Size: 47.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.15
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `7e2c7e558dcfd4c6d6dc7f360be27f46b5737c1761f186ecdb13ff9e41602955` |
| MD5 | `638f90f711ffb0ae934aa44f1acfc387` |
| BLAKE2b-256 | `86e94482ad0d20ba4d62f5baa7cdb85a1050a36e4c4a0899abe6ec5fd872bf0d` |