# self-heal

Automatic repair for failing Python code, powered by any LLM.
When a function fails, self-heal catches the exception, analyzes it with an LLM, proposes a repaired version, and retries. One decorator. Works with Claude, OpenAI, Gemini, and 100+ other providers.
```python
from self_heal import repair

@repair(max_attempts=3)
def extract_price(text: str) -> float:
    # Naive: only handles "$X.YY"
    return float(text.replace("$", ""))

print(extract_price("$12.99"))  # 12.99
print(extract_price("₹1,299"))  # 1299.0 (repaired)
print(extract_price("€5,49"))   # 5.49 (repaired)
```
## Install

self-heal ships with a Protocol and several optional adapters. Install the adapter(s) you want:

```shell
pip install 'self-heal-llm[claude]'   # Anthropic Claude (default)
pip install 'self-heal-llm[openai]'   # OpenAI + OpenAI-compatible endpoints
pip install 'self-heal-llm[gemini]'   # Google Gemini
pip install 'self-heal-llm[litellm]'  # 100+ providers via LiteLLM
pip install 'self-heal-llm[all]'      # everything
```

The PyPI distribution name is `self-heal-llm` (the short name `self-heal` was blocked by PyPI's similarity check against an unrelated package). The Python import stays `from self_heal import ...`.
## Provider support

| Adapter | Covers |
|---|---|
| `ClaudeProposer` | Anthropic Claude (native SDK) |
| `OpenAIProposer` | OpenAI + any OpenAI-compatible endpoint (OpenRouter, Together, Groq, Fireworks, Anyscale, Perplexity, xAI, DeepSeek, Azure, Ollama, LM Studio, vLLM, llama.cpp server, ...) |
| `GeminiProposer` | Google Gemini (native SDK) |
| `LiteLLMProposer` | 100+ providers via LiteLLM (Bedrock, Vertex, Cohere, Mistral, ...) |
## Using different providers

Claude (default):

```python
from self_heal import repair

@repair()  # uses ClaudeProposer under the hood
def my_fn(...): ...
```
OpenAI:

```python
from self_heal import repair
from self_heal.llm import OpenAIProposer

@repair(proposer=OpenAIProposer(model="gpt-5"))
def my_fn(...): ...
```
Gemini:

```python
from self_heal import repair
from self_heal.llm import GeminiProposer

@repair(proposer=GeminiProposer(model="gemini-2.5-pro"))
def my_fn(...): ...
```
Any OpenAI-compatible endpoint (OpenRouter, Groq, Ollama, ...):

```python
from self_heal.llm import OpenAIProposer

# OpenRouter: hundreds of models through one key
proposer = OpenAIProposer(
    model="google/gemini-2.5-pro",
    base_url="https://openrouter.ai/api/v1",
)

# Groq: fast inference
proposer = OpenAIProposer(
    model="llama-3.3-70b-versatile",
    base_url="https://api.groq.com/openai/v1",
)

# Local Ollama
proposer = OpenAIProposer(
    model="llama3.3",
    base_url="http://localhost:11434/v1",
    api_key="ollama",
)
```
LiteLLM catch-all (100+ providers):

```python
from self_heal.llm import LiteLLMProposer

proposer = LiteLLMProposer(model="anthropic/claude-sonnet-4-6")
proposer = LiteLLMProposer(model="bedrock/anthropic.claude-3-5-sonnet")
proposer = LiteLLMProposer(model="vertex_ai/gemini-2.5-pro")
proposer = LiteLLMProposer(model="cohere/command-r-plus")
```
## Why this exists
AI coding agents fail on a lot of real tasks. The industry's current answer is "retry and hope." That's not a strategy.
self-heal treats repair as a first-class primitive: diagnose the failure, propose a targeted fix, verify, retry. A thin library you can wrap around any Python function or agent tool.
Built on ongoing code-repair research (RepairBench, NeurIPS 2026).
## How it works
- Catch the exception and capture inputs, traceback, and failure type.
- Classify the failure (validation, exception, assertion).
- Propose a repaired function via LLM with a failure-aware prompt.
- Recompile the proposed function into the running process.
- Retry with the same inputs.
All within a single decorator boundary.
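The steps above can be sketched as a standalone loop, with a stub standing in for the LLM call (all names here are illustrative, not the library's internals):

```python
import traceback

BROKEN = "def parse_int(s):\n    return int(s)"
FIXED = "def parse_int(s):\n    return int(s.replace(',', ''))"

def stub_propose(source: str, failure_traceback: str) -> str:
    # Stand-in for the LLM: always returns the repaired source.
    return FIXED

def compile_fn(source: str):
    namespace: dict = {}
    exec(source, namespace)  # recompile the proposed function into the process
    return namespace["parse_int"]

def repair_loop(source: str, args: tuple, propose, max_attempts: int = 3):
    fn = compile_fn(source)
    for _ in range(max_attempts):
        try:
            return fn(*args)                       # retry with the same inputs
        except Exception:
            failure = traceback.format_exc()       # capture the traceback
            source = propose(source, failure)      # propose a repaired function
            fn = compile_fn(source)
    raise RuntimeError("all repair attempts failed")

print(repair_loop(BROKEN, ("1,299",), stub_propose))  # 1299
```

The real library adds input capture, failure classification, and a failure-aware prompt around the same skeleton.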
## API

Decorator:

```python
from self_heal import repair

@repair(max_attempts=3, model="claude-sonnet-4-6", verbose=True)
def my_tool(x): ...

my_tool(42)
my_tool.last_repair  # -> RepairResult with full attempt history
```
Loop (for advanced use):

```python
from self_heal import RepairLoop

loop = RepairLoop(max_attempts=5, verbose=True)
result = loop.run(my_tool, args=(42,))
if result.succeeded:
    print(result.final_value)
else:
    print(result.attempts[-1].failure.traceback)
```
Custom proposer:

```python
from self_heal.llm import LLMProposer  # the Protocol a proposer satisfies

class MyProposer:
    def propose(self, system: str, user: str) -> str:
        # ... your logic ...
        return "def my_tool(x): return x * 2"
```
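A proposer is just an object whose `propose(system, user)` returns Python source; the loop turns that string back into a callable. A standalone sketch of that step, with a toy proposer and no LLM involved (the real wiring is `@repair(proposer=MyProposer())`, as in the provider examples above):

```python
class EchoProposer:
    """Toy proposer: ignores its prompts and returns fixed source."""
    def propose(self, system: str, user: str) -> str:
        return "def my_tool(x): return x * 2"

source = EchoProposer().propose("system prompt", "traceback goes here")

namespace: dict = {}
exec(source, namespace)          # the recompile step the repair loop performs
repaired = namespace["my_tool"]
print(repaired(21))              # 42
```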
## Safety

self-heal executes LLM-generated code via `exec()` in the same process. This is the same trust boundary as any LLM-in-the-loop system: do not run it against untrusted inputs without a sandbox. Sandboxed execution is on the roadmap.
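To make that boundary concrete: anything passed to `exec()` runs with the full privileges of the host process. A standalone illustration (no library code involved):

```python
# Pretend this string came back from a model; it could contain anything.
proposed = "import os\ncwd = os.getcwd()"

namespace: dict = {}
exec(proposed, namespace)
print(namespace["cwd"])  # the generated code read the real filesystem
```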
## Roadmap
- v0.0.1: core repair loop + decorator + Claude backend
- v0.0.2: OpenAI, Gemini, LiteLLM adapters — works with any LLM
- v0.1: user-provided verifiers (beyond exception-catching)
- v0.2: telemetry + before/after success metrics
- v0.3: async support
- v0.4: sandboxed execution
- v0.5: repair persistence (learn from past fixes)
- v1.0: NeurIPS 2026 paper co-release
## Development

```shell
git clone https://github.com/Johin2/self-heal.git
cd self-heal
python -m venv .venv
.venv/Scripts/pip install -e ".[dev]"  # Windows
# .venv/bin/pip install -e ".[dev]"    # macOS/Linux
pytest
```
## License
MIT