# wolverine-x

Self-healing scripts: run your scripts, and when they crash, an LLM fixes them. Repeat until green.

No API key required. Runs 100% locally via Ollama.
## What it does

```
$ wolverine examples/buggy_script.py "subtract" 20 3

Wolverine activated.
  Script   : examples/buggy_script.py
  Model(s) : ollama/qwen2.5-coder:7b -> ollama/qwen2.5-coder:14b
  Max iter : 10

[iter 1/10] model: ollama/qwen2.5-coder:7b
Script crashed. Sending to LLM...
  - The function 'subtract_numbers' is referenced but never defined. Adding it.
  Diff:
  + def subtract_numbers(a, b):
  +     return a - b
Changes applied. Rerunning...

[iter 2/10] model: ollama/qwen2.5-coder:7b
Script crashed. Sending to LLM...
  - Variable 'res' used but never defined. Should be 'result'.
  Diff:
  - return res
  + return result
Changes applied. Rerunning...

[iter 3/10] model: ollama/qwen2.5-coder:7b
Script ran successfully.
Output: 17

Session: 3499 tokens | $0.00 total cost
```
## Why wolverine-x?
wolverine-x is a fork/revival of the original biobootloader/wolverine (5.1k stars, deprecated 2023). The original was broken by OpenAI SDK v1.x and only worked with GPT-4.
This version:
- Works out of the box with local Ollama models (free, private, offline)
- Supports 6 languages: Python, JavaScript, TypeScript, Bash, Ruby, Go
- Is smart about cost: tries cheaper/faster models first, escalates on failure
- Doesn't loop forever: stall detection, configurable max iterations, backup restore
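The project's actual stall detector isn't shown on this page; a minimal sketch of one plausible approach, which fingerprints the file contents between iterations and flags a repeat (all names here are hypothetical, not the project's internals):

```python
import hashlib

def file_fingerprint(text: str) -> str:
    """Hash the script contents so identical states compare cheaply."""
    return hashlib.sha256(text.encode()).hexdigest()

def is_stalled(history: list[str], current: str, window: int = 2) -> bool:
    """Treat the loop as stalled if the current file state already appeared
    in the last `window` iterations (the model is repeating itself)."""
    return file_fingerprint(current) in history[-window:]

# Iteration 3 reproduces the state from iteration 2 -> stall
seen = [file_fingerprint("v1"), file_fingerprint("v2")]
print(is_stalled(seen, "v2"))  # True
print(is_stalled(seen, "v3"))  # False
```

A content hash is enough here because a stalled fix loop regenerates byte-identical files; no diffing is needed.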
### vs. other tools

| | wolverine-x | Aider | Cursor | GitHub Copilot CLI |
|---|---|---|---|---|
| Autonomous fix loop | Yes | No (human-in-loop) | No (IDE) | No (suggests only) |
| Local LLM (free) | Yes | Yes | No | No |
| Works in CI | Yes (--ci) | No | No | No |
| No API key needed | Yes | Requires key | Requires key | Requires key |
| Use case | Fix crashes | Write features | Write features | Suggest commands |
## Install

```shell
# Recommended: via pip
pip install wolverine-x

# Or from source
git clone https://github.com/Mrawdian/wolverine
cd wolverine
pip install -e .
```
Prerequisite for local models: install Ollama and pull a model:

```shell
ollama pull qwen2.5-coder:14b   # best quality (~9GB)
ollama pull qwen2.5-coder:7b    # faster, lighter (~5GB)
```
## Usage

```shell
# Basic — fix a Python script using the local default model
wolverine buggy_script.py

# Pass arguments to your script
wolverine buggy_script.py arg1 arg2

# Use a specific model
wolverine --model=ollama/qwen2.5-coder:7b buggy_script.py

# Cascade: try 7b first, escalate to 14b if stuck
wolverine --model="ollama/qwen2.5-coder:7b,ollama/qwen2.5-coder:14b" buggy_script.py

# Ask for confirmation before each fix
wolverine --confirm buggy_script.py

# CI mode (exit 0 on success, exit 1 on failure)
wolverine --ci --max-iter=5 buggy_script.py

# Revert to the original file (before wolverine touched it)
wolverine --revert buggy_script.py

# Short alias
wlvr buggy_script.py
```
## Supported languages

| Extension | Runtime required |
|---|---|
| .py | Python (auto-detected) |
| .js | Node.js |
| .ts | Node.js + ts-node |
| .sh | bash |
| .rb | Ruby |
| .go | Go |
## Configuration

### Environment variables (`.env` or shell)

| Variable | Default | Description |
|---|---|---|
| DEFAULT_MODEL | ollama/qwen2.5-coder:14b | Model or comma-separated cascade |
| WOLVERINE_MAX_ITER | 10 | Max fix attempts |
| VALIDATE_JSON_RETRY | 3 | Max LLM retries on bad JSON |
| OLLAMA_BASE_URL | http://localhost:11434 | Ollama server URL |
| OPENAI_API_KEY | — | Required for OpenAI models |
| ANTHROPIC_API_KEY | — | Required for Anthropic models |
| XAI_API_KEY | — | Required for Grok/xAI models |
### Project config (`wolverine.toml`)

```toml
[wolverine]
# Cascade: try fast local model first, escalate to heavier one on stall
model = "ollama/qwen2.5-coder:7b,ollama/qwen2.5-coder:14b"
max_iter = 10
confirm = false
max_json_retry = 3
```
CLI flags always override `wolverine.toml`, which overrides `.env`.
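That precedence rule amounts to a layered lookup; a sketch under the assumption that each source is already loaded into a dict (function and dict names are illustrative, not the project's internals):

```python
def resolve_setting(cli: dict, toml_cfg: dict, env: dict, key: str, default=None):
    """CLI flags win, then wolverine.toml, then .env/environment, then the default."""
    for layer in (cli, toml_cfg, env):
        if layer.get(key) is not None:
            return layer[key]
    return default

cli = {"max_iter": 5}
toml_cfg = {"max_iter": 10, "confirm": False}
env = {"max_iter": "20"}
print(resolve_setting(cli, toml_cfg, env, "max_iter"))  # 5
print(resolve_setting({}, toml_cfg, env, "confirm"))    # False
```

Checking `is not None` (rather than truthiness) matters so that an explicit `confirm = false` in the TOML file still overrides an environment value.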
Using cloud models
wolverine-x supports any provider that litellm supports.
# OpenAI
OPENAI_API_KEY=sk-... wolverine --model=gpt-4o buggy_script.py
# Anthropic
ANTHROPIC_API_KEY=sk-ant-... wolverine --model=claude-opus-4-6 buggy_script.py
# Grok/xAI
XAI_API_KEY=xai-... wolverine --model=xai/grok-3 buggy_script.py
# Local → cloud cascade (free first, pay only if needed)
wolverine --model="ollama/qwen2.5-coder:14b,gpt-4o" buggy_script.py
## How it works

1. wolverine backs up your script (`.bak`)
2. Runs the script
3. If it crashes, sends the current file plus the error to the LLM
4. The LLM responds with a JSON list of line-level edits (Replace / Delete / InsertAfter)
5. The edits are applied and the script reruns
6. Repeat until success, max iterations reached, or a stall is detected
7. On failure, the original file is restored from the backup
The conversation history is preserved across iterations, so the LLM sees what it already tried — no fix is applied twice.
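Steps 4–5 above (applying the LLM's line-level edits) can be sketched roughly like this; the JSON field names (`op`, `line`, `content`) are assumptions for illustration, not the project's documented schema:

```python
def apply_edits(lines: list[str], edits: list[dict]) -> list[str]:
    """Apply line-level edits to a script held as a list of lines.
    Edits are applied in descending line order so that earlier line
    numbers stay valid while later lines are being modified."""
    out = list(lines)
    for e in sorted(edits, key=lambda e: e["line"], reverse=True):
        i = e["line"] - 1  # assume 1-indexed line numbers in the LLM's JSON
        if e["op"] == "Replace":
            out[i] = e["content"]
        elif e["op"] == "Delete":
            del out[i]
        elif e["op"] == "InsertAfter":
            out[i + 1 : i + 1] = e["content"].splitlines()
    return out

src = ["def subtract(a, b):", "    return res"]
edits = [{"op": "Replace", "line": 2, "content": "    return a - b"}]
print(apply_edits(src, edits))
# ['def subtract(a, b):', '    return a - b']
```

Sorting descending is the standard trick for line-addressed edits: a Delete or InsertAfter near the top would otherwise shift every subsequent line number.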
## Development

```shell
git clone https://github.com/Mrawdian/wolverine
cd wolverine
python -m venv venv && source venv/bin/activate   # Windows: venv\Scripts\activate
pip install -e ".[dev]"

# Run tests
pytest

# Lint
ruff check .

# Try it
wolverine examples/buggy_script_simple.py 10
```
## Credits
Original concept and implementation: @biobootloader — biobootloader/wolverine
Revival and v0.2.0: multi-provider support, local LLM, model cascade, stall detection, multi-language.
## License
MIT