Explains your terminal errors in plain English. Fully local, no API key.
$ python app.py
TypeError: unsupported operand type(s) for +: 'int' and 'str' [line 42]

$ sherpa
sherpa is thinking...

╭─ Why it failed ──────────────────────────────────────────╮
│ You're trying to add an integer and a string at line 42. │
│ Python requires both sides of + to be the same type;     │
│ it won't auto-convert like JavaScript does.              │
╰──────────────────────────────────────────────────────────╯
╭─ Fix ────────────────────────────────────────────────────╮
│ total + int(user_input)                                  │
╰──────────────────────────────────────────────────────────╯
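For readers who haven't hit this particular error, here is a minimal reproduction of the failure above and the suggested cast. The variable names `total` and `user_input` come from the demo; the surrounding script is illustrative:

```python
# Reproduce the TypeError from the demo, then apply the suggested fix.
total = 40
user_input = "2"   # input() always returns a str, even for digits

try:
    total + user_input               # int + str raises TypeError
except TypeError as e:
    print(e)                         # unsupported operand type(s) for +: 'int' and 'str'

print(total + int(user_input))       # cast first and it works: 42
```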
Install
pip install sherpa-dev
Or from source:
git clone https://github.com/RishiiGamer2201/sherpa
cd sherpa
pip install -e .
First Run
sherpa
On first run, Sherpa will prompt you to download a local AI model (~4GB). After that, everything runs offline: no internet, no API key, no external server. Ever.
Usage
# Explain last terminal error (default)
sherpa
# Explain a specific line in a file
sherpa explain app.py:42
# Ask a freeform question
sherpa ask why is my API returning 403 only in production
# Show current config
sherpa cfg show
# Switch to a different model
sherpa cfg set-model /path/to/custom-model.gguf
Why Sherpa?
Every developer hits errors in their terminal every day. The usual workflow:
- Read the error → feel confused
- Copy the error → open browser → Google/ChatGPT → read results → come back
That's a context switch. You leave your flow, lose your mental state, and waste 3–5 minutes on something that should take 5 seconds.
Sherpa eliminates that loop. The explanation and fix come to you, right where the error happened.
🔒 Your code never leaves your machine. Sherpa runs entirely locally using a quantized AI model. No data is sent anywhere. Ever.
How It Works
sherpa  (you type this)
  │
  ├── config.py   → checks if model exists
  ├── setup.py    → downloads model on first run
  ├── history.py  → reads last command + stderr from shell history
  ├── ai.py       → loads local model, runs inference
  └── display.py  → prints explanation + fix with rich styling
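The history and prompt-building steps above can be sketched roughly as follows. The function names, regex, and history format are illustrative assumptions, not Sherpa's actual module APIs:

```python
import re

def last_error(stderr_lines):
    """history.py step (sketch): find the last line in captured stderr
    that looks like a Python exception."""
    for line in reversed(stderr_lines):
        if re.match(r"^\w+(Error|Exception)\b", line.strip()):
            return line.strip()
    return stderr_lines[-1].strip() if stderr_lines else ""

def build_prompt(error):
    """ai.py step (sketch): wrap the error in an instruction prompt
    before handing it to the local model."""
    return f"Explain this terminal error and suggest a fix:\n{error}"

stderr = [
    "Traceback (most recent call last):",
    '  File "app.py", line 42, in <module>',
    "TypeError: unsupported operand type(s) for +: 'int' and 'str'",
]
print(build_prompt(last_error(stderr)))
```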
| Component | Library | Why |
|---|---|---|
| CLI | click | Clean command routing, auto help text |
| Output | rich | Colors, panels, syntax highlighting, progress bars |
| AI | llama-cpp-python | Runs .gguf models inline, no server needed |
| Model | CodeLlama 7B Q4 | Code-optimized, ~4GB, runs on CPU with 8GB RAM |
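The llama-cpp-python row is the heart of the design: the model is invoked in-process, like any Python function, with no server between you and it. A rough sketch of that call shape, with a stub standing in for a loaded `llama_cpp.Llama` instance so the example runs without the 4GB download (the prompt wording and function names are assumptions, not Sherpa's actual code):

```python
def explain(error, llm):
    """llm is any callable mapping prompt text -> completion text.
    In the real tool this would be a loaded llama_cpp.Llama model;
    here a stub keeps the sketch self-contained."""
    prompt = f"Explain this terminal error and suggest a one-line fix:\n{error}"
    return llm(prompt)

def stub_model(prompt):
    # Stands in for local inference; returns a canned explanation.
    return "You're adding an int and a str; cast the string with int() first."

print(explain("TypeError: unsupported operand type(s) for +: 'int' and 'str'",
              stub_model))
```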
Supported Models
| Model | Size | Best for |
|---|---|---|
| codellama-7b-instruct.Q4_K_M.gguf | 4GB | Default: code-specific, fast |
| deepseek-coder-6.7b.Q4_K_M.gguf | 4GB | Slightly better on debug tasks |
| mistral-7b-instruct.Q4_K_M.gguf | 4GB | Good general fallback |
| gemma-2b-it.Q4_K_M.gguf | 1.6GB | Low-RAM machines (4GB or less) |
| llama3.2-3b-instruct.Q4_K_M.gguf | 2GB | Fast, decent quality, mid-range |
Switch models anytime:
sherpa cfg set-model /path/to/model.gguf
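Presumably `cfg set-model` persists the choice in a small config file that `sherpa cfg show` reads back. The location and format below are purely an assumption for illustration, not documented behavior:

```json
{
  "model_path": "/path/to/custom-model.gguf"
}
```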
Comparison
| Tool | Leaves Terminal? | Explains Why? | Works Offline? | Needs API Key? |
|---|---|---|---|---|
| Sherpa | ❌ No | ✅ Yes | ✅ Yes | ❌ No |
| Stack Overflow | ✅ Yes | Sometimes | ❌ No | ❌ No |
| ChatGPT / Claude | ✅ Yes | ✅ Yes | ❌ No | ✅ Yes |
| GitHub Copilot | N/A (IDE) | ✅ Yes | ❌ No | ✅ Yes |
| thefuck | ❌ No | ❌ No | ✅ Yes | ❌ No |
Supported Shells
- ✅ Bash
- ✅ Zsh
- ✅ Fish
Requirements
- Python 3.10+
- 8GB RAM (for default 7B model)
- ~4GB disk space for the model
Contributing
See CONTRIBUTING.md for open tasks and guidelines.
License
Built with Python and llama-cpp-python. Fully local. Your code never leaves your machine.
File details
Details for the file sherpa_dev-0.1.4.tar.gz.
File metadata
- Download URL: sherpa_dev-0.1.4.tar.gz
- Upload date:
- Size: 15.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | eb1a8a8a54d2fa19a5bf8e11e33c8e80ce2bd2e2157489b0b02588ebbf5241bb |
| MD5 | 4c3315ae86124e97e7babbf65fbd1e21 |
| BLAKE2b-256 | 11c7af649e56aed6ca34a31e236efd82e9f41117615adcfa9b2c990c772be1dd |
File details
Details for the file sherpa_dev-0.1.4-py3-none-any.whl.
File metadata
- Download URL: sherpa_dev-0.1.4-py3-none-any.whl
- Upload date:
- Size: 13.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | fa2358dc30c72c4f4fb6dd871ff9a83ba0f1f35ab573121b59f510f389cb936a |
| MD5 | a3c90a0389cf0de5046a25d116b90d43 |
| BLAKE2b-256 | 8d151afb5adf1dc2e75049fb58b324398737af77bd1d1547dfd4d5f9a22dbb9d |