re:prompt

Discover, analyze, and optimize your prompts from AI coding sessions
Score, rewrite, and optimize your AI prompts -- the only CLI that improves your prompts automatically. No LLM needed.
See it in action
$ pip install reprompt-cli
# Rewrite a weak prompt into a better one (no LLM, rule-based)
$ reprompt rewrite "I was wondering if you could maybe help me fix the auth bug"
34 → 52 (+18)
╭─ Rewritten ────────────────────────────────────────────────╮
│ Help me fix the auth bug. │
╰────────────────────────────────────────────────────────────╯
Changes
✓ Removed filler (24% shorter)
✓ Removed hedging language
You should also
→ Add actual code snippets or error messages for context
→ Reference specific files or functions by name
→ Add constraints (e.g., "Do not modify existing tests")
# Score any prompt instantly (research-backed, 30+ features)
$ reprompt score "Fix the auth bug in src/login.ts where JWT expires"
Score: 40/100 (Fair)
Tip: Include the error message -- debug prompts with errors are 3.7x more effective
# Compress prompts to save tokens
$ reprompt compress "I was wondering if you could please help me refactor this code. Basically what I need is to split this function into smaller helpers."
Before: 28 tokens → After: 14 tokens (50% saved)
# Your personal dashboard
$ reprompt
╭─ Prompt Dashboard ─────────────────────────────────────────╮
│ Prompts: 1,063 (295 unique) Sessions: 890 │
│ Avg Score: 68/100 Top: debug (31%), impl (24%)│
╰────────────────────────────────────────────────────────────╯
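The rewrite above is rule-based end to end. As a minimal sketch of how filler and hedging removal can work -- the phrase list and regexes here are illustrative, not re:prompt's shipped rules:

```python
import re

# Illustrative hedging/filler phrases -- NOT re:prompt's actual rule set.
FILLER_PATTERNS = [
    r"\bI was wondering if you could\b",
    r"\bmaybe\b",
    r"\bbasically\b",
    r"\bplease\b",
    r"\bkind of\b",
]

def strip_filler(prompt: str) -> str:
    """Remove hedging/filler phrases, then tidy whitespace and casing."""
    for pattern in FILLER_PATTERNS:
        prompt = re.sub(pattern, "", prompt, flags=re.IGNORECASE)
    prompt = re.sub(r"\s+", " ", prompt).strip()
    return prompt[:1].upper() + prompt[1:]

print(strip_filler("I was wondering if you could maybe help me fix the auth bug"))
# -> Help me fix the auth bug
```

A fixed phrase list is crude compared to the scored rewriting the CLI performs, but it shows why no model call is needed for this class of edit.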
What it does
Analyze
| Command | Description |
|---|---|
| `reprompt` | Instant dashboard -- prompts, sessions, avg score, top categories |
| `reprompt scan` | Auto-discover prompts from 9 AI tools |
| `reprompt check "prompt"` | Full diagnostic -- score + lint + rewrite preview in one command |
| `reprompt score "prompt"` | Research-backed 0-100 scoring with 30+ features |
| `reprompt compare "a" "b"` | Side-by-side prompt analysis (or `--best-worst` for auto-selection) |
| `reprompt insights` | Personal patterns vs research-optimal benchmarks |
| `reprompt style` | Prompting fingerprint with `--trends` for evolution tracking |
| `reprompt agent` | Agent workflow analysis -- error loops, tool patterns, session efficiency |
| `reprompt sessions` | Session quality scores with frustration signal detection |
| `reprompt repetition` | Cross-session repetition detection -- spot recurring prompts |
| `reprompt projects` | Per-project quality breakdown -- sessions, scores, frustration signals |
Optimize
| Command | Description |
|---|---|
| `reprompt build "task"` | Build prompts from components -- task, context, files, errors, constraints. Model-aware (Claude/GPT/Gemini) |
| `reprompt rewrite "prompt"` | Rewrite prompts to score higher -- filler removal, restructuring, hedging cleanup |
| `reprompt compress "prompt"` | 4-layer prompt compression (40-60% token savings typical) |
| `reprompt distill` | Extract important turns from conversations with 6-signal scoring |
| `reprompt distill --export` | Recover context when a session runs out -- paste into new session |
| `reprompt lint` | Configurable prompt quality linter with CI/GitHub Action support |
| `reprompt init` | Generate `.reprompt.toml` config for your project |
Manage
| Command | Description |
|---|---|
| `reprompt privacy` | See what data you sent where -- file paths, errors, PII exposure |
| `reprompt privacy --deep` | Scan for sensitive content: API keys, tokens, passwords, PII |
| `reprompt report` | Full analytics: hot phrases, clusters, patterns (`--html` for dashboard) |
| `reprompt digest` | Weekly summary comparing current vs previous period |
| `reprompt wrapped` | Prompt DNA report -- persona, scores, shareable card |
| `reprompt template save\|list\|use` | Save and reuse your best prompts |
Prompt Science
Scoring is calibrated against 4 research papers covering 30+ features across 5 dimensions:
| Dimension | What it measures | Paper |
|---|---|---|
| Structure | Markdown, code blocks, explicit constraints | Prompt Report 2406.06608 |
| Context | File paths, error messages, technical specificity | Google 2512.14982 |
| Position | Instruction placement relative to context | Stanford 2307.03172 |
| Repetition | Redundancy that degrades model attention | Google 2512.14982 |
| Clarity | Readability, sentence length, ambiguity | SPELL (EMNLP 2023) |
All analysis runs locally in <1ms per prompt. No LLM calls, no network requests.
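To illustrate how sub-millisecond, model-free scoring is possible, here is a toy scorer over a few of the dimensions above. The baseline, features, and weights are invented for this sketch; the real scorer uses 30+ calibrated features:

```python
import re

def score_prompt(prompt: str) -> int:
    """Toy rule-based scorer. Features and weights are illustrative only."""
    score = 30  # arbitrary baseline for the sketch
    if re.search(r"```", prompt):                                   # Structure: code blocks
        score += 15
    if re.search(r"\b[\w./-]+\.(py|ts|js|rs|go)\b", prompt):        # Context: file paths
        score += 15
    if re.search(r"(?i)\b(error|exception|traceback)\b", prompt):   # Context: error messages
        score += 15
    if re.search(r"(?i)\b(do not|don't|must|only)\b", prompt):      # Structure: constraints
        score += 10
    if len(prompt.split()) < 4:                                     # Clarity: too short to be specific
        score -= 20
    return max(0, min(100, score))

print(score_prompt("Fix the auth bug in src/login.ts where JWT expires"))  # -> 45
print(score_prompt("fix it"))                                              # -> 10
```

Each check is a cheap regex or length test, which is why a full 30-feature pass stays well under a millisecond.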
Conversation Distillation
reprompt distill scores every turn in a conversation using 6 signals:
- Position -- first/last turns carry framing and conclusions
- Length -- substantial turns contain more information
- Tool trigger -- turns that cause tool calls are action-driving
- Error recovery -- turns that follow errors show problem-solving
- Semantic shift -- topic changes mark conversation boundaries
- Uniqueness -- novel phrasing vs repetitive follow-ups
Session type (debugging, feature-dev, exploration, refactoring) is auto-detected and signal weights adapt accordingly.
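A hedged sketch of what weighted multi-signal turn scoring can look like, using four of the six signals (the signal implementations and weights are invented for illustration, not the shipped ones):

```python
def score_turn(turn: dict, index: int, total: int, weights: dict) -> float:
    """Combine per-turn signals (each normalized to 0..1) into one weighted score.
    Signal definitions here are simplified stand-ins; semantic shift and
    uniqueness are omitted for brevity."""
    signals = {
        "position": 1.0 if index in (0, total - 1) else 0.3,          # first/last turns
        "length": min(len(turn["text"].split()) / 100, 1.0),          # substantial turns
        "tool_trigger": 1.0 if turn.get("triggered_tool") else 0.0,   # action-driving
        "error_recovery": 1.0 if turn.get("follows_error") else 0.0,  # problem-solving
    }
    return sum(weights[name] * value for name, value in signals.items())

# Weights would adapt by detected session type -- e.g. a debugging session
# upweighting tool triggers and error recovery (values illustrative).
DEBUG_WEIGHTS = {"position": 0.2, "length": 0.2, "tool_trigger": 0.3, "error_recovery": 0.3}

turns = [
    {"text": "My tests fail with a KeyError in parse()", "triggered_tool": True},
    {"text": "ok", "follows_error": True},
]
scores = [score_turn(t, i, len(turns), DEBUG_WEIGHTS) for i, t in enumerate(turns)]
```

Adapting the weight vector per session type is what lets the same signal set rank turns sensibly in both debugging and exploration conversations.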
Supported AI tools
| Tool | Format | Auto-discovered by scan |
|---|---|---|
| Claude Code | JSONL | Yes |
| Codex CLI | JSONL | Yes |
| Cursor | .vscdb | Yes |
| Aider | Markdown | Yes |
| Gemini CLI | JSON | Yes |
| Cline (VS Code) | JSON | Yes |
| OpenClaw / OpenCode | JSON | Yes |
| ChatGPT | JSON | Via reprompt import |
| Claude.ai | JSON/ZIP | Via reprompt import |
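Several of the formats above are line-delimited JSON with role-tagged messages. A hedged sketch of extracting user prompts from such a log -- the `type`/`message`/`content` field names are assumptions for illustration, not a documented schema, and the real formats vary per tool:

```python
import json
from pathlib import Path

def extract_user_prompts(log_path: Path) -> list[str]:
    """Pull user-authored text out of a line-delimited JSON session log.
    Field names are assumed; real tool formats differ."""
    prompts = []
    for line in log_path.read_text().splitlines():
        if not line.strip():
            continue  # skip blank lines between records
        record = json.loads(line)
        if record.get("type") == "user":
            content = record.get("message", {}).get("content", "")
            # Some tools store structured content blocks; keep plain text only.
            if isinstance(content, str) and content.strip():
                prompts.append(content.strip())
    return prompts
```

One parser per tool format, all reading local files, is what makes `reprompt scan` work without any network access.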
Installation
pip install reprompt-cli             # core (all features, zero config)
pip install "reprompt-cli[chinese]"  # + Chinese prompt analysis (jieba)
pip install "reprompt-cli[mcp]"      # + MCP server for Claude Code / Continue.dev / Zed
Quick start
reprompt scan # discover prompts from installed AI tools
reprompt # see your dashboard
reprompt score "your prompt here" # score any prompt instantly
reprompt distill --last 1 # distill your most recent conversation
Auto-scan after every session
reprompt install-hook # adds post-session hook to Claude Code
Browser extension
Capture prompts from ChatGPT, Claude.ai, and Gemini directly in your browser. Live score badge shows prompt quality as you type.
- Install the extension from Chrome Web Store or Firefox Add-ons
- Connect to the CLI: `reprompt install-extension`
- Verify: `reprompt extension-status`
Captured prompts sync locally via Native Messaging -- nothing leaves your machine.
CI integration
GitHub Action
# .github/workflows/prompt-lint.yml
- uses: reprompt-dev/reprompt@main
with:
score-threshold: 50 # fail if avg prompt score < 50
strict: true # fail on warnings too
comment-on-pr: true # post quality report as PR comment
pre-commit
# .pre-commit-config.yaml
repos:
- repo: https://github.com/reprompt-dev/reprompt
rev: v2.2.1
hooks:
- id: reprompt-lint
Direct CLI
reprompt lint --score-threshold 50 # exit 1 if avg score < 50
reprompt lint --strict # exit 1 on warnings
reprompt lint --json # machine-readable output
Project configuration
reprompt init # generates .reprompt.toml with all rules documented
# .reprompt.toml (or [tool.reprompt.lint] in pyproject.toml)
[lint]
score-threshold = 50 # fail if avg score < 50
[lint.rules]
min-length = 20 # error if prompt < 20 chars (0 = off)
short-prompt = 40 # warning if < 40 chars (0 = off)
vague-prompt = true # error on "fix it" etc (false = off)
debug-needs-reference = true
Privacy
- All analysis runs locally. No prompts leave your machine.
- `reprompt privacy` shows exactly what you've sent to which AI tool.
- Optional telemetry sends only anonymous 26-dimension feature vectors -- never prompt text.
- Open source: audit exactly what's collected.
Links
- Website: getreprompt.dev
- Chrome Extension: Chrome Web Store
- Firefox Add-on: Firefox Add-ons
- PyPI: reprompt-cli
- Changelog: CHANGELOG.md
- Privacy: getreprompt.dev/privacy
Contributing
See CONTRIBUTING.md for development setup and guidelines.
License
MIT
File details
Details for the file reprompt_cli-2.2.1.tar.gz.
File metadata
- Download URL: reprompt_cli-2.2.1.tar.gz
- Upload date:
- Size: 3.2 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `a97acc3bed198a254398186b00c18f956d2f34155ee56006189e21f2db6ed850` |
| MD5 | `a0f4e220076bf2ef4d4abd085c17ffa8` |
| BLAKE2b-256 | `a01b9f767dab42046cd5f6b4b212521fa15026d4bcbec85f21f31235e608a2cb` |
File details
Details for the file reprompt_cli-2.2.1-py3-none-any.whl.
File metadata
- Download URL: reprompt_cli-2.2.1-py3-none-any.whl
- Upload date:
- Size: 291.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `4d00afd1dcfef4e7762968e8c6d0aeebd3fe1e70686fdbb9e92cdc0fe074fd27` |
| MD5 | `15721bda21380df6a463e00172bf98da` |
| BLAKE2b-256 | `5cb34819330377cef88701e8643cfd00250b3152b5bc262742b3cc01bd9ccc3b` |