# re:prompt

Prompt Intelligence for AI power users -- understand, optimize, and manage your AI conversations. Discover, analyze, and optimize your prompts from AI coding sessions.
## See it in action
$ pip install reprompt-cli
$ reprompt
╭─ Prompt Dashboard ─────────────────────────────────────────╮
│ Prompts: 1,063 (295 unique)    Sessions: 890               │
│ Avg Score: 68/100              Top: debug (31%), impl (24%)│
│ Sources: claude-code, cursor, chatgpt                      │
╰────────────────────────────────────────────────────────────╯
$ reprompt score "Fix the auth bug in src/login.ts where JWT expires"
Score: 74/100
Structure: 18/25 | Context: 22/25 | Position: 15/20 | Repetition: 9/15 | Clarity: 10/15
Tip: Add the error message for +15% accuracy
$ reprompt distill --last 3 --summary
Session: feature-dev (42 turns, 18 important)
Key moments: initial spec → auth module → test failures → JWT fix → passing
Context: "Building auth system with JWT refresh tokens for Express API"
$ reprompt compress "请帮我看一下这个代码,就是那个 login 的那个文件,好像有点问题"
  ("Please help me look at this code -- the login file, something seems wrong")
Before: 31 tokens → After: 15 tokens (52% saved)
"看一下 login 文件的问题" ("Check the issue in the login file")
## What it does

### Analyze

| Command | Description |
|---|---|
| `reprompt` | Instant dashboard -- prompts, sessions, avg score, top categories |
| `reprompt scan` | Auto-discover prompts from 8 AI tools |
| `reprompt score "prompt"` | Research-backed 0-100 scoring with 30+ features |
| `reprompt compare "a" "b"` | Side-by-side prompt analysis (or `--best-worst` for auto-selection) |
| `reprompt insights` | Personal patterns vs research-optimal benchmarks |
| `reprompt style` | Prompting fingerprint with `--trends` for evolution tracking |
### Optimize

| Command | Description |
|---|---|
| `reprompt compress "prompt"` | 4-layer prompt compression (50%+ token savings typical) |
| `reprompt distill` | Extract important turns from conversations with 6-signal scoring |
| `reprompt distill --export` | Recover context when a session runs out -- paste into a new session |
| `reprompt lint` | Prompt quality linter with GitHub Action support |
### Manage

| Command | Description |
|---|---|
| `reprompt privacy` | See what data you sent where -- file paths, errors, PII exposure |
| `reprompt report` | Full analytics: hot phrases, clusters, patterns (`--html` for dashboard) |
| `reprompt digest` | Weekly summary comparing the current vs previous period |
| `reprompt wrapped` | Prompt DNA report -- persona, scores, shareable card |
| `reprompt template save\|list\|use` | Save and reuse your best prompts |
## Prompt Science
Scoring is calibrated against 4 research papers covering 30+ features across 5 dimensions:
| Dimension | What it measures | Paper |
|---|---|---|
| Structure | Markdown, code blocks, explicit constraints | Prompt Report 2406.06608 |
| Context | File paths, error messages, technical specificity | Google 2512.14982 |
| Position | Instruction placement relative to context | Stanford 2307.03172 |
| Repetition | Redundancy that degrades model attention | Google 2512.14982 |
| Clarity | Readability, sentence length, ambiguity | SPELL (EMNLP 2023) |
All analysis runs locally in <1ms per prompt. No LLM calls, no network requests.
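The five dimension maxima sum to 100 (25 + 25 + 20 + 15 + 15), matching the per-dimension breakdown in the demo above. A minimal sketch of how dimension-weighted scoring of this shape can be computed -- the heuristics and thresholds below are invented for illustration and are not reprompt's actual 30+ features:

```python
import re

# Maximum points per dimension; the maxima sum to 100.
DIMENSION_MAX = {"structure": 25, "context": 25, "position": 20,
                 "repetition": 15, "clarity": 15}

def score_prompt(prompt: str) -> dict:
    words = prompt.split()
    ratios = {
        # Structure: line breaks/lists and explicit constraint wording
        "structure": min(1.0, 0.5 * ("- " in prompt or "\n" in prompt)
                              + 0.5 * (":" in prompt or "must" in prompt.lower())),
        # Context: file paths (e.g. foo.ts) and error text raise specificity
        "context": min(1.0, 0.5 * bool(re.search(r"\w+\.\w{1,4}", prompt))
                            + 0.5 * ("error" in prompt.lower())),
        # Position: an imperative verb up front beats a buried instruction
        "position": 1.0 if words and words[0].lower() in
                    {"fix", "write", "refactor", "explain", "add"} else 0.5,
        # Repetition: unique-word ratio penalizes redundancy
        "repetition": len({w.lower() for w in words}) / max(1, len(words)),
        # Clarity: very long prompts get a rough length penalty
        "clarity": min(1.0, 20 / max(1, len(words))),
    }
    parts = {d: round(r * DIMENSION_MAX[d]) for d, r in ratios.items()}
    parts["total"] = sum(parts.values())
    return parts
```

Because each ratio is clamped to [0, 1] before scaling, every total stays in the 0-100 range regardless of input; the real tool's calibration against the cited papers differs.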
## Conversation Distillation

`reprompt distill` scores every turn in a conversation using 6 signals:
- Position -- first/last turns carry framing and conclusions
- Length -- substantial turns contain more information
- Tool trigger -- turns that cause tool calls are action-driving
- Error recovery -- turns that follow errors show problem-solving
- Semantic shift -- topic changes mark conversation boundaries
- Uniqueness -- novel phrasing vs repetitive follow-ups
Session type (debugging, feature-dev, exploration, refactoring) is auto-detected and signal weights adapt accordingly.
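A sketch of how per-turn signals can combine under session-type-dependent weights -- the weights and formulas below are made up for the example (only three of the six signals are modeled), not reprompt's scoring:

```python
def score_turns(turns: list[str], session_type: str = "debugging") -> list[float]:
    # Hypothetical weights per session type: (position, length, error_recovery)
    weights = {"debugging": (0.2, 0.3, 0.5),
               "feature-dev": (0.4, 0.4, 0.2)}[session_type]
    w_pos, w_len, w_err = weights
    n = len(turns)
    scores = []
    for i, turn in enumerate(turns):
        position = 1.0 if i in (0, n - 1) else 0.3      # framing / conclusion turns
        length = min(1.0, len(turn.split()) / 50)       # substantial turns score higher
        follows_error = float(i > 0 and "error" in turns[i - 1].lower())
        scores.append(w_pos * position + w_len * length + w_err * follows_error)
    return scores
```

Under the "debugging" weights, the turn immediately after an error message dominates; under the "feature-dev" weights, the opening and closing turns rank higher instead, which is the adaptation the paragraph above describes.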
## Supported AI tools
| Tool | Format | Auto-discovered by scan |
|---|---|---|
| Claude Code | JSONL | Yes |
| Cursor | .vscdb | Yes |
| Aider | Markdown | Yes |
| Gemini CLI | JSON | Yes |
| Cline (VS Code) | JSON | Yes |
| OpenClaw / OpenCode | JSON | Yes |
| ChatGPT | JSON | Via reprompt import |
| Claude.ai | JSON/ZIP | Via reprompt import |
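Most of the formats above are record-oriented JSON. A sketch of the kind of parsing a scanner has to do for a JSONL log -- the `{"role": ..., "content": ...}` record shape here is an assumed schema for illustration, not any tool's documented format:

```python
import io
import json

def extract_prompts(jsonl_stream) -> list[str]:
    """Collect user-authored prompt text from a JSONL session log."""
    prompts = []
    for line in jsonl_stream:
        line = line.strip()
        if not line:
            continue
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines instead of aborting the scan
        if record.get("role") == "user" and isinstance(record.get("content"), str):
            prompts.append(record["content"])
    return prompts

# A tiny in-memory session standing in for a log file on disk
session = io.StringIO(
    '{"role": "user", "content": "Fix the auth bug"}\n'
    '{"role": "assistant", "content": "Looking at the login module..."}\n'
)
```

Tolerating blank and malformed lines matters here: auto-discovery runs over logs the user never curated, so a single truncated record should not abort the scan.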
## Installation
pip install reprompt-cli # core (all features, zero config)
pip install reprompt-cli[chinese] # + Chinese prompt analysis (jieba)
pip install reprompt-cli[mcp] # + MCP server for Claude Code / Continue.dev / Zed
## Quick start
reprompt scan # discover prompts from installed AI tools
reprompt # see your dashboard
reprompt score "your prompt here" # score any prompt instantly
reprompt distill --last 1 # distill your most recent conversation
Auto-scan after every session:

reprompt install-hook                # adds a post-session hook to Claude Code
## Privacy

- All analysis runs locally. No prompts leave your machine.
- `reprompt privacy` shows exactly what you've sent to which AI tool.
- Optional telemetry sends only anonymous 26-dimension feature vectors -- never prompt text.
- Open source: audit exactly what's collected.
## Links
- Website: getreprompt.dev
- PyPI: reprompt-cli
- Changelog: CHANGELOG.md
- Privacy: getreprompt.dev/privacy
## Contributing
See CONTRIBUTING.md for development setup and guidelines.
## License
MIT
## Project details
### File: reprompt_cli-1.5.0.tar.gz

- Size: 746.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | c1e035f5295b77905bb775f73ed1dde979e12aa72bae427344f83c37f9454ade |
| MD5 | 9a9144f58219c94d3a558efed32f2668 |
| BLAKE2b-256 | 5ac04de6e63bcf85e50fa13e97e1a86f7b2b3072e9f808995495b37a91c3af76 |
Provenance -- the following attestation bundle was made for reprompt_cli-1.5.0.tar.gz:

- Publisher: publish.yml on reprompt-dev/reprompt
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: reprompt_cli-1.5.0.tar.gz
- Subject digest: c1e035f5295b77905bb775f73ed1dde979e12aa72bae427344f83c37f9454ade
- Sigstore transparency entry: 1181343324
- Permalink: reprompt-dev/reprompt@2d20b25d036c99810831478a98a6a2de1310bd29
- Branch / Tag: refs/tags/v1.5.0
- Owner: https://github.com/reprompt-dev
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@2d20b25d036c99810831478a98a6a2de1310bd29
- Trigger Event: push
### File: reprompt_cli-1.5.0-py3-none-any.whl

- Size: 240.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | 9f43ba2844c29792ca1f4b8f27612007c88efaf9d7af09eb2da7e6dd60da7bd6 |
| MD5 | 2012a2fbc636f60987b66a224be9b6ee |
| BLAKE2b-256 | f2d2a6d847167d73efd141881467c18159a34019b04d7f9d0f30a13ef3b78af4 |
Provenance -- the following attestation bundle was made for reprompt_cli-1.5.0-py3-none-any.whl:

- Publisher: publish.yml on reprompt-dev/reprompt
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: reprompt_cli-1.5.0-py3-none-any.whl
- Subject digest: 9f43ba2844c29792ca1f4b8f27612007c88efaf9d7af09eb2da7e6dd60da7bd6
- Sigstore transparency entry: 1181343330
- Permalink: reprompt-dev/reprompt@2d20b25d036c99810831478a98a6a2de1310bd29
- Branch / Tag: refs/tags/v1.5.0
- Owner: https://github.com/reprompt-dev
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@2d20b25d036c99810831478a98a6a2de1310bd29
- Trigger Event: push