Discover, analyze, and optimize your prompts from AI coding sessions

re:prompt

Score, rewrite, and optimize your AI prompts -- a rule-based CLI that improves your prompts automatically. No LLM needed.




See it in action

$ pip install reprompt-cli

# Rewrite a weak prompt into a better one (no LLM, rule-based)
$ reprompt rewrite "I was wondering if you could maybe help me fix the auth bug"
  34 → 52 (+18)

  ╭─ Rewritten ────────────────────────────────────────────────╮
  │ Help me fix the auth bug.                                  │
  ╰────────────────────────────────────────────────────────────╯

  Changes
   Removed filler (24% shorter)
   Removed hedging language

  You should also
   Add actual code snippets or error messages for context
   Reference specific files or functions by name
   Add constraints (e.g., "Do not modify existing tests")

# Full diagnostic in one command
$ reprompt check "Fix the auth bug in src/login.ts where JWT expires"
  GOOD · 58

  Clarity     ████████████░░░░░░░░ 15/25
  Context     ████████████████░░░░ 20/25
  Position    ████████████████████ 20/20
  Structure   ░░░░░░░░░░░░░░░░░░░░  0/15
  Repetition  ███░░░░░░░░░░░░░░░░░  3/15

  Strengths
   Key instruction at the start -- optimal placement
   References specific files

  Improve
   Add the actual error message (+6 pts)
   Add constraints like "Don't modify tests" (+5 pts)
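
Rewrites like the one above are plain rule-based text transforms, not model calls. A minimal sketch in Python, assuming hypothetical regex rules (illustrative only, not reprompt's actual rule set):

```python
import re

# Hypothetical filler/hedging patterns -- illustrative only,
# not reprompt's actual rule set.
FILLER_PATTERNS = [
    r"\bI was wondering if you could\s*",
    r"\bmaybe\s+",
    r"\bcould you\s+",
    r"\bplease\s+",
]

def rewrite(prompt: str) -> str:
    """Strip filler and hedging, then normalize the result."""
    out = prompt
    for pattern in FILLER_PATTERNS:
        out = re.sub(pattern, "", out, flags=re.IGNORECASE)
    out = out.strip()
    if out:
        # Capitalize the first word and end with a period.
        out = out[0].upper() + out[1:]
        if not out.endswith((".", "!", "?")):
            out += "."
    return out

print(rewrite("I was wondering if you could maybe help me fix the auth bug"))
# -> Help me fix the auth bug.
```

Because every step is a deterministic string transform, the rewrite is instant, reproducible, and runs entirely offline.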

What it does

Analyze

| Command | Description |
|---------|-------------|
| `reprompt` | Instant dashboard -- prompts, sessions, avg score, top categories |
| `reprompt scan` | Auto-discover prompts from 9 AI tools |
| `reprompt check "prompt"` | Full diagnostic -- score + lint + rewrite preview in one command |
| `reprompt score "prompt"` | Research-backed 0-100 scoring with 30+ features |
| `reprompt compare "a" "b"` | Side-by-side prompt analysis (or `--best-worst` for auto-selection) |
| `reprompt insights` | Personal patterns vs research-optimal benchmarks |
| `reprompt style` | Prompting fingerprint with `--trends` for evolution tracking |
| `reprompt agent` | Agent workflow analysis -- error loops, tool patterns, session efficiency |
| `reprompt sessions` | Session quality scores with frustration signal detection |
| `reprompt repetition` | Cross-session repetition detection -- spot recurring prompts |
| `reprompt patterns` | Personal prompt weaknesses -- recurring gaps by task type |
| `reprompt projects` | Per-project quality breakdown -- sessions, scores, frustration signals |

Optimize

| Command | Description |
|---------|-------------|
| `reprompt build "task"` | Build prompts from components -- task, context, files, errors, constraints. Model-aware (Claude/GPT/Gemini) |
| `reprompt rewrite "prompt"` | Rewrite prompts to score higher -- filler removal, restructuring, hedging cleanup |
| `reprompt compress "prompt"` | 4-layer prompt compression (40-60% token savings typical) |
| `reprompt distill` | Extract important turns from conversations with 6-signal scoring |
| `reprompt distill --export` | Recover context when a session runs out -- paste into new session |
| `reprompt lint` | Configurable prompt quality linter with CI/GitHub Action support |
| `reprompt init` | Generate `.reprompt.toml` config for your project |

Manage

| Command | Description |
|---------|-------------|
| `reprompt privacy` | See what data you sent where -- file paths, errors, PII exposure |
| `reprompt privacy --deep` | Scan for sensitive content: API keys, tokens, passwords, PII |
| `reprompt report` | Full analytics: hot phrases, clusters, patterns (`--html` for dashboard) |
| `reprompt digest` | Weekly summary comparing current vs previous period |
| `reprompt wrapped` | Prompt DNA report -- persona, scores, shareable card |
| `reprompt template save\|list\|use` | Save and reuse your best prompts |

Prompt Science

Scoring is calibrated against 10 peer-reviewed papers covering 30+ features across 5 dimensions:

| Dimension | What it measures | Key papers |
|-----------|------------------|------------|
| Structure | Markdown, code blocks, explicit constraints | Prompt Report (2406.06608) |
| Context | File paths, error messages, I/O specs, edge cases | Zi+ (2508.03678), Google (2512.14982) |
| Position | Instruction placement relative to context | Stanford (2307.03172), Veseli+ (2508.07479), Chowdhury (2603.10123) |
| Repetition | Redundancy that degrades model attention | Google (2512.14982) |
| Clarity | Readability, sentence length, ambiguity | SPELL (EMNLP 2023), PEEM (2603.10477) |

Cross-validated findings that inform our engine:

  • Position bias is architectural — present at initialization, not learned. Front-loading instructions is effective for prompts under 50% of context window (3 papers agree)
  • Moderate compression improves output — rule-based filler removal doesn't just save tokens, it enhances LLM performance (2505.00019)
  • Prompt quality is independently measurable — prompt-only scoring predicts output quality without seeing the response (ACL 2025, 2503.10084)

All analysis runs locally in <1ms per prompt. No LLM calls, no network requests.
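
Prompt-only scoring of this kind can be sketched as a weighted sum of cheap surface features computed locally. The features, weights, and thresholds below are illustrative assumptions for exposition, not reprompt's calibrated 30+-feature model:

```python
import re

# Hypothetical imperative verbs -- illustrative, not reprompt's list.
IMPERATIVE_VERBS = {"fix", "add", "refactor", "write", "explain", "implement"}

def score_prompt(prompt: str) -> int:
    """Toy 0-100 score from surface features; weights are illustrative."""
    words = prompt.split()
    score = 0
    # Context (max 25): file paths and error-like strings.
    if re.search(r"\b[\w./-]+\.(ts|py|js|go|rs)\b", prompt):
        score += 15
    if re.search(r"error|exception|traceback", prompt, re.IGNORECASE):
        score += 10
    # Clarity (max 25): concise and unhedged.
    if len(words) <= 40:
        score += 15
    if not re.search(r"\b(maybe|perhaps|possibly)\b", prompt, re.IGNORECASE):
        score += 10
    # Position (max 20): imperative instruction up front.
    if words and words[0].lower() in IMPERATIVE_VERBS:
        score += 20
    # Structure (max 15): bullet lists or explicit constraints.
    if re.search(r"^[-*] ", prompt, re.MULTILINE) or \
            re.search(r"\b(do not|don't|must)\b", prompt, re.IGNORECASE):
        score += 15
    # Repetition (max 15): reward lexical diversity.
    if words and len({w.lower() for w in words}) / len(words) > 0.7:
        score += 15
    return score
```

Every check is a local string operation, which is why scoring a prompt takes well under a millisecond and needs no network access.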

Conversation Distillation

reprompt distill scores every turn in a conversation using 6 signals:

  • Position -- first/last turns carry framing and conclusions
  • Length -- substantial turns contain more information
  • Tool trigger -- turns that cause tool calls are action-driving
  • Error recovery -- turns that follow errors show problem-solving
  • Semantic shift -- topic changes mark conversation boundaries
  • Uniqueness -- novel phrasing vs repetitive follow-ups

Session type (debugging, feature-dev, exploration, refactoring) is auto-detected and signal weights adapt accordingly.
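
The adaptive weighting can be sketched like this; the signal weights are hypothetical stand-ins, not reprompt's actual values:

```python
# Six per-turn signals, each normalized to [0, 1]:
# (position, length, tool_trigger, error_recovery, semantic_shift, uniqueness)
# Weights are hypothetical -- illustrative only.
WEIGHTS = {
    "debugging":   (0.10, 0.10, 0.25, 0.30, 0.10, 0.15),
    "feature-dev": (0.20, 0.20, 0.20, 0.10, 0.15, 0.15),
}

def score_turn(signals, session_type):
    """Weighted sum of the six signals; weights depend on session type."""
    weights = WEIGHTS.get(session_type, WEIGHTS["feature-dev"])
    return sum(w * s for w, s in zip(weights, signals))

def distill(turn_signals, session_type, top_k=3):
    """Return indices of the top-k highest-scoring turns, in conversation order."""
    ranked = sorted(
        range(len(turn_signals)),
        key=lambda i: score_turn(turn_signals[i], session_type),
        reverse=True,
    )
    return sorted(ranked[:top_k])
```

In a debugging session, for example, error-recovery and tool-trigger signals dominate, so the turns kept are the ones where problems actually got solved.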

Supported AI tools

| Tool | Format | Auto-discovered by `scan` |
|------|--------|---------------------------|
| Claude Code | JSONL | Yes |
| Codex CLI | JSONL | Yes |
| Cursor | .vscdb | Yes |
| Aider | Markdown | Yes |
| Gemini CLI | JSON | Yes |
| Cline (VS Code) | JSON | Yes |
| OpenClaw / OpenCode | JSON | Yes |
| ChatGPT | JSON | Via `reprompt import` |
| Claude.ai | JSON/ZIP | Via `reprompt import` |

Installation

pip install reprompt-cli            # core (all features, zero config)
pip install reprompt-cli[chinese]   # + Chinese prompt analysis (jieba)
pip install reprompt-cli[mcp]       # + MCP server for Claude Code / Continue.dev / Zed

Quick start

reprompt check "your prompt here"   # full diagnostic — score + lint + rewrite
reprompt scan                       # discover prompts from installed AI tools
reprompt                            # see your dashboard

Auto-scan after every session

reprompt install-hook               # adds post-session hook to Claude Code

Browser extension

Capture prompts from ChatGPT, Claude.ai, and Gemini directly in your browser. Live score badge shows prompt quality as you type — click "Rewrite & Apply" to improve your prompt and replace the text directly in the input box.

  1. Install the extension from Chrome Web Store or Firefox Add-ons
  2. Connect to the CLI: reprompt install-extension
  3. Verify: reprompt extension-status

Captured prompts sync locally via Native Messaging -- nothing leaves your machine.

CI integration

GitHub Action

# .github/workflows/prompt-lint.yml
name: Prompt Quality
on: pull_request

jobs:
  lint:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write    # needed for PR comments
    steps:
      - uses: actions/checkout@v4
      - uses: reprompt-dev/reprompt@main
        with:
          score-threshold: 50   # fail if avg prompt score < 50
          strict: true          # fail on warnings too
          comment-on-pr: true   # post quality report as PR comment

When comment-on-pr: true, every PR gets a quality report:

## reprompt lint 🟢 Passed

| Metric          | Value          |
|-----------------|----------------|
| Prompts checked | 12             |
| Errors          | 0              |
| Warnings        | 2              |
| Avg Score       | 62/100 ✅ (threshold: 50) |

📋 2 violation(s) [click to expand]

The comment updates on each push — no duplicates. Uses GITHUB_TOKEN (no extra secrets needed).

pre-commit

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/reprompt-dev/reprompt
    rev: v2.2.2
    hooks:
      - id: reprompt-lint

Direct CLI

reprompt lint --score-threshold 50  # exit 1 if avg score < 50
reprompt lint --strict              # exit 1 on warnings
reprompt lint --json                # machine-readable output

Project configuration

reprompt init   # generates .reprompt.toml with all rules documented

# .reprompt.toml (or [tool.reprompt.lint] in pyproject.toml)
[lint]
score-threshold = 50       # fail if avg score < 50

[lint.rules]
min-length = 20            # error if prompt < 20 chars (0 = off)
short-prompt = 40          # warning if < 40 chars (0 = off)
vague-prompt = true        # error on "fix it" etc (false = off)
debug-needs-reference = true

Privacy

  • All analysis runs locally. No prompts leave your machine.
  • reprompt privacy shows exactly what you've sent to which AI tool.
  • Optional telemetry sends only anonymous 26-dimension feature vectors -- never prompt text.
  • Open source: audit exactly what's collected.
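
To illustrate the feature-vector idea: only numbers derived from the prompt are ever emitted, never the text itself. The four features below are hypothetical stand-ins for the real 26-dimension vector:

```python
import re

def feature_vector(prompt: str) -> list[float]:
    """Numeric surface features only -- the prompt text is never included.
    Four hypothetical features standing in for the real 26 dimensions."""
    words = prompt.split()
    diversity = len({w.lower() for w in words}) / len(words) if words else 0.0
    return [
        float(len(prompt)),                                           # character count
        float(len(words)),                                            # word count
        float(bool(re.search(r"\berror\b", prompt, re.IGNORECASE))),  # mentions an error
        diversity,                                                    # lexical diversity
    ]
```

A vector like `[16.0, 4.0, 0.0, 1.0]` reveals nothing about the prompt's content, which is what makes this kind of telemetry anonymous by construction.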

Privacy policy

Contributing

See CONTRIBUTING.md for development setup and guidelines.

License

MIT
