
PrePrompt


Your prompts, battle-tested. An MCP server that intercepts and optimizes prompts in Claude Code and Cursor before they reach the LLM.

What it does

Most prompts sent to an LLM are underspecified — they're missing context, output format expectations, or technical constraints that the developer has in their head but didn't type. PrePrompt sits between your keyboard and the model, scores every prompt with a heuristic classifier, and rewrites the complex ones using Claude Haiku before the main model ever sees them. Simple prompts ("what is jwt") pass through untouched in under 1ms. No API cost, no latency, no noise.

How it works

BEFORE  write a function that handles youtube oauth token refresh
        and manages expired credentials with error handling

AFTER   Write a Python function for FastAPI that handles YouTube OAuth 2.0
        token refresh. The function should: (1) detect expired credentials
        by checking the expiry timestamp, (2) use the refresh token to
        obtain a new access token via the YouTube API, (3) update and
        persist the new credentials (consider using a database or file
        storage), and (4) include comprehensive error handling for invalid
        refresh tokens, network failures, and API errors with appropriate
        logging and exception types.

When a prompt is intercepted, you see this in your terminal:

╔═ PrePrompt +58 ════════════════════════════════════════════╗
║ The rewritten prompt specifies the technical               ║
║ implementation details, clarifies the complete workflow,   ║
║ and adds concrete error scenarios and storage              ║
║ considerations relevant to FastAPI applications.           ║
╠════════════════════════════════════════════════════════════╣
║ ORIGINAL  write a function that handles youtube oauth t... ║
║ OPTIMIZED Write a Python function for FastAPI that handles ║
║           YouTube OAuth 2.0 token refresh. The function    ║
║           should: (1) detect expired credentials by        ║
║           checking the expiry timestamp...                 ║
╚════════════════════════════════════════════════════════════╝

Install

pip (recommended):

pip install preprompt
preprompt-install   # one-command setup (API key + hooks)
# Restart Claude Code or Cursor

From source:

git clone https://github.com/yashdeeptehlan/preprompt
cd preprompt && ./scripts/install.sh

How the smart classifier works

  • Pure heuristics, zero API calls — runs on every prompt in under 1ms
  • Scores each prompt based on: ambiguity verbs, multi-requirement density, turn depth, and missing output format signals
  • Only intercepts when score ≥ 38 — simple prompts always pass through
  • Negative signals: short prompts, lookup questions (what is, what does), and already-structured prompts all score low and get skipped

SCORE  INTERCEPT  PROMPT
   48  YES        write me a middleware that validates tokens and handles refresh
  -35  no         what is jwt
  -45  no         thanks
   65  YES        refactor this to handle edge cases and manage errors properly
  -20  no         add tests
   70  YES        implement a rate limiter that tracks requests, manages quotas...
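To make the scoring idea concrete, here is a toy re-implementation — not PrePrompt's actual classifier. The verb list, weights, and negative signals are illustrative; only the intercept threshold (38) comes from the description above.

```python
# Toy heuristic scorer: ambiguity verbs and multi-requirement density push
# the score up; short prompts and lookup questions push it down.
AMBIGUITY_VERBS = {"handle", "handles", "manage", "manages",
                   "validate", "validates", "refactor", "implement"}
LOOKUP_OPENERS = ("what is", "what does")

def score_prompt(prompt: str) -> int:
    text = prompt.lower()
    words = text.split()
    score = 0
    if len(words) < 4:                      # short prompts score low
        score -= 40
    if text.startswith(LOOKUP_OPENERS):     # lookup questions pass through
        score -= 35
    score += 20 * sum(w.strip(".,") in AMBIGUITY_VERBS for w in words)
    score += 10 * text.count(" and ")       # multi-requirement density
    if "format" not in text:                # missing output-format signal
        score += 5
    return score

def should_intercept(prompt: str) -> bool:
    return score_prompt(prompt) >= 38       # threshold from the docs

print(should_intercept("what is jwt"))   # False
print(should_intercept(
    "write me a middleware that validates tokens and handles refresh"))  # True
```

The key property to preserve in any variant is that the negative signals dominate for trivial prompts, so "thanks" and lookup questions can never cross the threshold.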

Stack memory

PrePrompt learns your stack as you work. After a few sessions it knows your language, framework, and style preferences and injects that context into every optimization automatically.

$ preprompt-memory
 PrePrompt — learned stack memory
──────────────────────────────────────────────────────
  language     python           confidence: 0.92  (seen 47x)
  framework    fastapi          confidence: 0.88  (seen 31x)
  database     postgresql       confidence: 0.74  (seen 12x)
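One plausible sketch of such count-based memory, where confidence saturates toward 1.0 with repeated sightings — PrePrompt's real update rule may differ, and the class and method names here are hypothetical:

```python
# Hypothetical stack-memory sketch: each detection bumps a counter, and
# confidence is a saturating function of the count.
from collections import Counter

class StackMemory:
    def __init__(self):
        self.seen = Counter()          # e.g. ("language", "python") -> 47

    def observe(self, category: str, value: str) -> None:
        self.seen[(category, value)] += 1

    def confidence(self, category: str, value: str) -> float:
        n = self.seen[(category, value)]
        return n / (n + 4)             # saturating estimate

mem = StackMemory()
for _ in range(47):
    mem.observe("language", "python")
print(round(mem.confidence("language", "python"), 2))  # 0.92
```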

CLI

preprompt-install          # one-command setup (API key + hooks)
preprompt-history          # recent prompt events across all sessions
preprompt-stats            # optimization stats (total, intercepted, avg score)
preprompt-memory           # learned stack context
preprompt-test-classifier  # test classifier on sample prompts
preprompt-feedback         # rate recent optimizations (builds accept/reject stats)
preprompt-watch            # live feed of interceptions in a second terminal
preprompt-clip             # optimize clipboard (works on macOS, Windows, Linux)
preprompt-optimize "..."   # optimize a prompt from the command line

Cost

PrePrompt uses claude-haiku-4-5 for optimization — the cheapest Claude model. Typical cost: ~$0.001 per intercepted prompt. At 20 complex prompts per day that's roughly $0.60/month. Simple prompts are never sent to the API.
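The arithmetic behind that estimate:

```python
# Back-of-the-envelope check of the cost claim above.
cost_per_prompt = 0.001          # ~$0.001 per intercepted prompt (Haiku)
prompts_per_day = 20
monthly = cost_per_prompt * prompts_per_day * 30
print(f"${monthly:.2f}/month")   # $0.60/month
```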

Architecture

Claude Code / Cursor
        │
        ▼  UserPromptSubmit hook
  pre_prompt.py
  ├── classify_prompt()     ← pure heuristic, <1ms, no API
  ├── [score < 38] ──────► pass through unchanged
  └── [score ≥ 38]
      ├── optimize()        ← Haiku API call with stack context
      ├── write sidecar     ← ~/.preprompt/pending/<uuid>.json
      └── return optimized prompt
        │
        ▼  MCP server (on next tool call)
  flush_pending_hook_events()  ← sidecars → SQLite
  save_prompt_event()
  update_memory_from_prompt()

The hook never holds a database connection — it writes a small JSON sidecar file so there's no lock conflict with the MCP server.
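A minimal sketch of that lock-free sidecar pattern (function names and the event shape are hypothetical; only the <uuid>.json-in-a-pending-directory idea comes from the diagram above):

```python
# Sidecar pattern: the hook writes a one-shot JSON file instead of touching
# SQLite, so it never contends with the MCP server's database connection.
import json
import os
import tempfile
import uuid
from pathlib import Path

def write_sidecar(event: dict, pending_dir: Path) -> Path:
    """Atomically write one prompt event as <uuid>.json."""
    pending_dir.mkdir(parents=True, exist_ok=True)
    path = pending_dir / f"{uuid.uuid4()}.json"
    tmp = path.with_suffix(".json.tmp")
    tmp.write_text(json.dumps(event))
    os.replace(tmp, path)          # atomic rename: readers never see a partial file
    return path

def flush_pending(pending_dir: Path) -> list[dict]:
    """What the MCP server does on the next tool call: drain sidecars."""
    events = []
    for f in sorted(pending_dir.glob("*.json")):
        events.append(json.loads(f.read_text()))
        f.unlink()                 # consumed, so delete
    return events

pending = Path(tempfile.mkdtemp()) / "pending"   # stand-in for ~/.preprompt/pending
write_sidecar({"score": 48, "was_intercepted": True}, pending)
events = flush_pending(pending)
print(len(events))  # 1
```

The atomic rename matters: the MCP server may glob the pending directory at any moment, and it must only ever see complete JSON files.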

MCP Tools

Tool                Parameters                                      Returns
optimize_prompt     user_prompt, conversation_history, turn_number  {optimized_prompt, was_intercepted, score, reason}
get_prompt_history  limit (default 20)                              list of recent prompt events for this session
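For reference, an optimize_prompt result might look like this; the values are invented for illustration, and only the keys follow the schema above:

```python
# Hypothetical optimize_prompt result; keys match the documented schema.
result = {
    "optimized_prompt": "Write a Python function for FastAPI that ...",
    "was_intercepted": True,
    "score": 48,
    "reason": "multi-requirement prompt with ambiguity verbs",
}
print(sorted(result))
```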

Environment variables

Variable           Default  Description
ANTHROPIC_API_KEY  —        Required for the optimizer
MCP_TRANSPORT      stdio    stdio or sse

Storage is always at ~/.preprompt/history.db (created automatically).
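One way to supply these in your shell before launching your editor (the key value is a placeholder):

```shell
export ANTHROPIC_API_KEY="sk-ant-..."  # required for the optimizer
export MCP_TRANSPORT="stdio"           # default; set to "sse" for SSE transport
```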

Requirements

  • Python 3
  • An Anthropic API key (ANTHROPIC_API_KEY) for the optimizer
  • Claude Code or Cursor

License

MIT — see LICENSE
