
PrePrompt


Your prompts, battle-tested. An MCP server that intercepts and optimizes prompts in Claude Code and Cursor before they reach the LLM.

What it does

Most prompts sent to an LLM are underspecified — they're missing context, output format expectations, or technical constraints that the developer has in their head but didn't type. PrePrompt sits between your keyboard and the model, scores every prompt with a heuristic classifier, and rewrites the complex ones using Claude Haiku before the main model ever sees them. Simple prompts ("what is jwt") pass through untouched in under 1ms. No API cost, no latency, no noise.

How it works

BEFORE  write a function that handles youtube oauth token refresh
        and manages expired credentials with error handling

AFTER   Write a Python function for FastAPI that handles YouTube OAuth 2.0
        token refresh. The function should: (1) detect expired credentials
        by checking the expiry timestamp, (2) use the refresh token to
        obtain a new access token via the YouTube API, (3) update and
        persist the new credentials (consider using a database or file
        storage), and (4) include comprehensive error handling for invalid
        refresh tokens, network failures, and API errors with appropriate
        logging and exception types.

When a prompt is intercepted, you see this in your terminal:

╔═ PrePrompt +58 ════════════════════════════════════════════╗
║ The rewritten prompt specifies the technical               ║
║ implementation details, clarifies the complete workflow,   ║
║ and adds concrete error scenarios and storage              ║
║ considerations relevant to FastAPI applications.           ║
╠════════════════════════════════════════════════════════════╣
║ ORIGINAL  write a function that handles youtube oauth t... ║
║ OPTIMIZED Write a Python function for FastAPI that handles ║
║           YouTube OAuth 2.0 token refresh. The function    ║
║           should: (1) detect expired credentials by        ║
║           checking the expiry timestamp...                 ║
╚════════════════════════════════════════════════════════════╝

Install

pip (recommended):

pip install preprompt
python scripts/setup_global_hook.py
# Restart Claude Code or Cursor

From source:

git clone https://github.com/yashdeeptehlan/preprompt
cd preprompt && ./scripts/install.sh

How the smart classifier works

  • Pure heuristics, zero API calls — runs on every prompt in under 1ms
  • Scores each prompt based on: ambiguity verbs, multi-requirement density, turn depth, and missing output format signals
  • Only intercepts when score ≥ 38 — simple prompts always pass through
  • Negative signals: short prompts, lookup questions (what is, what does), already-structured prompts all score low and get skipped

SCORE  INTERCEPT  PROMPT
   48  YES        write me a middleware that validates tokens and handles refresh
  -35  no         what is jwt
  -45  no         thanks
   65  YES        refactor this to handle edge cases and manage errors properly
  -20  no         add tests
   70  YES        implement a rate limiter that tracks requests, manages quotas...
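
The scoring idea can be sketched as a toy heuristic. The signal lists and weights below are illustrative assumptions — only the ≥ 38 threshold and the general signal categories come from the description above:

```python
import re

THRESHOLD = 38  # scores at or above this are intercepted (matches the docs)

# Illustrative signals and weights; the real classifier's features differ.
AMBIGUITY_VERBS = {"handle", "handles", "manage", "manages", "refactor", "improve"}
LOOKUP_PREFIXES = ("what is", "what does")

def classify_prompt(prompt: str) -> int:
    words = re.findall(r"[a-z]+", prompt.lower())
    score = 0
    # Positive: vague verbs that hide unstated requirements
    score += 20 * sum(1 for w in words if w in AMBIGUITY_VERBS)
    # Positive: multi-requirement density (requirements chained with "and")
    score += 20 * prompt.lower().count(" and ")
    # Negative: very short prompts pass through
    if len(words) < 4:
        score -= 40
    # Negative: lookup questions pass through
    if prompt.lower().startswith(LOOKUP_PREFIXES):
        score -= 35
    return score

def should_intercept(prompt: str) -> bool:
    return classify_prompt(prompt) >= THRESHOLD
```

With these toy weights, the middleware and refactor prompts from the table clear the threshold while "what is jwt", "thanks", and "add tests" score negative and pass through.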

Stack memory

PrePrompt learns your stack as you work. After a few sessions it knows your language, framework, and style preferences and injects that context into every optimization automatically.

$ preprompt-memory
 PrePrompt — learned stack memory
──────────────────────────────────────────────────────
  language     python           confidence: 0.92  (seen 47x)
  framework    fastapi          confidence: 0.88  (seen 31x)
  database     postgresql       confidence: 0.74  (seen 12x)
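
A minimal sketch of how per-category confidence like the above could be maintained — the counting scheme here is an assumption, not PrePrompt's actual update rule:

```python
from collections import defaultdict

class StackMemory:
    """Counts how often each value is seen per category (language, framework, ...)."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, category: str, value: str) -> None:
        self.counts[category][value] += 1

    def best(self, category: str):
        """Return (value, confidence, seen) for the dominant value, or None."""
        seen = self.counts[category]
        if not seen:
            return None
        total = sum(seen.values())
        value, n = max(seen.items(), key=lambda kv: kv[1])
        return value, n / total, n

mem = StackMemory()
for _ in range(47):
    mem.observe("language", "python")
for _ in range(4):
    mem.observe("language", "javascript")
value, confidence, seen = mem.best("language")  # python dominates: 47 of 51 observations
```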

CLI

preprompt-install          # one-command setup (API key + hooks)
preprompt-history          # recent prompt events across all sessions
preprompt-stats            # optimization stats (total, intercepted, avg score)
preprompt-memory           # learned stack context
preprompt-test-classifier  # test classifier on sample prompts
preprompt-feedback         # rate recent optimizations (builds accept/reject stats)
preprompt-watch            # live feed of interceptions in a second terminal
preprompt-clip             # optimize clipboard (works on macOS, Windows, Linux)
preprompt-optimize "..."   # optimize a prompt from the command line

Cost

PrePrompt uses claude-haiku-4-5 for optimization — the cheapest Claude model. Typical cost: ~$0.001 per intercepted prompt. At 20 complex prompts per day that's roughly $0.60/month. Simple prompts are never sent to the API.
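
The arithmetic behind that estimate:

```python
cost_per_prompt = 0.001           # ~$0.001 per intercepted prompt (Haiku)
intercepted_per_day = 20
monthly_cost = cost_per_prompt * intercepted_per_day * 30
print(f"~${monthly_cost:.2f}/month")  # ~$0.60/month
```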

Architecture

Claude Code / Cursor
        │
        ▼  UserPromptSubmit hook
  pre_prompt.py
  ├── classify_prompt()     ← pure heuristic, <1ms, no API
  ├── [score < 38] ──────► pass through unchanged
  └── [score ≥ 38]
      ├── optimize()        ← Haiku API call with stack context
      ├── write sidecar     ← ~/.preprompt/pending/<uuid>.json
      └── return optimized prompt
        │
        ▼  MCP server (on next tool call)
  flush_pending_hook_events()  ← sidecars → SQLite
  save_prompt_event()
  update_memory_from_prompt()

The hook never holds a database connection — it writes a small JSON sidecar file so there's no lock conflict with the MCP server.
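
The handoff can be sketched like this. The function names match the diagram above; the bodies, the event shape, and the SQLite schema are illustrative assumptions, not PrePrompt's source:

```python
import json
import sqlite3
import uuid
from pathlib import Path

PENDING = Path.home() / ".preprompt" / "pending"
DB_PATH = Path.home() / ".preprompt" / "history.db"

def write_sidecar(event: dict) -> Path:
    """Hook side: write one JSON file per event — no DB connection, no lock."""
    PENDING.mkdir(parents=True, exist_ok=True)
    path = PENDING / f"{uuid.uuid4()}.json"
    path.write_text(json.dumps(event))
    return path

def flush_pending_hook_events() -> int:
    """MCP server side: drain sidecar files into SQLite on the next tool call."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS prompt_events (id TEXT PRIMARY KEY, payload TEXT)"
    )
    flushed = 0
    for path in sorted(PENDING.glob("*.json")):
        conn.execute(
            "INSERT OR IGNORE INTO prompt_events VALUES (?, ?)",
            (path.stem, path.read_text()),
        )
        path.unlink()  # sidecar consumed
        flushed += 1
    conn.commit()
    conn.close()
    return flushed
```

Because the hook only touches the filesystem and the server only touches SQLite, the two processes never contend for the same database lock.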

MCP Tools

Tool                Parameters                                      Returns
optimize_prompt     user_prompt, conversation_history, turn_number  {optimized_prompt, was_intercepted, score, reason}
get_prompt_history  limit (default 20)                              list of recent prompt events for this session

Environment variables

Variable            Default  Description
ANTHROPIC_API_KEY   (none)   Required for the optimizer
MCP_TRANSPORT       stdio    stdio or sse

Storage is always at ~/.preprompt/history.db (created automatically).

Requirements

  • Python 3
  • An Anthropic API key (ANTHROPIC_API_KEY) for the optimizer
  • Claude Code or Cursor

License

MIT — see LICENSE
