
PrePrompt

Your prompts, battle-tested. An MCP server that intercepts and optimizes prompts in Claude Code and Cursor before they reach the LLM.

What it does

Most prompts sent to an LLM are underspecified — they're missing context, output format expectations, or technical constraints that the developer has in their head but didn't type. PrePrompt sits between your keyboard and the model, scores every prompt with a heuristic classifier, and rewrites the complex ones using Claude Haiku before the main model ever sees them. Simple prompts ("what is jwt") pass through untouched in under 1ms. No API cost, no latency, no noise.
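The score-then-dispatch flow described above can be sketched in a few lines. This is an illustrative sketch only: `classify_prompt` and `optimize` here are stand-ins (the real scorer is heuristic and the real `optimize` calls Claude Haiku), but the dispatch logic around the threshold is the point.

```python
THRESHOLD = 38  # prompts scoring below this pass through untouched

def classify_prompt(prompt: str) -> int:
    """Stand-in for the heuristic scorer described in the classifier section."""
    return 50 if " and " in prompt else -20

def optimize(prompt: str) -> str:
    """Stand-in for the Claude Haiku rewrite call."""
    return f"[rewritten] {prompt}"

def preprocess(prompt: str) -> str:
    score = classify_prompt(prompt)
    if score < THRESHOLD:
        return prompt              # simple prompt: no API call, no latency
    return optimize(prompt)        # complex prompt: rewrite before the LLM sees it
```

The key property is that the cheap classifier runs on every prompt, while the paid rewrite runs only on the ones that clear the threshold.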

How it works

BEFORE  write a function that handles youtube oauth token refresh
        and manages expired credentials with error handling

AFTER   Write a Python function for FastAPI that handles YouTube OAuth 2.0
        token refresh. The function should: (1) detect expired credentials
        by checking the expiry timestamp, (2) use the refresh token to
        obtain a new access token via the YouTube API, (3) update and
        persist the new credentials (consider using a database or file
        storage), and (4) include comprehensive error handling for invalid
        refresh tokens, network failures, and API errors with appropriate
        logging and exception types.

When a prompt is intercepted you see this in your terminal:

╔═ PrePrompt +58 ════════════════════════════════════════════╗
║ The rewritten prompt specifies the technical               ║
║ implementation details, clarifies the complete workflow,   ║
║ and adds concrete error scenarios and storage              ║
║ considerations relevant to FastAPI applications.           ║
╠════════════════════════════════════════════════════════════╣
║ ORIGINAL  write a function that handles youtube oauth t... ║
║ OPTIMIZED Write a Python function for FastAPI that handles ║
║           YouTube OAuth 2.0 token refresh. The function    ║
║           should: (1) detect expired credentials by        ║
║           checking the expiry timestamp...                 ║
╚════════════════════════════════════════════════════════════╝

Install

git clone https://github.com/yashdeeptehlan/preprompt
cd preprompt
./scripts/install.sh

Or manually:

pip install -e .
cp .env.example .env    # add your ANTHROPIC_API_KEY
python scripts/setup_global_hook.py
python scripts/install_cursor.py

How the smart classifier works

  • Pure heuristics, zero API calls — runs on every prompt in under 1ms
  • Scores each prompt based on: ambiguity verbs, multi-requirement density, turn depth, and missing output format signals
  • Only intercepts when score ≥ 38 — simple prompts always pass through
  • Negative signals: short prompts, lookup questions ("what is", "what does"), and already-structured prompts all score low and are skipped

SCORE  INTERCEPT  PROMPT
   48  YES        write me a middleware that validates tokens and handles refresh
  -35  no         what is jwt
  -45  no         thanks
   65  YES        refactor this to handle edge cases and manage errors properly
  -20  no         add tests
   70  YES        implement a rate limiter that tracks requests, manages quotas...
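A toy version of such a heuristic scorer might look like this. The stems, weights, and bonus values here are illustrative, not PrePrompt's real ones; only the overall shape (pure string heuristics, positive signals for ambiguity and multi-requirement prompts, penalties for short and lookup prompts, a fixed threshold) matches the description above.

```python
import re

# Illustrative stems and weights -- not PrePrompt's actual classifier.
AMBIGUITY_STEMS = ("handl", "manag", "refactor", "validat", "implement")
LOOKUP_PREFIXES = ("what is", "what does")
THRESHOLD = 38

def classify_prompt(prompt: str) -> int:
    """Return a heuristic complexity score; >= THRESHOLD triggers a rewrite."""
    text = prompt.lower().strip()
    words = text.split()
    score = 0
    if len(words) < 4:
        score -= 40                          # short prompts are skipped cheaply
    if text.startswith(LOOKUP_PREFIXES):
        score -= 30                          # lookup questions need no rewrite
    score += 15 * sum(any(s in w for s in AMBIGUITY_STEMS) for w in words)
    score += 10 * text.count(" and ")        # rough multi-requirement density
    if not re.search(r"\b(json|markdown|table|list|function)\b", text):
        score += 5                           # no output-format signal given
    return score
```

A real implementation would also weigh turn depth and detect already-structured prompts, per the bullets above, but the principle is the same: plain string checks, no API call, microsecond cost.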

Stack memory

PrePrompt learns your stack as you work. After a few sessions it knows your language, framework, and style preferences and injects that context into every optimization automatically.

$ preprompt-memory
 PrePrompt — learned stack memory
──────────────────────────────────────────────────────
  language     python           confidence: 0.92  (seen 47x)
  framework    fastapi          confidence: 0.88  (seen 31x)
  database     postgresql       confidence: 0.74  (seen 12x)
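One plausible way to get count-backed confidences like those above is a saturating function of the observation count. The formula below (`n / (n + 4)`) is an assumption for illustration, not PrePrompt's actual update rule; the structure (observe facets per prompt, report a confidence that grows with repeat sightings) is what the output above implies.

```python
from collections import Counter

class StackMemory:
    """Sketch of count-based stack memory; the confidence formula is assumed."""

    def __init__(self) -> None:
        self.seen: Counter[str] = Counter()      # e.g. "language:python" -> 47

    def observe(self, facet: str, value: str) -> None:
        """Record one sighting of a facet value (called per analyzed prompt)."""
        self.seen[f"{facet}:{value}"] += 1

    def confidence(self, facet: str, value: str) -> float:
        """Saturating confidence: approaches 1.0 as sightings accumulate."""
        n = self.seen[f"{facet}:{value}"]
        return n / (n + 4)
```

With this particular formula, 47 sightings yields a confidence of about 0.92, in line with the example output; the real implementation may compute it differently.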

CLI

preprompt-history          # recent prompt events across all sessions
preprompt-stats            # optimization stats (total, intercepted, avg score)
preprompt-memory           # learned stack context
preprompt-test-classifier  # test classifier on sample prompts

Cost

PrePrompt uses claude-haiku-4-5 for optimization — the cheapest Claude model. Typical cost: ~$0.001 per intercepted prompt. At 20 complex prompts per day that's roughly $0.60/month. Simple prompts are never sent to the API.
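The monthly figure is straight multiplication over the numbers above:

```python
cost_per_prompt = 0.001      # approximate USD per intercepted Haiku call
prompts_per_day = 20         # assumed rate of complex prompts
monthly_cost = cost_per_prompt * prompts_per_day * 30
print(f"${monthly_cost:.2f}/month")   # $0.60/month
```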

Architecture

Claude Code / Cursor
        │
        ▼  UserPromptSubmit hook
  pre_prompt.py
  ├── classify_prompt()     ← pure heuristic, <1ms, no API
  ├── [score < 38] ──────► pass through unchanged
  └── [score ≥ 38]
      ├── optimize()        ← Haiku API call with stack context
      ├── write sidecar     ← ~/.preprompt/pending/<uuid>.json
      └── return optimized prompt
        │
        ▼  MCP server (on next tool call)
  flush_pending_hook_events()  ← sidecars → SQLite
  save_prompt_event()
  update_memory_from_prompt()

The hook never holds a database connection — it writes a small JSON sidecar file so there's no lock conflict with the MCP server.
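A minimal sketch of that sidecar handoff, with the file layout taken from the diagram (one JSON file per event under a pending directory; the function name and event fields are illustrative):

```python
import json
import os
import uuid
from pathlib import Path

def write_sidecar(event: dict, pending_dir: Path) -> Path:
    """Persist one prompt event as its own JSON sidecar file.

    Writing a standalone file per event means the hook never opens the
    SQLite database, so it cannot hold a lock the MCP server needs.
    """
    pending_dir.mkdir(parents=True, exist_ok=True)
    path = pending_dir / f"{uuid.uuid4()}.json"
    tmp = path.with_suffix(".tmp")
    tmp.write_text(json.dumps(event))
    os.replace(tmp, path)   # atomic rename: the flusher never sees partial JSON
    return path
```

On the other side, the MCP server's flush step can simply glob `*.json` in the pending directory, insert each event into SQLite, and delete the sidecar.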

MCP Tools

Tool                 Parameters                                       Returns
optimize_prompt      user_prompt, conversation_history, turn_number   {optimized_prompt, was_intercepted, score, reason}
get_prompt_history   limit (default 20)                               list of recent prompt events for this session
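The optimize_prompt return shape can be modeled as a small dataclass. The field names come from the table above; the example values are taken from the classifier table earlier, and the reason text is illustrative.

```python
from dataclasses import dataclass, asdict

@dataclass
class OptimizeResult:
    optimized_prompt: str
    was_intercepted: bool
    score: int
    reason: str

# Pass-through case: the prompt scored below the threshold and is returned as-is.
result = OptimizeResult(
    optimized_prompt="what is jwt",
    was_intercepted=False,
    score=-35,
    reason="lookup question; no rewrite needed",   # illustrative reason text
)
payload = asdict(result)   # the dict an MCP client receives as JSON
```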

Environment variables

Variable           Default  Description
ANTHROPIC_API_KEY           Required for the optimizer
MCP_TRANSPORT      stdio    stdio or sse
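Reading these in Python follows the usual environment-variable pattern, with the documented default applied for the transport:

```python
import os

transport = os.environ.get("MCP_TRANSPORT", "stdio")   # "stdio" or "sse"
api_key = os.environ.get("ANTHROPIC_API_KEY")          # None if not configured
optimizer_enabled = api_key is not None                # rewrites need the key
```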

Storage is always at ~/.preprompt/history.db (created automatically).

Requirements

Python 3 and an Anthropic API key (set as ANTHROPIC_API_KEY).

License

MIT — see LICENSE
