promptc
Measure worst-case skill-context exposure in your Claude Code setup.
Status: v0.1.0 in development. Not yet published to PyPI; install from source.
What this is for
If you've been collecting Claude Code skills for a while, your .claude/
has probably grown faster than your attention to it. The same security
rule lives in three different SKILL.md files. A skill description has
gone stale and Claude is loading the whole 5,000-token body just to
figure out it doesn't apply. There's no quick way to see this.
promptc analyze is a local-only audit: it walks your .claude/,
catches duplicated paragraphs across skills, computes a worst-case
"if Claude loads the body instead of just the description" multiplier,
and gives you a Grade — A through F — so you can tell at a glance
whether your setup is clean or has accumulated context debt.
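promptc's exact duplicate-detection heuristics aren't spelled out here; the roadmap only notes that the current dedup is Jaccard-based rather than semantic. A minimal sketch of that idea — word-shingle Jaccard similarity between paragraphs — might look like the following. The shingle size (5 words) and the 0.8 threshold are illustrative assumptions, not promptc's actual values:

```python
def shingles(text: str, k: int = 5) -> set:
    """Set of k-word shingles for a paragraph."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def near_duplicates(paragraphs: list[str], threshold: float = 0.8) -> list[tuple[int, int]]:
    """Pairs of paragraph indices whose shingle sets overlap past the threshold."""
    sets = [shingles(p) for p in paragraphs]
    return [
        (i, j)
        for i in range(len(sets))
        for j in range(i + 1, len(sets))
        if jaccard(sets[i], sets[j]) >= threshold
    ]
```

An exact copy-paste scores 1.0; a paragraph pasted with minor wording tweaks (the "near" duplicates in the sample output) still shares most of its shingles and lands above a high threshold, while unrelated paragraphs share almost none.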
All analysis runs locally. No data leaves your machine. No API keys required.
Install
git clone https://github.com/edenfunf/promptc
cd promptc
pip install -e .
(PyPI release lands with v0.1.1.)
Usage
promptc analyze . # scans ./.claude/ if present, else .
promptc analyze .claude/
promptc analyze . --no-html --format json
Run promptc analyze --help for all flags (--threshold, --min-words,
--exclude, --format, --open, …).
Sample output
Running on the included demo fixture:
$ promptc analyze examples/bloated-demo
+------------------------------------------------------------------------+
| |
| D+ |
| |
| 461 tokens of duplicate content across 5 files |
| (32% of 1,446 scanned tokens) |
| |
| Plus 23.7x worst-case context exposure on top. |
| |
| Top offenders below. |
| |
+------------------------------------------------------------------------+
File Role Total Body Desc Dup Dup%
skills/code-review/SKILL.md skill 286 270 8 38 14%
skills/python-style/SKILL.md skill 290 272 10 129 47%
skills/security/SKILL.md skill 314 291 16 99 34%
skills/sql-safety/SKILL.md skill 284 266 9 134 50%
skills/testing/SKILL.md skill 272 251 14 61 24%
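The 23.7x figure in the hero panel can be reproduced from the table above. This assumes the aggregate multiplier is total body tokens over total description tokens — the glossary's per-skill definition applied to the sums, which the report itself doesn't state explicitly:

```python
# (body, desc) token counts copied from the table above
skills = {
    "code-review":  (270, 8),
    "python-style": (272, 10),
    "security":     (291, 16),
    "sql-safety":   (266, 9),
    "testing":      (251, 14),
}

total_body = sum(b for b, _ in skills.values())  # 1350
total_desc = sum(d for _, d in skills.values())  # 57
multiplier = total_body / total_desc
print(f"{multiplier:.1f}x")  # 23.7x
```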
Top duplicate groups:
1. 244 tokens wasted (5 chunks, exact): all 5 SKILL.md files share
an identical "Boundary discipline" paragraph
2. 152 tokens wasted (5 chunks, near): an SQL-injection-prevention
rule has been pasted across all 5 skills with minor wording tweaks
...
Full report: ./promptc-report.html
The HTML report adds a hero panel, a per-skill exposure breakdown, a side-by-side duplicate-rule view, and a methodology section that explains every formula.
What promptc does NOT do
- Cursor: .cursor/rules/*.mdc is not yet scanned. If .cursor/ is detected next to .claude/, the CLI prints a warning so the gap is explicit. Tracked for v0.2.
- Token-cost dollars: the report speaks in tokens; conversion to a $ figure for a specific Claude pricing tier is a v0.2 nice-to-have.
- Auto-fix: promptc diagnoses, you decide. There is no promptc apply that rewrites your skills.
- Telemetry / phone-home: nothing. Local files in, local report out.
Glossary
- SKILL.md — the entrypoint file for a Claude Code skill. Lives at .claude/skills/<name>/SKILL.md. Only this file gets the skill role; supporting files (templates, references, examples) under the same directory are scanned as other.
- Duplicate-content ratio (a.k.a. bloat ratio) — share of total scanned tokens that promptc flagged as a near-duplicate of content elsewhere. Includes duplicates from supporting docs, not just SKILL.md bodies. Drives the Grade.
- Promised load — tokens for a SKILL.md's description frontmatter value. This is what Claude Code's docs say loads at session start.
- Worst-case load — tokens for a SKILL.md's body: the upper-bound cost if the body ends up in context instead of just the description.
- Exposure multiplier — body_tokens / description_tokens. Both sides are content-only; frontmatter overhead is excluded from both.
- Insufficient state — promptc shows "Not enough to audit yet" when there are fewer than 3 SKILL.md files OR fewer than 1,000 aggregate body tokens across skills. Avoids misleading A+ grades on tiny setups.
The HTML report has a "How this is graded" section with the full formulae and threshold table.
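The insufficient-state gate from the glossary is simple enough to express directly. This is a sketch of the stated rule, not promptc's actual code:

```python
def enough_to_audit(skill_body_tokens: list[int]) -> bool:
    """Audit only when there are >= 3 SKILL.md files AND >= 1,000 aggregate body tokens."""
    return len(skill_body_tokens) >= 3 and sum(skill_body_tokens) >= 1000

# The demo fixture: 5 skills, 1,350 aggregate body tokens -> auditable
print(enough_to_audit([270, 272, 291, 266, 251]))  # True
```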
Roadmap
- v0.1.x — --output PATH flag; cross-language SDK path detector (down-weights duplicate clusters whose paths differ only in /python/, /go/, etc.); broader tokenizer calibration across CJK and pure-code corpora (an initial English mixed prose/code sample shows cl100k_base underestimates Claude tokens by ~18%).
- v0.2 — .cursor/rules/*.mdc scanning; methodology calibration (replace the heuristic A/B/C/D/F thresholds with a reference distribution derived from real .claude/ directories); --watch; a share affordance for the HTML report.
- v0.3+ — semantic dedup (embedding-based, not just Jaccard); cross-binding language detector; $-cost conversion.
Development
pip install -e ".[dev]"
pytest
ruff check .
License
MIT