humanizer-mcp

MCP server for analyzing and humanizing AI-generated text to bypass AI detection.
An MCP (Model Context Protocol) server that measures AI-detection risk in a piece of text and tells you — line by line — what to change. Works with Claude Code, Claude Desktop, and any MCP-compatible client.
Rather than running your prose through a black-box "humanizer," this server analyzes it against known detection signals (vocabulary, burstiness, contraction usage, paragraph uniformity, em dashes, first-person voice) and returns a structured report with a 0–100 risk score and a concrete rewrite plan. The actual rewriting is left to the LLM that's driving the conversation — which is the point: a planner, not a laundering service.
Tools
| Tool | What it does |
|---|---|
| `humanizer_analyze_ai_tells` | Full analysis with risk score and fix recommendations. |
| `humanizer_quick_vocab_scan` | Fast word- and phrase-level scan with replacement suggestions. |
| `humanizer_get_rewrite_instructions` | Step-by-step rewrite plan, tailored to text type (blog / business / academic / email / general). |
| `humanizer_compare_before_after` | Side-by-side metrics for an original and a rewrite, with a PASS / IMPROVED / NEEDS MORE WORK verdict. |
| `humanizer_get_banned_words` | The full vocabulary and phrase ban list, for reference. |
Two ways to use it
| Path | Best for | What you do |
|---|---|---|
| Hosted URL (no install) | claude.ai, Claude Desktop, Claude for Chrome — including Free plan | Paste one URL into Settings → Connectors → Add custom connector. |
| Local install (uvx / npx) | Claude Code on the terminal, Desktop with stdio | One command in a shell. |
Path A — add as a Custom Connector (zero install)
Works in claude.ai (web), Claude Desktop, and Claude for Chrome — all of these surfaces share the connector list once you're signed in. Available on every plan including Free (Free is limited to one custom connector).
- Open Claude → Settings → Connectors.
- Click Add custom connector.
- Paste the server URL (replace with your hosted instance — see Hosting below): `https://humanizer-mcp.onrender.com/mcp`
- Save. The five `humanizer_*` tools become available in any chat.
That's the whole install for non-technical users — they never touch a terminal.
Path B — install locally (Claude Code / Desktop with stdio)
```shell
# Claude Code, one line
claude mcp add humanizer -- uvx humanizer-mcp
```
For Claude Desktop with a local stdio server, add this to claude_desktop_config.json:
```json
{
  "mcpServers": {
    "humanizer": {
      "command": "uvx",
      "args": ["humanizer-mcp"]
    }
  }
}
```
Config location:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`
Other ways to launch the local binary if you don't want uvx:
```shell
pip install humanizer-mcp && humanizer-mcp   # pip
npx humanizer-mcp                            # npm launcher (delegates to uvx/pipx/python3)
```
Try it with the MCP Inspector
```shell
npx @modelcontextprotocol/inspector uvx humanizer-mcp
```
Hosting
To create the URL in Path A, deploy the included Dockerfile. The repo ships with a Render Blueprint and a Fly config:
Render — easiest, free tier, auto-deploys from the GitHub repo.
Fly.io — always-on free tier:
```shell
fly launch --copy-config --name humanizer-mcp
fly deploy
```
Anywhere else — the Dockerfile reads PORT from the environment and binds to 0.0.0.0, so it runs on Railway, Heroku, Cloud Run, ECS, or your own box:
```shell
docker build -t humanizer-mcp .
docker run -p 8000:8000 humanizer-mcp
```
The MCP endpoint is at /mcp (streamable HTTP). The server is stateless and unauthenticated — anyone with the URL can call the tools, but there are no secrets and no destructive operations to abuse.
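As a sanity check, assuming the endpoint follows the standard MCP streamable-HTTP transport, the first call a client makes is a plain JSON-RPC `initialize` POST to `/mcp`. The body looks roughly like this (the client name and version here are placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "0.0.1" }
  }
}
```

If the server answers with its own capabilities and server info, the URL is wired up correctly; any MCP client will handle this handshake for you.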
Run the HTTP server locally
```shell
humanizer-mcp --http --port 8000
# point a client at http://127.0.0.1:8000/mcp
```
Example prompts
With the server connected to Claude, you can say things like:
- "Analyze this blog post for AI tells and tell me what to change."
- "Run a quick vocab scan on this paragraph."
- "Give me rewrite instructions for this academic abstract — keep it formal but fix the burstiness."
- "Compare these two drafts. Did my edit actually lower the detection risk?"
Claude picks the right tool automatically.
How the risk score works
The 0–100 score combines eight signals:
- AI vocabulary hits — words statistically overrepresented in LLM output (`delve`, `crucial`, `leverage`, `myriad`, …).
- AI phrase hits — cliché structural tells (`it's important to note`, `in the ever-evolving`, `at the end of the day`, …).
- Burstiness — coefficient of variation of sentence lengths. AI writing clusters around a single length; humans mix short fragments and long digressions.
- Contractions — expanded forms (*it is*, *do not*) read as AI-formal; contractions read as conversational.
- Paragraph uniformity — AI tends to produce paragraphs of similar size.
- Rhetorical questions — near-absent in AI prose above 200 words.
- First-person voice — AI avoids *I*, *we*, *my*, *our* unless prompted.
- Em dashes — a ChatGPT signature; heavy use is a strong signal.
Each signal adds to the score independently; the total is clamped to 100 and bucketed into LOW (≤ 20), MEDIUM (21–50), or HIGH (51+).
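To make the mechanics concrete, here is a sketch (not the server's actual implementation — the function names and per-signal weights are hypothetical) of the burstiness signal and the clamp-and-bucket step:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths, in words.
    Values near 0 mean uniform, AI-like pacing; human prose is bumpier."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0

def total_risk(signal_scores: list[int]) -> tuple[int, str]:
    """Sum independent per-signal penalties, clamp to 100, and bucket."""
    score = min(sum(signal_scores), 100)
    if score <= 20:
        band = "LOW"
    elif score <= 50:
        band = "MEDIUM"
    else:
        band = "HIGH"
    return score, band
```

For example, hypothetical penalties of 15, 10, and 30 sum to 55, which lands in the HIGH band; penalties of 60 and 60 clamp to 100 rather than overflowing the scale.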
Development
```shell
git clone https://github.com/aousabdo/humanizer-mcp
cd humanizer-mcp
pip install -e ".[dev]"
pytest
```
See CONTRIBUTING.md for more.
License
MIT — see LICENSE.