
MCP server for analyzing and humanizing AI-generated text to bypass AI detection.

Project description

humanizer-mcp


An MCP (Model Context Protocol) server that measures AI-detection risk in a piece of text and tells you — line by line — what to change. Works with Claude Code, Claude Desktop, and any MCP-compatible client.

Rather than running your prose through a black-box "humanizer," this server analyzes it against known detection signals (vocabulary, burstiness, contraction usage, paragraph uniformity, em dashes, first-person voice) and returns a structured report with a 0–100 risk score and a concrete rewrite plan. The actual rewriting is left to the LLM that's driving the conversation — which is the point: a planner, not a laundering service.
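The server's actual report schema isn't documented on this page, so the shape below is purely illustrative — the field names are hypothetical, not the tool's output format. It just shows what a "planner, not a laundering service" report looks like in practice: a score, the signals that drove it, and instructions for the driving LLM.

```python
# Hypothetical shape of an analysis report. Field names are illustrative,
# not the server's documented schema.
report = {
    "risk_score": 62,           # 0-100; higher means more AI-like
    "risk_level": "HIGH",       # LOW (<= 20) / MEDIUM (21-50) / HIGH (51+)
    "signals": {
        "ai_vocabulary_hits": ["delve", "leverage"],
        "burstiness_cv": 0.18,  # low variation in sentence length
        "contraction_ratio": 0.0,
    },
    "rewrite_plan": [
        "Replace 'delve' and 'leverage' with plainer verbs.",
        "Vary sentence length: mix short fragments with longer sentences.",
        "Use contractions where the register allows.",
    ],
}

assert 0 <= report["risk_score"] <= 100
```

The rewrite_plan entries are what the client LLM executes; the server itself never rewrites anything.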

Tools

  • humanizer_analyze_ai_tells: full analysis with a risk score and fix recommendations.
  • humanizer_quick_vocab_scan: fast word- and phrase-level scan with replacement suggestions.
  • humanizer_get_rewrite_instructions: step-by-step rewrite plan, tailored to text type (blog / business / academic / email / general).
  • humanizer_compare_before_after: side-by-side metrics for an original and a rewrite, with a PASS / IMPROVED / NEEDS MORE WORK verdict.
  • humanizer_get_banned_words: the full vocabulary and phrase ban list, for reference.

Two ways to use it

  • Hosted URL (no install): best for claude.ai, Claude Desktop, and Claude for Chrome, including the Free plan. Paste one URL into Settings → Connectors → Add custom connector.
  • Local install (uvx / npx): best for Claude Code on the terminal, or for Desktop with a stdio server. One command in a shell.

Path A — add as a Custom Connector (zero install)

Works in claude.ai (web), Claude Desktop, and Claude for Chrome — all three surfaces share the connector list once you're signed in. Available on every plan, including Free (Free is limited to one custom connector).

  1. Open Claude → Settings → Connectors.
  2. Click Add custom connector.
  3. Paste the server URL (replace with your hosted instance — see Hosting below):
    https://humanizer-mcp.onrender.com/mcp
    
  4. Save. The five humanizer_* tools become available in any chat.

That's the whole install for non-technical users — they never touch a terminal.

Path B — install locally (Claude Code / Desktop with stdio)

# Claude Code, one line
claude mcp add humanizer -- uvx humanizer-mcp

For Claude Desktop with a local stdio server, add this to claude_desktop_config.json:

{
  "mcpServers": {
    "humanizer": {
      "command": "uvx",
      "args": ["humanizer-mcp"]
    }
  }
}

Config location:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json
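If you'd rather script the edit than open the JSON by hand, a merge like the following works. The helper below is just a sketch (it's not part of humanizer-mcp); it preserves any servers already in the file and uses the exact entry shown above.

```python
import json
from pathlib import Path


def add_humanizer_server(config_path: Path) -> None:
    """Merge the humanizer entry into claude_desktop_config.json,
    keeping any existing mcpServers entries intact."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("mcpServers", {})["humanizer"] = {
        "command": "uvx",
        "args": ["humanizer-mcp"],
    }
    config_path.write_text(json.dumps(config, indent=2))


# Example, using the macOS path from the list above:
# add_humanizer_server(
#     Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"
# )
```

Restart Claude Desktop after the change so it re-reads the config.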

Other ways to launch the local binary if you don't want uvx:

pip install humanizer-mcp && humanizer-mcp     # pip
npx humanizer-mcp                              # npm launcher (delegates to uvx/pipx/python3)

Try it with the MCP Inspector

npx @modelcontextprotocol/inspector uvx humanizer-mcp

Hosting

To create the URL in Path A, deploy the included Dockerfile. The repo ships with a Render Blueprint and a Fly config:

Render — easiest, free tier, auto-deploys from the GitHub repo:

Deploy to Render

Fly.io — always-on free tier:

fly launch --copy-config --name humanizer-mcp
fly deploy

Anywhere else — the Dockerfile reads PORT from the environment and binds to 0.0.0.0, so it runs on Railway, Heroku, Cloud Run, ECS, or your own box:

docker build -t humanizer-mcp .
docker run -p 8000:8000 humanizer-mcp

The MCP endpoint is at /mcp (streamable HTTP). The server is stateless and unauthenticated — anyone with the URL can call the tools, but there are no secrets and no destructive operations to abuse.
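For a quick smoke test of a deployed instance without a full MCP client, you can POST a JSON-RPC initialize request to the endpoint. The snippet below only builds the request body (the protocolVersion and clientInfo values are illustrative); the commented curl line shows how to actually send it against the example instance above.

```python
import json

# JSON-RPC 2.0 initialize request, per the MCP streamable-HTTP transport.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "smoke-test", "version": "0.0.1"},
    },
}

# Equivalent curl (the transport expects both Accept types):
# curl -X POST https://humanizer-mcp.onrender.com/mcp \
#   -H 'Content-Type: application/json' \
#   -H 'Accept: application/json, text/event-stream' \
#   -d "$(python -c 'import json; ...')"
print(json.dumps(payload))
```

A healthy server answers with an initialize result naming its implementation and capabilities.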

Run the HTTP server locally

humanizer-mcp --http --port 8000
# point a client at http://127.0.0.1:8000/mcp

Example prompts

With the server connected to Claude, you can say things like:

  • "Analyze this blog post for AI tells and tell me what to change."
  • "Run a quick vocab scan on this paragraph."
  • "Give me rewrite instructions for this academic abstract — keep it formal but fix the burstiness."
  • "Compare these two drafts. Did my edit actually lower the detection risk?"

Claude picks the right tool automatically.

How the risk score works

The 0–100 score combines eight signals:

  1. AI vocabulary hits — words statistically overrepresented in LLM output (delve, crucial, leverage, myriad, …).
  2. AI phrase hits — cliché structural tells (it's important to note, in the ever-evolving, at the end of the day, …).
  3. Burstiness — coefficient of variation of sentence lengths. AI writing clusters around a single length; humans mix short fragments and long digressions.
  4. Contractions — expanded forms (it is, do not) read as AI-formal; contractions read as conversational.
  5. Paragraph uniformity — AI tends to produce paragraphs of similar size.
  6. Rhetorical questions — near-absent in AI prose above 200 words.
  7. First-person voice — AI avoids I, we, my, our unless prompted.
  8. Em dashes — a ChatGPT signature; heavy use is a strong signal.

Each signal adds to the score independently; the total is clamped to 100 and bucketed into LOW (≤ 20), MEDIUM (21–50), or HIGH (51+).
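Two of those pieces are easy to sketch. The code below is not the server's actual implementation — just a minimal illustration of burstiness as the coefficient of variation of sentence lengths, and of the LOW / MEDIUM / HIGH bucketing described above.

```python
import re
from statistics import mean, pstdev


def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths, in words.
    Low values mean uniform sentences -- a common AI tell."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2 or mean(lengths) == 0:
        return 0.0
    return pstdev(lengths) / mean(lengths)


def risk_bucket(score: int) -> str:
    """Clamp a score to 0-100 and bucket it as described above."""
    score = max(0, min(100, score))
    if score <= 20:
        return "LOW"
    if score <= 50:
        return "MEDIUM"
    return "HIGH"
```

Three identical sentences give a burstiness of 0.0; mixing a two-word fragment with a long digression pushes it well above zero, which is the pattern human prose tends to show.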

Development

git clone https://github.com/aousabdo/humanizer-mcp
cd humanizer-mcp
pip install -e ".[dev]"
pytest

See CONTRIBUTING.md for more.

License

MIT — see LICENSE.

Download files

Source Distribution

humanizer_mcp-0.1.1.tar.gz (16.3 kB)

Built Distribution

humanizer_mcp-0.1.1-py3-none-any.whl (15.0 kB)

File details

Details for the file humanizer_mcp-0.1.1.tar.gz.

File metadata

  • Download URL: humanizer_mcp-0.1.1.tar.gz
  • Upload date:
  • Size: 16.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for humanizer_mcp-0.1.1.tar.gz

  • SHA256: 633ea7418ec941f6d5545d70b751b0fcd2214a0b30171469db92fce1924d8c65
  • MD5: a50839c149d1ffa5cc7fd436822a3ff4
  • BLAKE2b-256: 53f6ca54013387e06863ac52be8ced8e5f626df923ee5b2f954e741768235924


Provenance

The following attestation bundles were made for humanizer_mcp-0.1.1.tar.gz:

Publisher: publish.yml on aousabdo/humanizer-mcp

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file humanizer_mcp-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: humanizer_mcp-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 15.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for humanizer_mcp-0.1.1-py3-none-any.whl

  • SHA256: b9bfb2f3355083e04c91e6fb05ea4a43c97e7201f1205b90afce8d9c4b4f0378
  • MD5: 5f22907b30a5ad73ee5f40f62cad76ee
  • BLAKE2b-256: 61f8d3b87e0a64d2119330a6e56a87289f376a4b150b8159dc87ca11dc1c1654


Provenance

The following attestation bundles were made for humanizer_mcp-0.1.1-py3-none-any.whl:

Publisher: publish.yml on aousabdo/humanizer-mcp

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
