humanizer-mcp

MCP server for analyzing and humanizing AI-generated text to bypass AI detection.

An MCP (Model Context Protocol) server that measures AI-detection risk in a piece of text and tells you — line by line — what to change. Works with Claude Code, Claude Desktop, and any MCP-compatible client.

Just want to use it? Go to humanizer.analyticadss.com — copy the URL, paste it into Claude's Connectors, done. The rest of this README is for developers and people self-hosting.

Rather than running your prose through a black-box "humanizer," this server analyzes it against known detection signals (vocabulary, burstiness, contraction usage, paragraph uniformity, em dashes, first-person voice) and returns a structured report with a 0–100 risk score and a concrete rewrite plan. The actual rewriting is left to the LLM that's driving the conversation — which is the point: a planner, not a laundering service.

Tools

| Tool | What it does |
| --- | --- |
| `humanizer_humanize_text` | Returns the rewritten text. Applies vocabulary swaps, phrase removal, contractions, em-dash cleanup, plus before/after scores. The LLM caller polishes for context. |
| `humanizer_analyze_ai_tells` | Full analysis with risk score, fix recommendations, and a mechanical rewrite as a starting point. |
| `humanizer_quick_vocab_scan` | Fast word- and phrase-level scan with replacement suggestions. |
| `humanizer_get_rewrite_instructions` | Step-by-step rewrite plan, tailored to text type (blog / business / academic / email / general). |
| `humanizer_compare_before_after` | Side-by-side metrics for an original and a rewrite, with a PASS / IMPROVED / NEEDS MORE WORK verdict. |
| `humanizer_get_banned_words` | The full vocabulary and phrase ban list, for reference. |

Three ways to use it

| Path | Best for | What you do |
| --- | --- | --- |
| Hosted URL (no install, deterministic) | claude.ai, Claude Desktop, Claude for Chrome, including the Free plan | Paste one URL into Settings → Connectors → Add custom connector. |
| Skill (no install, no infra, estimated) | Same surfaces, plus people who don't want to use up their one free-tier connector slot | Upload the `skill/humanizer-mcp/` folder under Settings → Capabilities → Skills. See `skill/README.md`. |
| Local install (uvx / npx) | Claude Code on the terminal, Desktop with stdio | One command in a shell. |

Sharing with non-technical users? Two PDFs live in share/; email either one, no further explanation needed.

Path A — add as a Custom Connector (zero install)

Works in claude.ai (web), Claude Desktop, and Claude for Chrome — all three surfaces share the connector list once you're signed in. Available on every plan including Free (Free is limited to one custom connector).

A hosted reference instance is up — feel free to use it for casual evaluation:

https://humanizer-api.analyticadss.com/mcp

For production / privacy-sensitive use, deploy your own with the included Dockerfile (see Hosting below — Fly.io takes ~3 minutes). The hosted instance is on a free Fly tier with no SLA, no support, and no privacy guarantees — your text passes through it.

To add it to your Claude:

  1. Open Claude → Settings → Connectors.
  2. Click Add custom connector.
  3. Paste the URL above (or your own hosted instance's /mcp URL).
  4. Save. The six humanizer_* tools become available in any chat.

That's the whole install for non-technical users — they never touch a terminal.

Path B — install locally (Claude Code / Desktop with stdio)

```shell
# Claude Code, one line
claude mcp add humanizer -- uvx humanizer-mcp
```

For Claude Desktop with a local stdio server, add this to claude_desktop_config.json:

```json
{
  "mcpServers": {
    "humanizer": {
      "command": "uvx",
      "args": ["humanizer-mcp"]
    }
  }
}
```
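If you script the setup instead of editing the JSON by hand, merge the entry into the existing file rather than overwriting it, since other servers may already be configured. A minimal sketch; the helper name and merge behavior here are illustrative, not part of humanizer-mcp:

```python
import json
import tempfile
from pathlib import Path

def add_humanizer_server(config_path: Path) -> dict:
    """Merge the humanizer stdio entry into claude_desktop_config.json,
    preserving any servers already configured."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    servers = config.setdefault("mcpServers", {})
    servers["humanizer"] = {"command": "uvx", "args": ["humanizer-mcp"]}
    config_path.write_text(json.dumps(config, indent=2))
    return config

# Demo against a temp file standing in for the real config path:
tmp = Path(tempfile.mkdtemp()) / "claude_desktop_config.json"
tmp.write_text('{"mcpServers": {"other": {"command": "foo"}}}')
result = add_humanizer_server(tmp)
```

The `setdefault` call is what keeps a pre-existing `"other"` server intact while adding `"humanizer"` alongside it.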

Config location:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

Other ways to launch the local binary if you don't want uvx:

```shell
pip install humanizer-mcp && humanizer-mcp     # pip
npx humanizer-mcp                              # npm launcher (delegates to uvx/pipx/python3)
```

Try it with the MCP Inspector

```shell
npx @modelcontextprotocol/inspector uvx humanizer-mcp
```

Hosting

To create the URL in Path A, deploy the included Dockerfile. The repo ships with a Render Blueprint and a Fly config:

Render — easiest, free tier, auto-deploys from the GitHub repo:

Deploy to Render

Fly.io — always-on free tier:

```shell
fly launch --copy-config --name humanizer-mcp
fly deploy
```

Anywhere else — the Dockerfile reads PORT from the environment and binds to 0.0.0.0, so it runs on Railway, Heroku, Cloud Run, ECS, or your own box:

```shell
docker build -t humanizer-mcp .
docker run -p 8000:8000 humanizer-mcp
```

Cloudflare Tunnel from your laptop — zero hosting cost, only up while your machine is on:

```shell
brew install cloudflared
pipx install humanizer-mcp        # or: pip install humanizer-mcp
humanizer-mcp --http --port 8000 &
cloudflared tunnel --url http://localhost:8000
# copy the trycloudflare.com URL it prints
```

The MCP endpoint is at /mcp (streamable HTTP). The server is stateless and unauthenticated — anyone with the URL can call the tools, but there are no secrets and no destructive operations to abuse.
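If you want to smoke-test a deployed endpoint without a full MCP client, the first message a client sends is a JSON-RPC 2.0 `initialize` request. A sketch of building that payload; the `protocolVersion` string and `clientInfo` values here are assumptions, use whatever your client library pins:

```python
import json

# JSON-RPC 2.0 "initialize" request, the first message an MCP client
# POSTs to the /mcp endpoint over streamable HTTP.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",   # assumption; check your client
        "capabilities": {},
        "clientInfo": {"name": "smoke-test", "version": "0.0.1"},
    },
}
body = json.dumps(initialize)
```

POST the body to your instance's /mcp URL with `Content-Type: application/json` and an `Accept` header allowing both `application/json` and `text/event-stream`; a healthy server answers with its own capabilities.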

Run the HTTP server locally

```shell
humanizer-mcp --http --port 8000
# point a client at http://127.0.0.1:8000/mcp
```

Verify it works

Once installed via any of the paths above, ask in any Claude chat:

"What humanizer tools do you have available?"

Claude should list six: humanizer_humanize_text, humanizer_analyze_ai_tells, humanizer_quick_vocab_scan, humanizer_get_rewrite_instructions, humanizer_compare_before_after, humanizer_get_banned_words.

Then try the canonical test:

"Score this for AI tells: 'In today's rapidly evolving digital landscape, it's important to note that businesses must leverage cutting-edge solutions to navigate the multifaceted challenges they face.'"

You should get a score in the HIGH bucket (51+), the signals that fired, and a line-by-line fix list.
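To see why that sentence scores HIGH, you can reproduce the phrase-level part of the scan yourself. A toy sketch, where the ban list is a handful of illustrative entries rather than the real one (which humanizer_get_banned_words returns in full):

```python
import re

# A few entries of the kind the server's ban list contains; the real
# list returned by humanizer_get_banned_words is much longer.
BANNED = [
    "rapidly evolving", "it's important to note", "leverage",
    "cutting-edge", "navigate", "multifaceted", "landscape",
]

sample = ("In today's rapidly evolving digital landscape, it's important "
          "to note that businesses must leverage cutting-edge solutions to "
          "navigate the multifaceted challenges they face.")

# Case-insensitive literal match for each banned word/phrase.
hits = [p for p in BANNED if re.search(re.escape(p), sample, re.IGNORECASE)]
# Every entry fires on this sentence, which is why it's the canonical test.
```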

Troubleshooting

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| `claude mcp list` shows server but no tools | `uvx` isn't on `$PATH` for the Claude Code subprocess | `which uvx`; add `~/.local/bin` to `$PATH` in your shell rc |
| `humanizer-mcp: command not found` after pip install | pip user-install bin not on `$PATH` | Use `python3 -m humanizer_mcp` instead; always works |
| Claude Desktop has no hammer icon | Config JSON syntax error | `python3 -m json.tool < claude_desktop_config.json` to validate |
| `npx humanizer-mcp` hangs ~30 s on first run | Launcher shells out to `uvx`, which downloads deps on first use | Wait it out; subsequent runs are instant |
| Render-hosted: 406 spam in logs | Health Check Path is `/mcp`; should be `/health` | Settings → Health & Alerts → set to `/health` |
| Render-hosted: `pip install` killed during build | OOM on free tier (512 MB): pydantic-core native compile spikes | Switch to Fly free tier or upgrade Render to Starter |
| Custom Connector add fails on claude.ai Free | Already at the 1-connector limit | Remove an unused connector, or upgrade plan |

Example prompts

With the server connected to Claude, you can say things like:

  • "Analyze this blog post for AI tells and tell me what to change."
  • "Run a quick vocab scan on this paragraph."
  • "Give me rewrite instructions for this academic abstract — keep it formal but fix the burstiness."
  • "Compare these two drafts. Did my edit actually lower the detection risk?"

Claude picks the right tool automatically.

How the risk score works

The 0–100 score combines eight signals:

  1. AI vocabulary hits — words statistically overrepresented in LLM output (delve, crucial, leverage, myriad, …).
  2. AI phrase hits — cliché structural tells (it's important to note, in the ever-evolving, at the end of the day, …).
  3. Burstiness — coefficient of variation of sentence lengths. AI writing clusters around a single length; humans mix short fragments and long digressions.
  4. Contractions — expanded forms (it is, do not) read as AI-formal; contractions read as conversational.
  5. Paragraph uniformity — AI tends to produce paragraphs of similar size.
  6. Rhetorical questions — near-absent in AI prose above 200 words.
  7. First-person voice — AI avoids I, we, my, our unless prompted.
  8. Em dashes — a ChatGPT signature; heavy use is a strong signal.

Each signal adds to the score independently; the total is clamped to 100 and bucketed into LOW (≤ 20), MEDIUM (21–50), or HIGH (51+).
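A toy version of that combination logic, using just two of the eight signals. The weights and the 0.3 burstiness threshold are made up for illustration; only the bucket boundaries come from the package's documented behavior:

```python
import statistics

def bucket(score: int) -> str:
    """Map a 0-100 risk score to the documented buckets."""
    if score <= 20:
        return "LOW"
    if score <= 50:
        return "MEDIUM"
    return "HIGH"

def burstiness(sentence_lengths: list[int]) -> float:
    # Coefficient of variation: stdev / mean. Low values mean every
    # sentence is about the same length, a classic AI tell.
    return statistics.pstdev(sentence_lengths) / statistics.mean(sentence_lengths)

def toy_score(vocab_hits: int, sentence_lengths: list[int]) -> int:
    score = 10 * vocab_hits                   # vocabulary signal (made-up weight)
    if burstiness(sentence_lengths) < 0.3:    # uniform sentences read as AI
        score += 25
    return min(score, 100)                    # clamp, as the real scorer does

uniform = toy_score(vocab_hits=4, sentence_lengths=[18, 19, 18, 20])  # AI-ish
human = toy_score(vocab_hits=0, sentence_lengths=[4, 31, 12, 22])     # bursty
```

With these inputs the uniform draft lands in HIGH and the bursty one in LOW, which matches the intuition behind signals 1 and 3 above.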

Development

```shell
git clone https://github.com/aousabdo/humanizer-mcp
cd humanizer-mcp
pip install -e ".[dev]"
pytest
```

See CONTRIBUTING.md for more.

License

MIT — see LICENSE.
