Charles Proxy MCP server with live capture, structured traffic analysis, and agent-friendly tool contracts
Charles MCP Server
Docs | Tool Contract | AGENTS | Agent Workflow Guide | Chinese README
Charles MCP Server connects Charles Proxy to MCP clients so an agent can inspect live traffic, analyze saved recordings, and expand individual requests only when needed.
It focuses on three things:
- reading incremental traffic from the current Charles session while recording is still active
- keeping live and history analysis on structured paths instead of exposing raw dump dictionaries first
- using summary-first outputs so the agent can find hotspots before pulling detail
Release Direction (v3.0)
For v3.0, charles-mcp is moving from pure traffic inspection toward reverse-engineering workflows.
- On top of existing live/history analysis, it now exposes a reverse-analysis tool surface (import, query, decode, replay, signature candidate discovery, and live reverse sessions).
- The goal is to let an agent go beyond traffic browsing and build an end-to-end reverse workflow around auth flows, signatures, parameter mutation, and replayability.
Quick Start
1. Enable the Charles Web Interface
In Charles, open: Proxy -> Web Interface Settings
Make sure:
- "Enable web interface" is checked
- username is `admin`
- password is `123456`
2. Install and configure your MCP client
No cloning, no manual virtualenv. Requires uv.
Claude Code CLI
```bash
claude mcp add-json charles '{
  "type": "stdio",
  "command": "uvx",
  "args": ["charles-mcp"],
  "env": {
    "CHARLES_USER": "admin",
    "CHARLES_PASS": "123456",
    "CHARLES_MANAGE_LIFECYCLE": "false"
  }
}'
```
Claude Desktop / Cursor / generic JSON config
```json
{
  "mcpServers": {
    "charles": {
      "command": "uvx",
      "args": ["charles-mcp"],
      "env": {
        "CHARLES_USER": "admin",
        "CHARLES_PASS": "123456",
        "CHARLES_MANAGE_LIFECYCLE": "false"
      }
    }
  }
}
```
Codex CLI
```toml
[mcp_servers.charles]
command = "uvx"
args = ["charles-mcp"]

[mcp_servers.charles.env]
CHARLES_USER = "admin"
CHARLES_PASS = "123456"
CHARLES_MANAGE_LIFECYCLE = "false"
```
Auto-install via AI agent
Copy-paste the following prompt into any AI agent (Claude Code, ChatGPT, Gemini CLI, Cursor Agent, etc.) and it will install and configure charles-mcp automatically:
Auto-install prompt (recommended):
Install the "charles-mcp" MCP server and configure it for my MCP client. Follow these steps exactly:
Step 1 — Detect OS:
Determine if this machine runs Windows, macOS, or Linux.
Step 2 — Ensure uv is installed:
Run: uv --version
If the command fails (uv not found):
- macOS/Linux: run: curl -LsSf https://astral.sh/uv/install.sh | sh
- Windows: run: powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
After installing, verify uv works: uv --version
Step 3 — Detect which MCP client is installed:
Check in this order and use the FIRST match:
a) Claude Code — run: claude --version
If it succeeds, run this command and skip to Step 5:
claude mcp add-json charles '{"type":"stdio","command":"uvx","args":["charles-mcp"],"env":{"CHARLES_USER":"admin","CHARLES_PASS":"123456","CHARLES_MANAGE_LIFECYCLE":"false"}}'
b) Claude Desktop — check if config file exists:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
c) Cursor — check if any of these exist:
- ~/.cursor/mcp.json
- .cursor/mcp.json (in current project)
d) Windsurf — check if exists:
- ~/.codeium/windsurf/mcp_config.json
If none detected, ask the user which client they use.
Step 4 — Write config (for clients b/c/d):
The config entry to add is:
"charles": {
"command": "uvx",
"args": ["charles-mcp"],
"env": {
"CHARLES_USER": "admin",
"CHARLES_PASS": "123456",
"CHARLES_MANAGE_LIFECYCLE": "false"
}
}
IMPORTANT: If the config file already exists, read it first, parse the JSON,
add the "charles" key inside "mcpServers" (create "mcpServers" if absent),
and write back. Do NOT overwrite other existing MCP server entries.
If the file does not exist, create it with:
{ "mcpServers": { "charles": { ... } } }
Step 5 — Verify:
Run: uvx charles-mcp
Wait 3 seconds then terminate the process.
If it starts without import errors, the installation is successful.
Step 6 — Report:
Print: "charles-mcp installed successfully. Please restart your MCP client to load the new server."
Also mention that Charles Proxy must be running with Web Interface enabled (Proxy → Web Interface Settings, username: admin, password: 123456).
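The merge rule in Step 4 can be sketched in Python. This is a hypothetical helper (not part of charles-mcp or any client): it reads an existing client config if present, adds the `charles` entry under `mcpServers`, and writes the result back without clobbering other configured servers.

```python
import json
from pathlib import Path

# The entry from Step 4, verbatim.
CHARLES_ENTRY = {
    "command": "uvx",
    "args": ["charles-mcp"],
    "env": {
        "CHARLES_USER": "admin",
        "CHARLES_PASS": "123456",
        "CHARLES_MANAGE_LIFECYCLE": "false",
    },
}

def merge_charles_entry(config_path: Path) -> dict:
    """Read an existing client config (if any), add the charles server,
    and write the merged result back. Other entries are preserved."""
    if config_path.exists():
        config = json.loads(config_path.read_text())
    else:
        config = {}
    # Create "mcpServers" if absent, then add (or replace) only "charles".
    config.setdefault("mcpServers", {})["charles"] = CHARLES_ENTRY
    config_path.write_text(json.dumps(config, indent=2))
    return config
```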
Requirements
- Python 3.10+
- Charles Proxy running locally
- Charles Web Interface enabled
- Charles proxy listening on `127.0.0.1:8888`
`CHARLES_MANAGE_LIFECYCLE=false` is the recommended default: unless you explicitly want the MCP server to manage the Charles lifecycle, it should not be able to shut down your own Charles process.
Environment Variables
| Variable | Default | Purpose |
|---|---|---|
| `CHARLES_USER` | `admin` | Charles Web Interface username |
| `CHARLES_PASS` | `123456` | Charles Web Interface password |
| `CHARLES_PROXY_HOST` | `127.0.0.1` | Charles proxy host |
| `CHARLES_PROXY_PORT` | `8888` | Charles proxy port |
| `CHARLES_CONFIG_PATH` | auto-detect | Charles config file path |
| `CHARLES_REQUEST_TIMEOUT` | `10` | Control-plane HTTP timeout in seconds |
| `CHARLES_MAX_STOPTIME` | `3600` | Maximum bounded recording length |
| `CHARLES_MANAGE_LIFECYCLE` | `false` | Whether the MCP server should manage Charles startup and shutdown |
| `CHARLES_REVERSE_STATE_DIR` | `${CHARLES_STATE_DIR}/reverse` | State root for reverse-analysis artifacts and SQLite data |
| `CHARLES_VNEXT_STATE_DIR` | legacy | Legacy reverse-analysis state root; on first startup, charles-mcp migrates it into `CHARLES_REVERSE_STATE_DIR` automatically |
Recommended Flows
Live analysis
`start_live_capture` → `group_capture_analysis` → `query_live_capture_entries` → `get_traffic_entry_detail` → `stop_live_capture`
This path is optimized for finding hotspots first, then drilling down into one confirmed request.
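The cursor semantics that make this flow safe to repeat can be modeled in a few lines. This is a toy model for intuition, not the server's implementation: `read` consumes the increment and advances the cursor, while `peek` returns the same increment without advancing it.

```python
class LiveCaptureCursor:
    """Toy model of live-capture read/peek semantics."""

    def __init__(self, entries):
        self.entries = entries   # all entries seen so far, oldest first
        self.cursor = 0          # index of the first unconsumed entry

    def peek(self):
        """Return new entries without moving the cursor."""
        return self.entries[self.cursor:]

    def read(self):
        """Return new entries and advance the cursor past them."""
        new = self.entries[self.cursor:]
        self.cursor = len(self.entries)
        return new
```

Peeking twice yields the same increment; reading consumes it, so a second read returns nothing until new traffic arrives.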
History analysis
`list_recordings` → `analyze_recorded_traffic` → `group_capture_analysis(source="history")` → `get_traffic_entry_detail`
This path is optimized for browsing saved recordings and then drilling into selected entries.
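The "hotspot first" idea behind both flows reduces to a simple aggregation over route summaries. The sketch below is illustrative (the grouping logic is not the server's code), but the entry shape mirrors the documented summary fields (`host`, `method`, `path`, `status`):

```python
from collections import Counter

def hotspots_by_host(entries):
    """Rank hosts by request count, busiest first."""
    counts = Counter(e["host"] for e in entries)
    return counts.most_common()

# Example route summaries, as a summary-first tool might return them.
entries = [
    {"host": "api.example.com", "method": "GET", "path": "/v1/user", "status": 200},
    {"host": "api.example.com", "method": "POST", "path": "/v1/login", "status": 401},
    {"host": "cdn.example.com", "method": "GET", "path": "/app.js", "status": 200},
]
```

An agent would look at the top of this ranking, pick one entry, and only then pull full detail for it.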
Current Version Highlights (v3.0.2)
- The default public tool surface is now tightened to 31 canonical tools; legacy aliases (`filter_func`, `proxy_by_time`, `list_sessions`) are no longer exposed by default.
- Added explicit compatibility toggles: `create_server(expose_legacy_tools=True)` or `CHARLES_EXPOSE_LEGACY_TOOLS=true`.
- Documentation entrypoints now converge on `docs/README.md`, with `docs/migrations/legacy-tools.md` as the authoritative legacy migration guide.
- Added agent execution docs: root-level `AGENTS.md` and task-oriented `docs/agent-workflows.md`.
- Added agent-doc entry links in README and `docs/contracts/tools.md` using repository-relative paths.
- Added minimal guidance semantics to high-frequency entry-tool descriptions (identity preservation, summary-first, peek/read behavior), with contract tests to prevent drift.
- The product direction now explicitly includes reverse engineering: a reverse-analysis tool surface is available for import, decode, replay, signature-candidate discovery, and live reverse workflows.
- `read_live_capture` and `peek_live_capture` now return route-level summary fields only, such as `host`, `method`, `path`, and `status`, instead of raw Charles entries. This keeps repeated polling from blowing up the context window.
- `query_live_capture_entries` is now a read-only analysis path and does not advance the live cursor. You can reuse the same `capture_id` with different filters without consuming the historical increment.
- `analyze_recorded_traffic` and `query_live_capture_entries` summaries now expose `matched_fields` and `match_reasons`, so an agent can explain why a request was selected.
- `get_traffic_entry_detail` now defaults to `include_full_body=false` and `max_body_chars=2048`. When the estimated detail payload exceeds about 12,000 characters, the tool adds a warning suggesting a narrower request.
- Summary and detail output automatically strip `null` values and hide internal fields such as `header_map`, `parsed_json`, `parsed_form`, and `lower_name`. Use the `headers` list when you need header values.
Tool Catalog
This README documents the recommended tool surface only. Compatibility-only aliases are intentionally not explained here.
Live capture tools
| Tool | What it does | Typical use |
|---|---|---|
| `start_live_capture` | Starts or adopts the current live capture and returns `capture_id` | Before realtime inspection begins |
| `read_live_capture` | Reads incremental live entries by cursor and returns compact route summaries only | When consuming new traffic continuously and you only need host/path/status first |
| `peek_live_capture` | Previews new live entries without advancing the cursor and returns compact route summaries only | When you want to inspect new traffic without moving the reader state |
| `stop_live_capture` | Stops the capture and optionally persists a snapshot | When closing or exporting a live session |
| `query_live_capture_entries` | Produces structured summary output for a live capture without advancing the cursor | When repeatedly filtering high-value requests out of current traffic |
Analysis tools
| Tool | What it does | Typical use |
|---|---|---|
| `group_capture_analysis` | Aggregates live or history traffic by group key | When you want the lowest-token hotspot view |
| `get_capture_analysis_stats` | Returns coarse traffic class counts | When you want a quick distribution view |
| `get_traffic_entry_detail` | Loads detail for one specific entry and warns when the payload is too large | After you have already identified a target `entry_id` |
| `analyze_recorded_traffic` | Produces structured summary output for a saved recording with match reasons | When analyzing a `.chlsj` snapshot |
History tools
| Tool | What it does | Typical use |
|---|---|---|
| `list_recordings` | Lists saved recording files | Before choosing a historical snapshot |
| `get_recording_snapshot` | Loads the raw content of one saved recording | When you need the stored snapshot itself |
| `query_recorded_traffic` | Applies lightweight filtering to the latest saved recording | When you need a quick host, method, or regex query |
Status and control tools
| Tool | What it does | Typical use |
|---|---|---|
| `charles_status` | Reports Charles connectivity and active capture state | When checking whether Charles is reachable or a capture is still active |
| `throttling` | Applies a Charles network throttling preset | When simulating 3G, 4G, or 5G, or disabling throttling |
| `reset_environment` | Restores Charles configuration and clears the current environment | When you need to return to a clean baseline |
Reverse analysis tools
| Tool | What it does | Typical use |
|---|---|---|
| `reverse_import_session` | Imports an official Charles XML or native session into the canonical reverse store | When starting a replay, decode, or signature workflow from saved exports |
| `reverse_list_captures` | Lists imported reverse-analysis captures | When choosing a capture already stored in the reverse SQLite plane |
| `reverse_query_entries` | Filters imported reverse entries by route fields | When narrowing the candidate request set before detail or replay |
| `reverse_get_entry_detail` | Returns canonical detail for one imported reverse entry | When inspecting one baseline request deeply |
| `reverse_decode_entry_body` | Decodes a stored request or response body, including protobuf with descriptors | When you need structured payload understanding |
| `reverse_replay_entry` | Replays one imported request with optional mutations | When validating whether a request can be reproduced or perturbed |
| `reverse_discover_signature_candidates` | Compares multiple imported entries and ranks likely signature-related fields | When searching for dynamic auth or signing parameters |
| `reverse_list_findings` | Lists persisted replay and signature findings | When reviewing prior reverse-analysis evidence |
| `reverse_charles_recording_status` | Reports Charles recording state and reverse live-session state | When checking live reverse-analysis readiness |
| `reverse_start_live_analysis` | Starts a reverse-analysis live session and snapshots Charles via official export pages | When reverse work must track fresh traffic incrementally |
| `reverse_peek_live_entries` | Reads new reverse live entries without advancing the reverse cursor | When previewing new traffic before consuming it |
| `reverse_read_live_entries` | Reads and consumes new reverse live entries | When advancing a reverse live-analysis session |
| `reverse_stop_live_analysis` | Stops a reverse live-analysis session and optionally restores recording | When closing a reverse live session cleanly |
| `reverse_analyze_live_login_flow` | Scores new live traffic for login/auth relevance and summarizes next actions | When tracing login or token bootstrap flows |
| `reverse_analyze_live_api_flow` | Scores new live traffic for API-workflow relevance and summarizes next actions | When tracing structured business API traffic |
| `reverse_analyze_live_signature_flow` | Focuses new live traffic on signature-sensitive requests and mutation planning | When targeting signing, nonce, or timestamp defenses |
Key Behavior
1. Raw values are returned by default
This version no longer redacts request or response content:
- summary, detail, live, and history outputs all return raw values
- `include_sensitive` is retained only for compatibility and no longer changes results
2. Summary comes before detail
Use `group_capture_analysis`, `query_live_capture_entries`, or `analyze_recorded_traffic` first, then call `get_traffic_entry_detail` only for a confirmed target.
Do not default to `include_full_body=true` unless there is a clear reason.
3. Output is optimized for token budgets
All summary and detail outputs have been serialized lean:
- Internal fields like `header_map`, `parsed_json`, `parsed_form`, and `lower_name` are excluded from tool output
- `null` values are stripped automatically during serialization
- When `full_text` is present in a detail view, the redundant `preview_text` is removed
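The stripping rules above amount to a recursive filter over the serialized output. A minimal sketch (illustrative, not the server's actual serializer):

```python
# Internal fields hidden from tool output, per the list above.
INTERNAL_FIELDS = {"header_map", "parsed_json", "parsed_form", "lower_name"}

def lean(value):
    """Recursively drop null values and internal fields from output."""
    if isinstance(value, dict):
        return {
            k: lean(v)
            for k, v in value.items()
            if v is not None and k not in INTERNAL_FIELDS
        }
    if isinstance(value, list):
        return [lean(v) for v in value if v is not None]
    return value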
Default parameters have been lowered to protect the context window:
| Parameter | Old default | New default |
|---|---|---|
| `max_items` | 20 | 10 |
| `max_preview_chars` | 256 | 128 |
| `max_headers_per_side` | 8 | 6 |
| `max_body_chars` | 4096 | 2048 |
Higher values can still be passed explicitly when a wider view is needed.
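The character budgets act as simple truncation bounds. A hypothetical helper showing the effect of the lowered `max_preview_chars` default (the function itself is not part of the tool surface):

```python
def clamp_preview(text: str, max_preview_chars: int = 128) -> str:
    """Truncate a preview to the documented default budget,
    marking the cut with an ellipsis."""
    if len(text) <= max_preview_chars:
        return text
    return text[:max_preview_chars] + "…"
```

Passing a larger `max_preview_chars` explicitly widens the view, as the tools allow.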
4. History detail needs stable source identity
History summaries return `recording_path`. Live summaries return `capture_id`.
For `get_traffic_entry_detail`:
- prefer `recording_path` for history
- prefer `capture_id` for live
5. stop_live_capture failures are recoverable
`stop_live_capture` has two stable end states:
- `status="stopped"` means the capture is actually closed
- `status="stop_failed"` means a short retry also failed but the capture is still preserved
When the result is:
```json
{
  "status": "stop_failed",
  "recoverable": true,
  "active_capture_preserved": true
}
```
the capture is still readable and can be diagnosed or stopped again later.
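A client-side policy for the two end states can be sketched as a small decision function. Field names match the documented result shape; the policy itself (and the action labels) are illustrative:

```python
def next_action(result: dict) -> str:
    """Decide what to do after a stop_live_capture result."""
    if result.get("status") == "stopped":
        return "done"
    if result.get("status") == "stop_failed" and result.get("recoverable"):
        # Capture is preserved: safe to keep reading it or retry stop later.
        return "retry_stop_later"
    return "inspect_error"
```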
Development
Run tests:
```bash
python -m pytest -q
```
Useful local checks:
```bash
python charles-mcp-server.py
python -c "from charles_mcp.main import main; main()"
```
Acknowledgments
This project was inspired by tianhetonghua/Charles-mcp-server, and that earlier work deserves explicit credit. It proved that connecting Charles Proxy to MCP for AI-driven traffic analysis was a valid and useful direction.
At the same time, that project was not enough for the problem I wanted to solve, which is why charles-mcp is a complete rewrite from scratch rather than a small fork or patch series. The earlier project is oriented more toward reverse engineering and security workflows, with capabilities centered on harvesting, keyword interlocks, encryption detection, and task-scoped cache management. This repository targets a different job: making Charles usable as a stable, low-token, repeatable MCP server for general-purpose AI agents in clients such as Claude Code, Codex, and Cursor.
The rewrite was driven by concrete gaps I needed to solve:
- a unified model for live capture and history analysis, instead of forcing agents to switch between separate harvesting and filtering mental models
- summary-first, detail-on-demand outputs so agents do not immediately consume large raw dumps and blow up the context window
- stable
capture_id,cursor, andrecording_pathsemantics so repeated queries do not accidentally consume or lose live traffic state - stricter tool contracts, recovery behavior, and protocol consistency for the reliability expectations of the AI agent ecosystem
So this repository is not a cosmetic variation on the earlier one. It is a full rebuild for a different operating model: more structured, more predictable, and better suited to agents that need to reason over live and historical Charles traffic without fighting the tool surface.
Support
If this project helps your work, you can support future maintenance and iteration.
WeChat donation QR
USDT-TRC20
TCudxn9ByCxPZHXLtvqBjFmLWXywBoicRs
Changelog
2026-04-15 (v3.0.2)
- Tightened the default public surface to 31 canonical tools; legacy aliases now live in an explicit compatibility layer and are hidden by default.
- Added compatibility toggles: `expose_legacy_tools` and `CHARLES_EXPOSE_LEGACY_TOOLS` (the function argument overrides the env var).
- Added a docs hub (`docs/README.md`) and legacy migration guide (`docs/migrations/legacy-tools.md`), and unified top-level navigation.
- Updated `docs/contracts/tools.md` to declare canonical public-surface semantics and include a stable parseable JSON section for contract tests.
2026-04-14 (v3.0.1)
- Added and published agent execution guide docs: root-level `AGENTS.md` and `docs/agent-workflows.md`.
- Added entry links to the agent guides in README and `docs/contracts/tools.md` using repository-relative paths.
- Synced minimal guidance hints for high-frequency entry tools and added documentation/tool-semantic contract tests to keep MCP tool descriptions aligned.
2026-04-14 (v3.0.0)
- Introduced the reverse-analysis tool surface for import, query, decode, replay, signature-candidate discovery, and live reverse-analysis workflows. The product scope now goes beyond traffic browsing into reverse-engineering workflows.
- Upgraded live/history structured-analysis behavior: `query_live_capture_entries` remains read-only, summaries expose clearer match reasons, and detail output stays lightweight by default with explicit large-payload warnings.
- Updated documentation to `v3.0` semantics: the new-feature sections in both README files now show explicit version labels.
2026-04-13 (v2.0.2)
- Added GitHub Actions release automation for PyPI publishing via Trusted Publisher (OIDC).
- Added release gating with version/tag verification and `twine check --strict`.
- Improved release visibility and metadata so GitHub Release and PyPI publishing stay aligned.
2026-03-27 (v2.0.1)
- Restricted history snapshot access to managed `.chlsj` files so the server no longer exposes arbitrary local JSON reads through recording-path inputs.
- Fixed live analysis so `scan_limit` is actually honored instead of silently stopping at a small fixed scan window.
- Fixed `request_body_contains` and `response_body_contains` so matching is no longer limited to clipped preview text.
- Moved installed-runtime snapshots and backups to a user state directory instead of writing runtime data into the package install tree.
- Published `2.0.1` with the fixes above and synced the release across GitHub and PyPI.