
MCP-SD reference implementation: Selective Disclosure for MCP with a Server-to-Server data plane


mcp-sd — Reference implementation of MCP-SD

This package is S2SP (Server-to-Server Protocol), the reference implementation of MCP-SD (Selective Disclosure for MCP).

  • MCP-SD is the protocol pattern: an MCP extension that lets an agent select which attributes of a tool result enter the LLM (via an abstract_domains parameter), within a single tool call. The remainder is withheld.
  • S2SP is this reference implementation: MCP-SD plus a dedicated Direct Data Interface (DDI) — an HTTP data plane over which withheld columns flow directly between MCP servers, or through the agent process out-of-band from the LLM.

Overview

When AI agents orchestrate multiple MCP servers, data transferred between servers must currently pass through the agent's context window — wasting tokens, adding latency, and saturating the model's context.

MCP-SD separates the control plane (agent ↔ server via MCP tool calls, carrying only agent-selected columns) from the data plane (the transport by which withheld columns are delivered). The agent sees only the columns it asks for; full data stays on the data plane. S2SP realizes the data plane as a dedicated DDI running over HTTP, either directly server-to-server (async mode) or through the agent process out-of-band (sync mode).

How It Works

MCP-SD treats tool responses as tabular data (like a pandas DataFrame): n rows × w columns. The agent chooses which columns (domains) it needs:

  • Abstract domains (control plane): Columns the agent/LLM reasons over — event type, severity, status, etc.
  • Body domains (data plane): Remaining columns — full descriptions, parameters, raw payloads, etc. These never enter the LLM context.
┌─────────────────────────────────────────────────────────────────┐
│  MCP Tool returns 30 columns × 100 rows                         │
│                                                                 │
│  Agent requests: abstract_domains="event,severity,status"       │
│                                                                 │
│  ┌──────────────────┐     ┌────────────────────────────────┐    │
│  │ Control Plane    │     │ Data Plane                     │    │
│  │ (→ LLM context)  │     │ (cached on server)             │    │
│  │                  │     │                                │    │
│  │ _row_id, event,  │     │ _row_id, description,          │    │
│  │ severity, status │     │ instruction, parameters, ...   │    │
│  │ (small abstract) │     │ (withheld full columns)        │    │
│  └──────────────────┘     └────────────────────────────────┘    │
│                                                                 │
│  Agent filters → picks row IDs → tells another server to fetch  │
│  from the data plane directly (server-to-server, no LLM)        │
└─────────────────────────────────────────────────────────────────┘
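The split pictured above can be sketched in plain Python. This is a hypothetical helper for illustration only — the real package performs this partition inside its decorators:

```python
def split_domains(rows, abstract_domains):
    """Partition rows into a control-plane (abstract) view and a
    withheld data-plane (body) view, both keyed by _row_id.

    Illustrative sketch of the MCP-SD split, not the package's code.
    """
    abstract_rows, body_rows = [], []
    for i, row in enumerate(rows):
        abstract_rows.append(
            {"_row_id": i, **{k: v for k, v in row.items() if k in abstract_domains}}
        )
        body_rows.append(
            {"_row_id": i, **{k: v for k, v in row.items() if k not in abstract_domains}}
        )
    return abstract_rows, body_rows

alerts = [{"event": "Wind Advisory", "severity": "Moderate", "description": "..."}]
abstract, body = split_domains(alerts, {"event", "severity"})
# abstract → [{"_row_id": 0, "event": "Wind Advisory", "severity": "Moderate"}]
# body     → [{"_row_id": 0, "description": "..."}]
```

Only the abstract rows would enter the LLM context; the body rows stay behind on the data plane.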

Installation

pip install -e .

Optional extras:

pip install -e ".[dev]"            # tests
pip install -e ".[demos]"          # matplotlib + anthropic + openai for demos
pip install -e ".[claude-agent]"   # agent-side adapter for Claude Agent SDK
pip install -e ".[langgraph]"      # agent-side adapter for LangGraph
pip install -e ".[agents]"         # all agent adapters

Agent-side integrations

Sync mode requires the agent itself to split tool responses so body rows never reach the LLM. This package ships platform-agnostic primitives (DDIBuffer, S2SPDispatcher) plus thin adapters for mainstream agent frameworks under mcp_sd.agent:

from mcp_sd.agent import S2SPDispatcher

# Platform-agnostic: use with any agent loop
dispatcher = S2SPDispatcher()
rewritten = dispatcher.on_tool_result("get_alerts", raw_response)  # hides body
resolved = dispatcher.on_tool_call("draw_chart", args)             # injects body

# Claude Agent SDK
from mcp_sd.agent.adapters.claude_agent_sdk import wrap_tool

# LangGraph
from mcp_sd.agent.adapters.langgraph import make_sd_tool_node

Both adapters share a single S2SPDispatcher instance across all tools in a session. In async mode the dispatcher passes through untouched; in sync mode it stashes body rows in an in-process DDI buffer keyed by opaque ddi://... handles and resolves them when the agent calls a consumer tool. The LLM sees the same short handle regardless of mode, so both flows stay transparent to the model.
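The sync-mode hide/inject cycle can be pictured with a toy buffer. This standalone sketch mimics the behavior described above; the class, handle format, and method names are simplified stand-ins, not the package's actual DDIBuffer internals:

```python
import itertools

class ToyDDIBuffer:
    """In-process stash mapping opaque ddi:// handles to withheld body rows."""
    def __init__(self):
        self._store = {}
        self._ids = itertools.count()

    def stash(self, body_rows):
        handle = f"ddi://local/{next(self._ids)}"
        self._store[handle] = body_rows
        return handle

    def resolve(self, handle):
        return self._store.pop(handle)

buf = ToyDDIBuffer()

# on_tool_result: body rows are replaced by a short handle before the LLM sees them
handle = buf.stash([{"_row_id": 0, "description": "long raw payload ..."}])
llm_visible = {"rows": [{"_row_id": 0, "event": "Wind Advisory"}],
               "resource_url": handle}

# on_tool_call: the consumer tool's args carry the handle; the body is injected back
body = buf.resolve(llm_visible["resource_url"])
```

The LLM only ever sees the short `ddi://...` string, which is what keeps sync and async modes indistinguishable from the model's point of view.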

Quick Start

S2SP has two types of MCP tools:

|            | Resource tool                                  | Consumer tool                                          |
| ---------- | ---------------------------------------------- | ------------------------------------------------------ |
| Decorator  | @server.sd_resource_tool()                     | @server.sd_consumer_tool()                             |
| You write  | async def get_data(...) -> list[dict]          | async def process(rows: list[dict]) -> str             |
| S2SP adds  | abstract_domains, mode, _row_id, resource_url  | abstract_data, resource_url, body_data, column_mapping |
| Data plane | Caches body, serves via /s2sp/data/            | Fetches + remaps + merges automatically                |

Resource Tool (@server.sd_resource_tool())

Use this for tools that return tabular data. S2SP automatically adds abstract_domains and mode parameters so the agent can choose which columns it sees.

from mcp_sd import S2SPServer

server = S2SPServer("weather-server")

@server.sd_resource_tool()                          # ← S2SP decorator
async def get_alerts(area: str) -> list[dict]:
    """Get weather alerts — returns ~30 columns per alert."""
    data = await fetch_from_nws(area)
    return [feature["properties"] for feature in data["features"]]

server.run()  # MCP Inspector compatible: mcp dev weather_server.py

Agent Calls the Tool

# Standard mode — all columns returned (traditional MCP)
get_alerts(area="CA")

# S2SP mode — agent chooses which columns it needs
get_alerts(area="CA", abstract_domains="event,severity,urgency,status")
# → Only those columns + _row_id returned on the control plane
# → Full data cached on server behind resource_url
# → Agent filters, passes abstract rows + resource_url to consumer
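Concretely, after an abstract_domains call the agent can filter on the small control-plane rows alone. The rows below are illustrative; the actual NWS alert columns differ:

```python
# Control-plane rows as the agent might receive them (example data)
abstract_rows = [
    {"_row_id": 0, "event": "Wind Advisory",     "severity": "Moderate", "status": "Actual"},
    {"_row_id": 1, "event": "Heat Advisory",     "severity": "Minor",    "status": "Actual"},
    {"_row_id": 2, "event": "High Wind Warning", "severity": "Severe",   "status": "Actual"},
]

# The LLM reasons only over these columns; body columns never entered context.
severe_ids = [r["_row_id"] for r in abstract_rows if r["severity"] == "Severe"]
# severe_ids → [2]
```

The agent then passes the selected rows plus the resource_url to a consumer tool, which pulls the matching body rows over the data plane.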

Consumer Tool (@server.sd_consumer_tool())

The consumer decorator handles all S2SP plumbing — your function just receives merged rows:

from mcp_sd import S2SPServer

server = S2SPServer("stats-server")

@server.sd_consumer_tool()                 # ← S2SP consumer decorator
async def draw_chart(rows: list[dict]) -> str:
    """Draw a chart from merged rows (abstract + body)."""
    return generate_chart(rows)

server.run()

The decorator automatically adds abstract_data, resource_url, body_data, and column_mapping parameters to the MCP tool. It calls DirectChannel.resolve() internally to parse, fetch, remap, and merge — then passes the result to your function.
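The fetch-and-merge step amounts to a join on _row_id. A minimal sketch of what the resolve step conceptually does, with the HTTP fetch omitted (this is an illustration, not the package's implementation):

```python
def merge_rows(abstract_rows, body_rows):
    """Join abstract (control-plane) rows with fetched body rows on _row_id."""
    body_by_id = {row["_row_id"]: row for row in body_rows}
    merged = []
    for row in abstract_rows:
        full = {**row, **body_by_id.get(row["_row_id"], {})}
        full.pop("_row_id", None)  # internal join key, not part of user data
        merged.append(full)
    return merged

merged = merge_rows(
    [{"_row_id": 2, "event": "High Wind Warning"}],
    [{"_row_id": 2, "description": "Gusts to 60 mph"}],
)
# merged → [{"event": "High Wind Warning", "description": "Gusts to 60 mph"}]
```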

Column Mapping (optional)

When the consumer uses different column names than the resource server:

draw_chart(
    abstract_data=...,
    resource_url=...,
    column_mapping='{"event": "alert_type", "areaDesc": "location"}'
)
# Resource returns: {"event": "Wind Advisory", "areaDesc": "LA"}
# Consumer sees:    {"alert_type": "Wind Advisory", "location": "LA"}
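Applying such a mapping is a plain key rename over each fetched row. The helper below is hypothetical, mirroring the behavior described above:

```python
import json

def apply_column_mapping(rows, column_mapping):
    """Rename resource-server column names to the consumer's expected names.

    Keys absent from the mapping pass through unchanged.
    """
    mapping = json.loads(column_mapping)
    return [{mapping.get(k, k): v for k, v in row.items()} for row in rows]

rows = [{"event": "Wind Advisory", "areaDesc": "LA", "severity": "Moderate"}]
out = apply_column_mapping(rows, '{"event": "alert_type", "areaDesc": "location"}')
# out → [{"alert_type": "Wind Advisory", "location": "LA", "severity": "Moderate"}]
```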

Running Tests

pytest tests/

Running Demos

Interactive Agent (recommended)

pip install -e ".[demos]"
export ANTHROPIC_API_KEY=sk-ant-...
python demos/weather_agent/agent_async.py   # or agent_sync.py

Then ask:

  • "Show me weather alerts for California"
  • "Filter for wind advisories"
  • "Generate a chart of the severe alerts"

Scripted Demos

# Async mode — body stays on source server, fetched via data plane
python demos/weather_agent/run_async.py [--area CA] [--event Wind]

# Sync mode — body returned inline, no data-plane fetch
python demos/weather_agent/run_sync.py [--area CA] [--event Wind]

Debug with MCP Inspector

pip install -e ".[inspector]"
mcp dev demos/weather_agent/weather_server.py
mcp dev demos/weather_agent/stats_server.py

Protocol Design

See the project website for the full protocol documentation.
