
MCP relay server for Claude Desktop <-> Claude Code communication


Relay

A wonderfully simple tool that moves information—files, code, data, comments, directions, etc.—between Claude Desktop and Claude Code while keeping their main context windows separate.

Both clients connect to a shared buffer via MCP. Just say "ask Desktop" or "send this to Code"—the models handle the rest. Models will also automatically send messages if they determine they need information the other model has. System notifications alert you when something's waiting on the other side.

Why? Desktop and Code have different strengths. Desktop is better for conversation—planning, brainstorming, reviewing, iterating on prose. Code is better for execution—editing files, running commands, working through errors. But they don't share context. If you draft something in Desktop and want Code to implement it, or you want Desktop's opinion on code you're writing, you're copy-pasting between apps.

Relay connects them so you don't have to shuttle questions and answers between windows by hand. This is basic functionality that should have been integrated by Anthropic.

You're always in control. Messages sit in the buffer until you explicitly fetch them. The models can't pull from the relay on their own—you have to type relay or /relay. This means there's no runaway inter-LLM conversation burning through your tokens. You see a notification, you decide whether to check it.

Quick Example

[In Desktop]
You:     Send this error to Code: "TypeError: cannot unpack non-iterable NoneType"
Desktop: [sends via relay]

[notification pops up in Code's terminal]

[In Code]
You:     /relay
Code:    Got it. That's from line 47 in parser.py—the regex isn't matching.
         [fixes the bug]

Extended Example

[In Code]
You:     My classifier's stuck at 72% accuracy. Send the
         training config and last epoch's metrics to Desktop.
Code:    [sends config.yaml and metrics via relay]

[notification pops up in Desktop]

[In Desktop]
You:     relay
Desktop: [fetches config and metrics]
         The loss curve suggests you're underfitting. Learning
         rate 1e-5 is conservative for a model this size...

You:     What would you try?
Desktop: Bump to 3e-5, add warmup steps, and your batch size
         of 8 is small—if memory allows, try 16. Also the
         class imbalance is brutal: 80/15/5 split. Consider
         weighted loss or oversampling the minority classes.

You:     Specify the changes.
Desktop: [sends updated config via relay]

[notification pops up in Code's terminal]

[In Code]
You:     /relay
Code:    Got it. Updating config.yaml and adding class weights
         to the loss function.
         [edits files, starts training run]

         Epoch 12: accuracy up to 79%, but precision on
         class 2 dropped. Want Desktop's take?

You:     Yeah, send the confusion matrix.
Code:    [sends via relay]

[In Desktop]
You:     relay
Desktop: Class 2 is getting confused with class 0—they may be
         semantically close. I need more examples.
         [automatically sends request to Code via relay]

Usage

Type relay in Desktop or /relay in Code to check for messages from the other side. That's the primary interaction.

Sending is usually implicit. When you say "Ask Desktop if this looks right" or "Send the README to Code," the models recognize the intent and call the relay automatically. Models may also send messages on their own if they decide they need input from the other side. Explicit send syntax exists—relay: <message> in Desktop, /relay <message> in Code—but you'll rarely need it.

Notifications

When a message arrives, you'll get a system notification so you know to check the other side. No need to poll manually.

Platform | Method           | Notes
-------- | ---------------- | -----
macOS    | osascript        | Native notification center
Linux    | notify-send      | Requires libnotify
Windows  | PowerShell toast | Native toast notifications

Notification duration and behavior are controlled by your OS settings, not the script.
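Conceptually, the dispatch in the table above amounts to choosing one shell command per platform. Here's a minimal sketch; the function name and the exact PowerShell one-liner are illustrative assumptions, not the script's actual internals:

```python
import sys

def notify_command(title, body, platform=None):
    """Build the per-platform notification command (hypothetical helper)."""
    platform = platform or sys.platform
    if platform == "darwin":
        # macOS: osascript drives the native notification center
        script = f'display notification "{body}" with title "{title}"'
        return ["osascript", "-e", script]
    if platform.startswith("linux"):
        # Linux: notify-send, provided by libnotify
        return ["notify-send", title, body]
    if platform.startswith("win"):
        # Windows: a PowerShell toast; real toast scripts are more involved
        ps = f"New-BurntToastNotification -Text '{title}', '{body}'"
        return ["powershell", "-NoProfile", "-Command", ps]
    return None  # unsupported platform: stay silent

cmd = notify_command("Relay", "New message from Desktop")
# e.g. on Linux: ['notify-send', 'Relay', 'New message from Desktop']
```

The returned list would then be passed to something like `subprocess.run(cmd, check=False)` so a failed notification never interrupts the relay itself.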

Setup

Claude Desktop

Add the relay server to your Claude Desktop config:

Platform | Config location
-------- | ---------------
macOS    | ~/Library/Application Support/Claude/claude_desktop_config.json
Linux    | ~/.config/Claude/claude_desktop_config.json
Windows  | %APPDATA%\Claude\claude_desktop_config.json

Add this to the mcpServers section:

{
  "mcpServers": {
    "relay": {
      "command": "uvx",
      "args": ["mcp-server-relay", "--client", "desktop"]
    }
  }
}

Then add these memories to Desktop (paste each one):

  1. "Remember: when I say 'relay:' followed by text, send that text to the relay with sender 'desktop' using relay_send."

  2. "Remember: when I say 'relay' by itself, fetch recent messages from the relay."

Claude Code

  1. Install the /relay slash command (one-time setup):

     uvx mcp-server-relay --setup-code

  2. Add the MCP server to your project's .mcp.json:

{
  "mcpServers": {
    "relay": {
      "command": "uvx",
      "args": ["mcp-server-relay", "--client", "code"]
    }
  }
}

Note: After adding the MCP server config, restart both Claude Desktop and Claude Code for the relay to connect.

Design Notes

The relay is global. The buffer at ~/.relay_buffer.db is shared across all projects. Claude Desktop has no concept of which project you're working on—it's a general-purpose chat interface—so per-project isolation isn't practical. This is intentional: one user, one machine, one relay.

If you switch projects in Code, the relay comes with you. Old messages from the previous project may still be there; use relay_clear() or /relay clear if you want a fresh start. If you want separate conversations in Desktop for different projects, just start a new chat there.

Large files are slow. For messages a page or two in length, the relay is fast. For large files, it's faster to drag them directly into the interface you want. You can still send accompanying context via relay.

Tools

Tool | Description
---- | -----------
relay_send(message, sender) | Send a message (sender: "desktop" or "code")
relay_fetch(limit, reader, unread_only) | Fetch recent messages, optionally mark as read
relay_clear() | Delete all messages from the buffer
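These tool semantics map naturally onto a single SQLite table. A hedged sketch of how send/fetch/clear might behave, using an in-memory database; the schema and column names are assumptions, not the published implementation:

```python
import sqlite3

# Assumed schema; the real buffer at ~/.relay_buffer.db may differ.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE messages (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    sender TEXT NOT NULL,       -- 'desktop' or 'code'
    body TEXT NOT NULL,
    read_by TEXT DEFAULT ''     -- comma-separated list of readers
)""")

def relay_send(message, sender):
    conn.execute("INSERT INTO messages (sender, body) VALUES (?, ?)",
                 (sender, message))

def relay_fetch(limit=10, reader="code", unread_only=True):
    rows = conn.execute(
        "SELECT id, sender, body, read_by FROM messages "
        "ORDER BY id DESC LIMIT ?", (limit,)).fetchall()
    out = []
    for mid, sender, body, read_by in rows:
        if unread_only and reader in read_by.split(","):
            continue  # this reader has already seen it
        conn.execute("UPDATE messages SET read_by = read_by || ',' || ? "
                     "WHERE id = ?", (reader, mid))
        out.append((sender, body))
    return out

def relay_clear():
    conn.execute("DELETE FROM messages")

relay_send("TypeError: cannot unpack non-iterable NoneType", "desktop")
print(relay_fetch(reader="code"))  # message visible to Code once
print(relay_fetch(reader="code"))  # already marked read -> []
```

Tracking reads per reader, rather than deleting on fetch, is what lets both clients see the same message exactly once each.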

Technical Details

  • Buffer: SQLite at ~/.relay_buffer.db
  • Rolling window: 20 messages max (oldest evicted first)
  • Message limit: 64 KB per message
  • Idle timeout: 1 hour (server exits automatically when inactive)
  • Transport: stdio (standard MCP)
  • Python: 3.9+
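The rolling window and per-message cap are cheap to enforce at insert time. A sketch of that eviction logic under an assumed SQLite schema (names are illustrative):

```python
import sqlite3

MAX_MESSAGES = 20       # rolling window
MAX_BYTES = 64 * 1024   # per-message cap

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages "
             "(id INTEGER PRIMARY KEY AUTOINCREMENT, sender TEXT, body TEXT)")

def buffered_send(message, sender):
    # Reject oversized messages before they hit the buffer
    if len(message.encode("utf-8")) > MAX_BYTES:
        raise ValueError("message exceeds 64 KB limit")
    conn.execute("INSERT INTO messages (sender, body) VALUES (?, ?)",
                 (sender, message))
    # Evict the oldest rows beyond the 20-message window
    conn.execute(
        "DELETE FROM messages WHERE id NOT IN "
        "(SELECT id FROM messages ORDER BY id DESC LIMIT ?)",
        (MAX_MESSAGES,))

for i in range(25):
    buffered_send(f"msg {i}", "code")

count = conn.execute("SELECT COUNT(*) FROM messages").fetchone()[0]
print(count)  # 20 -- the oldest five were evicted
```

Evicting on every insert keeps the buffer bounded without a background sweep, which fits a server that exits after an hour of inactivity anyway.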

Author

Michael Coen — mhcoen@alum.mit.edu · mhcoen@gmail.com
