Shared Context MCP Server for multi-agent collaboration
Content Navigation
| Symbol | Meaning | Time Investment |
|---|---|---|
| 🚀 | Quick start | 2-5 minutes |
| ⚙️ | Configuration | 10-15 minutes |
| 🧠 | Deep dive | 30+ minutes |
| 💡 | Why this works | Context only |
| ⚠️ | Important note | Read carefully |
🎯 Quick Understanding (30 seconds)
A shared workspace for AI agents to collaborate on complex tasks.
The Problem: AI agents work independently, duplicate research, and can't build on each other's discoveries.
The Solution: Shared sessions where agents see previous findings and build incrementally instead of starting over.
# Agent 1: Security analysis
session.add_message("security_agent", "Found SQL injection in user login")
# Agent 2: Performance review (sees security findings)
session.add_message("perf_agent", "Optimized query while fixing SQL injection")
# Agent 3: Documentation (has full context)
session.add_message("docs_agent", "Documented secure, optimized login implementation")
Each agent builds on previous work instead of starting over.
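The collaboration model above can be sketched as a toy data structure. This is a hypothetical in-memory stand-in, not the server's actual API: later agents read everything posted before them instead of starting from an empty slate.

```python
# Hypothetical sketch of the shared-session idea (not the server's real API):
# all agents append to one message log, so each new agent inherits context.
from dataclasses import dataclass, field

@dataclass
class SharedSession:
    purpose: str
    messages: list = field(default_factory=list)

    def add_message(self, sender: str, content: str) -> None:
        # Every message lands in the same shared log.
        self.messages.append({"sender": sender, "content": content})

    def context_for(self, agent: str) -> list:
        # A joining agent sees all prior messages, not an empty slate.
        return list(self.messages)

session = SharedSession(purpose="code review")
session.add_message("security_agent", "Found SQL injection in user login")
session.add_message("perf_agent", "Optimized query while fixing SQL injection")
print(len(session.context_for("docs_agent")))  # the docs agent sees both earlier findings
```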
💡 Uses MCP Protocol: Model Context Protocol - the standard for AI agent communication (works with Claude Code, Gemini, VS Code, Cursor, and frameworks like CrewAI).
🚀 Try It Now (2 minutes)
⚠️ Important: Choose Your Deployment Method
Docker (Recommended for Multi-Client Collaboration):
- ✅ Shared context across all MCP clients (Claude Code + Cursor + Windsurf)
- ✅ Persistent service - single server instance on port 23456
- ✅ True multi-agent collaboration - agents share sessions and memory
- 🎯 Use when: You want multiple tools to collaborate on the same tasks
uvx (Quick Trial & Testing Only):
- ⚠️ Isolated per-client - each MCP client gets its own separate instance
- ⚠️ No shared context - Claude Code and Cursor can't see each other's work
- ✅ Quick testing - perfect for trying features without Docker setup
- 🎯 Use when: Quick feature testing or learning the MCP tools in isolation
# 🐳 Docker: Multi-client shared collaboration (RECOMMENDED)
docker run -d -p 23456:23456 ghcr.io/leoric-crown/shared-context-server:latest
# 📦 uvx: Isolated single-client testing only
uvx shared-context-server --help
💡 TL;DR: Use Docker for real multi-agent work, uvx for quick testing only.
Prerequisites Check (30 seconds)
Choose your path:
- ✅ Docker (recommended): `docker --version` works
- ✅ uvx Trial: `uvx --version` works (testing only)
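If you want to script this prerequisites check, here is a stdlib-only Python sketch (the tool names come from the options above):

```python
# Check whether the required CLI tools are on your PATH.
import shutil

def tool_available(name: str) -> bool:
    # shutil.which returns the executable's path, or None if not found.
    return shutil.which(name) is not None

for tool in ("docker", "uvx"):
    status = "found" if tool_available(tool) else "missing"
    print(f"{tool}: {status}")
```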
Environment Configuration Templates
Choose your .env template (for local development):
# 🚀 Quick Start (recommended) - Essential variables only
cp .env.minimal .env
# 🔧 Full Development - All development features
cp .env.example .env
# 🐳 Docker Deployment - Container-optimized paths
cp .env.docker .env
💡 Most users want .env.minimal - it contains only the 12 essential variables you actually need.
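If you don't have openssl handy, the three secrets used later in this README can be generated with a stdlib-only Python sketch (a Fernet key is simply 32 random bytes, URL-safe base64-encoded):

```python
# Generate the three secrets the server expects, using only the stdlib.
import base64
import os

def random_b64(n: int = 32) -> str:
    # Equivalent to `openssl rand -base64 32`.
    return base64.b64encode(os.urandom(n)).decode()

def fernet_key() -> str:
    # Same format as cryptography's Fernet.generate_key():
    # 32 random bytes, URL-safe base64.
    return base64.urlsafe_b64encode(os.urandom(32)).decode()

env = {
    "API_KEY": random_b64(),
    "JWT_SECRET_KEY": random_b64(),
    "JWT_ENCRYPTION_KEY": fernet_key(),
}
print("\n".join(f"{k}={v}" for k, v in env.items()))
```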
Step 1: Start Server
Option A: Docker (recommended)
# Quick start with make command (uses GHCR image)
git clone https://github.com/leoric-crown/shared-context-server.git
cd shared-context-server
cp .env.minimal .env
# Edit .env with your secure keys (see Step 2 below)
make docker
# ⚠️ This will show live logs - press Ctrl+C to exit log mode and continue
# OR manual Docker run:
API_KEY=$(openssl rand -base64 32)
echo "Your API key: $API_KEY"
docker run -d --name shared-context-server -p 23456:23456 \
-e API_KEY="$API_KEY" \
-e JWT_SECRET_KEY="$(openssl rand -base64 32)" \
-e JWT_ENCRYPTION_KEY="$(python -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())')" \
ghcr.io/leoric-crown/shared-context-server:latest
Option B: uvx Trial (Isolated Testing Only)
# 📦 Quick trial - each MCP client gets its own isolated instance
uvx shared-context-server --version # Test installation
# Start server for single-client testing
uvx shared-context-server --transport http --host localhost --port 23456
# Each `uvx shared-context-server` call creates a NEW isolated instance
# ⚠️ IMPORTANT: This creates isolated servers per MCP client
# - Claude Code → gets its own database and sessions
# - Cursor → gets its own separate database and sessions
# - Windsurf → gets its own separate database and sessions
# = NO shared context between tools
Option C: Local Development (Clone & Build)
# Clone and setup for development
git clone https://github.com/leoric-crown/shared-context-server.git
cd shared-context-server
uv sync
# Generate and save your API key
API_KEY=$(openssl rand -base64 32)
JWT_SECRET_KEY=$(openssl rand -base64 32)
JWT_ENCRYPTION_KEY=$(python -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())')
# Start shared HTTP server (like Docker)
API_KEY="$API_KEY" JWT_SECRET_KEY="$JWT_SECRET_KEY" JWT_ENCRYPTION_KEY="$JWT_ENCRYPTION_KEY" \
uv run python -m shared_context_server.scripts.cli --transport http
echo "Your API key: $API_KEY"
Step 2: Create .env File (Optional - for local development)
# Create .env file with your keys
cat > .env << EOF
API_KEY=$API_KEY
JWT_SECRET_KEY=$(openssl rand -base64 32)
JWT_ENCRYPTION_KEY=$(python -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())')
EOF
# Run with .env file
docker run -d --name shared-context-server -p 23456:23456 \
--env-file .env ghcr.io/leoric-crown/shared-context-server:latest
PyPI Installation (Alternative to Docker)
The shared-context-server is also available on PyPI for quick testing:
# 📦 Install and try (creates isolated instances per client)
uvx shared-context-server --help
uvx shared-context-server --version
# ⚠️ For multi-client collaboration, use Docker instead
💡 When to use PyPI/uvx: Quick feature testing, learning MCP tools, single-client workflows only.
Step 3: Connect Your MCP Client
Replace YOUR_API_KEY_HERE with the key from Step 1:
# Claude Code (simple HTTP transport)
claude mcp add --transport http scs http://localhost:23456/mcp/ \
--header "X-API-Key: YOUR_API_KEY_HERE"
# Gemini CLI
gemini mcp add scs http://localhost:23456/mcp -t http -H "X-API-Key: YOUR_API_KEY_HERE"
# Test connection
claude mcp list # Should show: ✅ Connected
VS Code Configuration
Add to your existing .vscode/mcp.json (create if it doesn't exist):
{
  "servers": {
    "shared-context-server": {
      "type": "http",
      "url": "http://localhost:23456/mcp",
      "headers": {"X-API-Key": "YOUR_API_KEY_HERE"}
    }
  }
}
Cursor Configuration
Add to your existing .cursor/mcp.json (create if it doesn't exist):
{
  "mcpServers": {
    "shared-context-server": {
      "command": "mcp-proxy",
      "args": ["--transport=streamablehttp", "http://localhost:23456/mcp/", "--headers", "X-API-Key", "YOUR_API_KEY_HERE"]
    }
  }
}
Claude Desktop Configuration
Add to your existing claude_desktop_config.json:
On macOS, you may need to provide the explicit path to mcp-proxy. This setup has not been tested on Windows.
{
  "scs": {
    "command": "/Users/YOUR_USER/.local/bin/mcp-proxy",
    "args": ["--transport=streamablehttp", "http://localhost:23456/mcp/", "--headers", "X-API-Key", "YOUR_API_KEY_HERE"]
  }
}
Step 4: Verify & Monitor
Note: If you used make docker, press Ctrl+C to exit the log viewer first, then run these commands in the same terminal.
# Test your setup (30 seconds)
# Method 1: Quick health check
curl http://localhost:23456/health
# Method 2: Create actual test session (see it in web UI!)
# If you have Claude Code with shared-context-server MCP tools:
# Run this in Claude: Create a session with purpose "README test setup"
# Expected: {"success": true, "session_id": "session_...", ...}
# Method 3: Test MCP tools with parameters
npx @modelcontextprotocol/inspector --cli \
-e API_KEY=$API_KEY \
-- uv run python -m shared_context_server.scripts.cli \
--method tools/call \
--tool-name get_usage_guidance
# Expected: {"success": true, "access_level": "READ_ONLY", ...} (proves MCP tools work)
# View the dashboard
open http://localhost:23456/ui/ # Real-time session monitoring
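The curl health check can also be scripted. This hedged Python sketch returns the parsed /health JSON, or None when the server isn't reachable:

```python
# Query the server's /health endpoint; None means down or unreachable.
import json
import urllib.request

def check_health(base_url: str = "http://localhost:23456", timeout: float = 2.0):
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return json.loads(resp.read().decode())
    except (OSError, ValueError):
        # OSError covers connection/URL errors; ValueError covers bad JSON.
        return None

if __name__ == "__main__":
    print(check_health() or "server not reachable on port 23456")
```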
✅ Success indicators:
- Health endpoint returns {"status": "healthy", ...}
- Dashboard loads at http://localhost:23456/ui/ and shows active sessions
- MCP Inspector returns a successful tool response (proves the MCP protocol is working)
- MCP client shows ✅ Connected status
Web Dashboard (MVP)
Real-time monitoring interface for agent collaboration:
- Live session overview with active agent counts
- Real-time message streaming without page refreshes
- Session isolation visualization to track multi-agent workflows
- Performance monitoring for collaboration efficiency
💡 Perfect for: Monitoring agent handoffs, debugging collaboration flows, and demonstrating multi-agent coordination to stakeholders.
🧠 Choose Your Path
Are you...
├── 👨‍💻 Building a side project?
│   → [Simple Integration](#-simple-integration) (5 minutes)
│
├── 🏢 Planning enterprise deployment?
│   → [Enterprise Setup](#-enterprise-considerations) (15+ minutes)
│
├── 🔬 Researching multi-agent systems?
│   → [Technical Deep Dive](#-technical-architecture) (30+ minutes)
│
└── 🤔 Just evaluating the concept?
    → [Framework Integration Examples](#-framework-examples) (5 minutes)
Simple Integration
Works with existing tools you already use:
Direct MCP Integration (Tested)
# Via Claude Code or any MCP client
claude mcp add-json shared-context-server '{"command": "mcp-proxy", "args": ["--transport=streamablehttp", "http://localhost:23456/mcp/"]}'
# Direct MCP usage (use proper MCP client in production)
# Example shows concept - use mcp-proxy or MCP client libraries
import asyncio
from mcp_client import MCPClient # Conceptual - use actual MCP client
async def create_session():
    client = MCPClient("http://localhost:23456/mcp/")
    return await client.call_tool("create_session", {"purpose": "agent collaboration"})
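For reference, an MCP tool call over streamable HTTP is a JSON-RPC 2.0 request under the hood. This sketch only builds the request body for create_session (the tool name and purpose argument come from this README); actually sending it also requires the MCP initialize handshake and your X-API-Key header, so prefer a real MCP client:

```python
# Build the JSON-RPC 2.0 body that an MCP client sends for a tool call.
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> dict:
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

body = build_tool_call("create_session", {"purpose": "agent collaboration"})
print(json.dumps(body, indent=2))
# POST this to http://localhost:23456/mcp/ with your X-API-Key header
# (after the initialize handshake) - or, more simply, use an MCP client.
```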
⚠️ Framework Integration Status: Direct MCP protocol tested. CrewAI, AutoGen, and LangChain integrations are conceptual - we welcome community contributions to develop and test these patterns.
➡️ Next: MCP Integration Examples
⚙️ Framework Examples
Code Review Pipeline
- Security Agent finds vulnerabilities → shares findings
- Performance Agent builds on security context → optimizes safely
- Documentation Agent documents complete solution
💡 Why this works: Each agent builds on discoveries instead of duplicating work.
Research & Implementation
- Research Agent gathers requirements → shares insights
- Architecture Agent designs using research → documents decisions
- Developer Agent implements with full context
More examples: Collaborative Workflows Guide
What works: ✅ MCP clients (Claude Code, Gemini, VS Code, Cursor)
What's conceptual: Framework patterns (CrewAI, AutoGen, LangChain) - community contributions welcome
🧠 What This Is / What This Isn't
✅ What this MCP server provides
- Real-time collaboration substrate for multi-agent workflows
- Session isolation with clean boundaries between different tasks
- MCP protocol compliance that works with any MCP-compatible agent framework
- Infrastructure layer that enhances existing orchestration tools
💡 Why MCP protocol? Universal compatibility - works with Claude Code, CrewAI, AutoGen, LangChain, and custom frameworks without vendor lock-in.
❌ What this MCP server isn't
- Not a vector database - Use Pinecone, Milvus, or Chroma for long-term storage
- Not an orchestration platform - Use CrewAI, AutoGen, or LangChain for task management
- Not for permanent memory - Sessions are for active collaboration, not archival
💡 Why this approach? We enhance your existing tools rather than replacing them - no need to rewrite your agent workflows.
🏢 Enterprise Considerations
⚙️ Production Setup & Scaling
Development → Production Path
Development (SQLite)
- ✅ Zero configuration
- ✅ Perfect for prototyping
- ❌ Limited to ~5 concurrent agents
Production (PostgreSQL)
- ✅ High concurrency (20+ agents)
- ✅ Enterprise backup/recovery
- ❌ Requires database management
Enterprise Features Roadmap
- SSO Integration: SAML/OIDC support planned
- Audit Logging: Enhanced compliance logging
- High Availability: Multi-node deployment
- Advanced RBAC: Attribute-based permissions
Migration: Start with SQLite, migrate when you hit concurrency limits.
🧠 Security & Compliance
Current Security Features
- JWT Authentication: Role-based access control
- Input Sanitization: XSS and injection prevention
- Secure Token Management: Prevents JWT exposure vulnerabilities
- Message Visibility: Public/private/agent-only filtering
Enterprise Security Roadmap
- SSO Integration: SAML, OIDC, Active Directory
- Audit Trails: SOX, HIPAA-compliant logging
- Data Governance: Retention policies, geographic residency
- Advanced Encryption: At-rest and in-transit encryption
🧠 Technical Architecture
Deployment Architecture: Docker vs uvx
Docker Deployment (Multi-Client Shared Context)
┌──────────────────┐      ┌───────────────────────┐
│ Claude Code      │─────▶│                       │
├──────────────────┤      │  Shared HTTP Server   │
│ Cursor           │─────▶│  (port 23456)         │
├──────────────────┤      │                       │
│ Windsurf         │─────▶│  • Single database    │
└──────────────────┘      │  • Shared sessions    │
                          │  • Cross-tool memory  │
                          └───────────────────────┘
✅ Enables: True multi-agent collaboration, session sharing, persistent context
uvx Deployment (Isolated Per-Client)
┌──────────────────┐      ┌───────────────────┐
│ Claude Code      │─────▶│ Isolated Server   │
└──────────────────┘      │ + Database #1     │
                          └───────────────────┘
┌──────────────────┐      ┌───────────────────┐
│ Cursor           │─────▶│ Isolated Server   │
└──────────────────┘      │ + Database #2     │
                          └───────────────────┘
┌──────────────────┐      ┌───────────────────┐
│ Windsurf         │─────▶│ Isolated Server   │
└──────────────────┘      │ + Database #3     │
                          └───────────────────┘
⚠️ Limitation: No cross-tool collaboration, separate contexts, testing only
💡 Key Insight: Docker provides the "shared" in shared-context-server, while uvx creates isolated silos.
Core Design Principles
Session-Based Isolation
What: Each collaborative task gets its own workspace
Why: Prevents cross-contamination while enabling rich collaboration within teams
Message Visibility Controls
What: Four-tier system (public/private/agent-only/admin-only)
Why: Granular information sharing - agents can have private working memory and shared discoveries
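A hypothetical sketch of that four-tier filter (the field names and the exact semantics of private vs. agent-only are assumptions here, not the server's documented behavior):

```python
# Toy visibility filter over the four tiers named in this README.
def visible_to(message: dict, viewer: str, is_admin: bool = False) -> bool:
    v = message["visibility"]
    if v == "public":
        return True
    if v == "admin_only":
        return is_admin
    # Assumed semantics: "private" and "agent_only" messages are readable
    # only by the agent that wrote them.
    return message["sender"] == viewer

messages = [
    {"sender": "security_agent", "visibility": "public", "text": "Found SQLi"},
    {"sender": "security_agent", "visibility": "private", "text": "scratch notes"},
]
shared = [m["text"] for m in messages if visible_to(m, viewer="perf_agent")]
print(shared)  # only the public finding reaches the other agent
```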
MCP Protocol Integration
What: Model Context Protocol compliance for universal compatibility
Why: Works with any MCP-compatible framework without custom integration code
Performance Characteristics
Designed for Real-Time Collaboration
- <30ms message operations for smooth agent handoffs
- 2-3ms fuzzy search across session history
- 20+ concurrent agents per session
- Session continuity during agent switches
💡 Why these targets? Sub-30ms ensures imperceptible delays during agent handoffs, maintaining workflow momentum.
Scalability Considerations
- SQLite: Development and small teams (<5 concurrent agents)
- PostgreSQL: Production deployments (20+ concurrent agents)
- Connection pooling: Built-in performance optimization
- Multi-level caching: >70% cache hit ratio for common operations
Database & Storage
Architecture Decision: Database Choice
SQLite for Development
- ✅ Zero configuration
- ✅ Perfect for prototyping
- ❌ Single writer limitation
PostgreSQL for Production
- ✅ Multi-writer concurrency
- ✅ Enterprise backup/recovery
- ✅ Advanced indexing and performance
- ❌ Requires database administration
Database Backend
- Unified: SQLAlchemy Core (supports SQLite, PostgreSQL, MySQL)
- Development: SQLite with aiosqlite driver (fastest, simplest)
- Production: PostgreSQL/MySQL with async drivers (scalable, robust)
Migration Path: SQLAlchemy backend provides smooth transition to PostgreSQL when scaling needs arise.
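In SQLAlchemy URL terms, that migration amounts to swapping the database URL. A sketch (the DATABASE_URL variable name and defaults are assumptions, not the server's documented configuration):

```python
# Pick the database backend from the environment; application code stays
# the same because SQLAlchemy abstracts the dialect+driver.
import os

def database_url(env=os.environ) -> str:
    # Falls back to the zero-config SQLite file; set DATABASE_URL to move
    # to PostgreSQL (or MySQL) without touching application code.
    return env.get("DATABASE_URL", "sqlite+aiosqlite:///./shared_context.db")

print(database_url())
# e.g. DATABASE_URL=postgresql+asyncpg://user:pass@db:5432/scs for production
```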
💡 Why this hybrid approach? Optimizes for developer experience during development while supporting enterprise scale in production.
Documentation & Next Steps
Getting Started Paths
- Integration Guide - CrewAI, AutoGen, LangChain examples
- Quick Reference - Commands and common tasks
- Development Setup - Local development environment
Production Deployment
- Docker Setup - Container deployment guide
- API Reference - All 15+ MCP tools with examples
- Troubleshooting - Common issues and solutions
Advanced Topics
- Custom Integration - Build your own MCP integration
- Production Deployment - Docker and scaling strategies
All documentation: Documentation Index
Development Commands
make help # Show all available commands
make dev # Start development server with hot reload
make test # Run tests with coverage
make quality # Run all quality checks
make docker # Production Docker (GHCR image) → shows logs
make dev-docker # Development Docker (local build + hot reload) → shows logs
# ⚠️ Both commands show live logs - press Ctrl+C to exit and continue setup
⚙️ Direct commands without make
# Development
uv sync && uv run python -m shared_context_server.scripts.dev
# Testing
uv run pytest --cov=src
# Quality checks
uv run ruff check && uv run mypy src/
License
MIT License - Open source software for the AI community.
Built with modern Python tooling and MCP standards. Contributions welcome!