Shared Context MCP Server for multi-agent collaboration
Shared Context Server
Content Navigation
| Symbol | Meaning | Time Investment |
|---|---|---|
| 🚀 | Quick start | 2-5 minutes |
| ⚙️ | Configuration | 10-15 minutes |
| 🧠 | Deep dive | 30+ minutes |
| 💡 | Why this works | Context only |
| ⚠️ | Important note | Read carefully |
🎯 Quick Understanding (30 seconds)
A shared workspace for AI agents to collaborate on complex tasks.
The Problem: AI agents work independently, duplicate research, and can't build on each other's discoveries.
The Solution: Shared sessions where agents see previous findings and build incrementally instead of starting over.
# Agent 1: Security analysis
session.add_message("security_agent", "Found SQL injection in user login")
# Agent 2: Performance review (sees security findings)
session.add_message("perf_agent", "Optimized query while fixing SQL injection")
# Agent 3: Documentation (has full context)
session.add_message("docs_agent", "Documented secure, optimized login implementation")
Each agent builds on previous work instead of starting over.
💡 Uses MCP Protocol: Model Context Protocol - the standard for AI agent communication (works with Claude Code, Gemini, VS Code, Cursor, and frameworks like CrewAI).
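The snippet above is schematic. Conceptually, a shared session is just an append-only message log that every agent can read before acting; a minimal in-memory sketch of that idea (illustrative only - the real server exposes these operations as MCP tools over HTTP):

```python
# Illustrative model of a shared session. Not the server's actual API:
# the real server persists sessions and serves them via MCP tools.
class SharedSession:
    def __init__(self, purpose):
        self.purpose = purpose
        self.messages = []

    def add_message(self, sender, content):
        self.messages.append({"sender": sender, "content": content})

    def history(self):
        # Later agents read this before acting, instead of starting over.
        return [f"{m['sender']}: {m['content']}" for m in self.messages]

session = SharedSession(purpose="code review")
session.add_message("security_agent", "Found SQL injection in user login")
prior = session.history()  # perf_agent sees the security finding first
session.add_message("perf_agent", "Optimized query while fixing SQL injection")
```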
🚀 Try It Now (2 minutes)
Prerequisites Check (30 seconds)
Choose your path:
- ✅ Docker (recommended): `docker --version` works
- ✅ CLI (no Docker): `python --version` shows 3.10+ and `uv --version` works
Environment Configuration Templates
Choose your .env template (for local development):
# 🚀 Quick Start (recommended) - Essential variables only
cp .env.minimal .env
# 🔧 Full Development - All development features
cp .env.example .env
# 🐳 Docker Deployment - Container-optimized paths
cp .env.docker .env
💡 Most users want .env.minimal - it contains only the 12 essential variables you actually need.
Step 1: Start Server
Option A: Docker (recommended)
# Quick start with make command (uses GHCR image)
git clone https://github.com/leoric-crown/shared-context-server.git
cd shared-context-server
cp .env.minimal .env
# Edit .env with your secure keys (see Step 2 below)
make docker-prod
# OR manual Docker run:
API_KEY=$(openssl rand -base64 32)
echo "Your API key: $API_KEY"
docker run -d --name shared-context-server -p 23456:23456 \
-e API_KEY="$API_KEY" \
-e JWT_SECRET_KEY="$(openssl rand -base64 32)" \
-e JWT_ENCRYPTION_KEY="$(python -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())')" \
ghcr.io/leoric-crown/shared-context-server:latest
Option B: CLI (no Docker)
# Clone and setup
git clone https://github.com/leoric-crown/shared-context-server.git
cd shared-context-server
uv sync
# Generate and save your API key
API_KEY=$(openssl rand -base64 32)
JWT_SECRET_KEY=$(openssl rand -base64 32)
JWT_ENCRYPTION_KEY=$(python -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())')
# Print the key first - the server runs in the foreground, so anything after
# it only executes once the server exits
echo "Your API key: $API_KEY"
API_KEY="$API_KEY" JWT_SECRET_KEY="$JWT_SECRET_KEY" JWT_ENCRYPTION_KEY="$JWT_ENCRYPTION_KEY" \
uv run python -m shared_context_server.scripts.cli
Step 2: Create .env File (Optional - for local development)
# Create .env file with your keys
cat > .env << EOF
API_KEY=$API_KEY
JWT_SECRET_KEY=$(openssl rand -base64 32)
JWT_ENCRYPTION_KEY=$(python -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())')
EOF
# Run with .env file
docker run -d --name shared-context-server -p 23456:23456 \
--env-file .env ghcr.io/leoric-crown/shared-context-server:latest
Step 3: Connect Your MCP Client
Replace YOUR_API_KEY_HERE with the key from Step 1:
# Claude Code (simple HTTP transport)
claude mcp add --transport http scs http://localhost:23456/mcp/ \
--header "X-API-Key: YOUR_API_KEY_HERE"
# Gemini CLI
gemini mcp add scs http://localhost:23456/mcp -t http -H "X-API-Key: YOUR_API_KEY_HERE"
# Test connection
claude mcp list # Should show: ✓ Connected
VS Code Configuration
Add to your existing .vscode/mcp.json (create if it doesn't exist):
{
"servers": {
"shared-context-server": {
"type": "http",
"url": "http://localhost:23456/mcp",
"headers": {"X-API-Key": "YOUR_API_KEY_HERE"}
}
}
}
Cursor Configuration
Add to your existing .cursor/mcp.json (create if it doesn't exist):
{
"mcpServers": {
"shared-context-server": {
"command": "mcp-proxy",
"args": ["--transport=http", "http://localhost:23456/mcp/", "--header", "X-API-Key: YOUR_API_KEY_HERE"]
}
}
}
Step 4: Verify & Monitor
# Test your setup (30 seconds)
curl -X POST http://localhost:23456/mcp/tool/create_session \
-H "X-API-Key: $API_KEY" \
-H "Content-Type: application/json" \
-d '{"purpose": "test setup"}'
# Expected output: {"success": true, "session_id": "sess_..."}
# View the dashboard
open http://localhost:23456/ui/ # Real-time session monitoring
✅ Success indicators:
- curl command returns {"success": true, "session_id": "..."}
- Dashboard shows "1 active session"
- MCP client shows ✓ Connected status
📊 Web Dashboard (MVP)
Real-time monitoring interface for agent collaboration:
- Live session overview with active agent counts
- Real-time message streaming without page refreshes
- Session isolation visualization to track multi-agent workflows
- Performance monitoring for collaboration efficiency
💡 Perfect for: Monitoring agent handoffs, debugging collaboration flows, and demonstrating multi-agent coordination to stakeholders.
🧠 Choose Your Path
Are you...
├── 👨‍💻 Building a side project?
│   → [Simple Integration](#-simple-integration) (5 minutes)
│
├── 🏢 Planning enterprise deployment?
│   → [Enterprise Setup](#-enterprise-considerations) (15+ minutes)
│
├── 🔍 Researching multi-agent systems?
│   → [Technical Deep Dive](#-technical-architecture) (30+ minutes)
│
└── 🤔 Just evaluating the concept?
    → [Framework Integration Examples](#-framework-examples) (5 minutes)
🚀 Simple Integration
Works with existing tools you already use:
Direct MCP Integration (Tested)
# Via Claude Code or any MCP client
claude mcp add-json shared-context-server '{"command": "mcp-proxy", "args": ["--transport=streamablehttp", "http://localhost:23456/mcp/"]}'
# Direct API usage (requires httpx; await needs an async entry point)
import asyncio, httpx

async def main():
    async with httpx.AsyncClient() as client:
        session = await client.post("http://localhost:23456/mcp/tool/create_session",
                                    headers={"X-API-Key": "YOUR_API_KEY_HERE"},
                                    json={"purpose": "agent collaboration"})
        print(session.json())

asyncio.run(main())
⚠️ Framework Integration Status: Direct MCP protocol tested. CrewAI, AutoGen, and LangChain integrations are conceptual - we welcome community contributions to develop and test these patterns.
➡️ Next: MCP Integration Examples
⚙️ Framework Examples
Code Review Pipeline
- Security Agent finds vulnerabilities → shares findings
- Performance Agent builds on security context → optimizes safely
- Documentation Agent documents complete solution
💡 Why this works: Each agent builds on discoveries instead of duplicating work.
Research & Implementation
- Research Agent gathers requirements → shares insights
- Architecture Agent designs using research → documents decisions
- Developer Agent implements with full context
More examples: Collaborative Workflows Guide
What works: ✅ MCP clients (Claude Code, Gemini, VS Code, Cursor)
What's conceptual: 📝 Framework patterns (CrewAI, AutoGen, LangChain) - community contributions welcome
🧠 What This Is / What This Isn't
✅ What this MCP server provides
- Real-time collaboration substrate for multi-agent workflows
- Session isolation with clean boundaries between different tasks
- MCP protocol compliance that works with any MCP-compatible agent framework
- Infrastructure layer that enhances existing orchestration tools
💡 Why MCP protocol? Universal compatibility - works with Claude Code, CrewAI, AutoGen, LangChain, and custom frameworks without vendor lock-in.
❌ What this MCP server isn't
- Not a vector database - Use Pinecone, Milvus, or Chroma for long-term storage
- Not an orchestration platform - Use CrewAI, AutoGen, or LangChain for task management
- Not for permanent memory - Sessions are for active collaboration, not archival
💡 Why this approach? We enhance your existing tools rather than replacing them - no need to rewrite your agent workflows.
🏢 Enterprise Considerations
⚙️ Production Setup & Scaling
Development → Production Path
Development (SQLite)
- ✅ Zero configuration
- ✅ Perfect for prototyping
- ❌ Limited to ~5 concurrent agents
Production (PostgreSQL)
- ✅ High concurrency (20+ agents)
- ✅ Enterprise backup/recovery
- ❌ Requires database management
Enterprise Features Roadmap
- SSO Integration: SAML/OIDC support planned
- Audit Logging: Enhanced compliance logging
- High Availability: Multi-node deployment
- Advanced RBAC: Attribute-based permissions
Migration: Start with SQLite, migrate when you hit concurrency limits.
🔧 Security & Compliance
Current Security Features
- JWT Authentication: Role-based access control
- Input Sanitization: XSS and injection prevention
- Secure Token Management: Prevents JWT exposure vulnerabilities
- Message Visibility: Public/private/agent-only filtering
Enterprise Security Roadmap
- SSO Integration: SAML, OIDC, Active Directory
- Audit Trails: SOX, HIPAA-compliant logging
- Data Governance: Retention policies, geographic residency
- Advanced Encryption: At-rest and in-transit encryption
🧠 Technical Architecture
Core Design Principles
Session-Based Isolation
What: Each collaborative task gets its own workspace.
Why: Prevents cross-contamination while enabling rich collaboration within teams.
Message Visibility Controls
What: Four-tier system (public/private/agent-only/admin-only).
Why: Granular information sharing - agents can have private working memory and shared discoveries.
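To make the four tiers concrete, here is a minimal filtering sketch. The tier names come from the docs above; the exact rules (e.g. how "private" differs from "agent-only") are an assumed interpretation, not the server's actual implementation:

```python
# Assumed interpretation of the four visibility tiers; check the API
# reference for the server's real semantics.
def visible_to(message, agent_id, is_admin=False):
    v = message["visibility"]
    if v == "public":
        return True              # everyone in the session sees it
    if v == "admin_only":
        return is_admin          # restricted to admin-authenticated agents
    # "private" and "agent_only": readable only by the sending agent
    return message["sender"] == agent_id

messages = [
    {"sender": "security_agent", "visibility": "public", "content": "Shared finding"},
    {"sender": "security_agent", "visibility": "private", "content": "Working notes"},
]
# perf_agent sees the shared finding but not security_agent's private notes
perf_view = [m for m in messages if visible_to(m, "perf_agent")]
```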
MCP Protocol Integration
What: Model Context Protocol compliance for universal compatibility.
Why: Works with any MCP-compatible framework without custom integration code.
Performance Characteristics
Designed for Real-Time Collaboration
- <30ms message operations for smooth agent handoffs
- 2-3ms fuzzy search across session history
- 20+ concurrent agents per session
- Session continuity during agent switches
💡 Why these targets? Sub-30ms ensures imperceptible delays during agent handoffs, maintaining workflow momentum.
Scalability Considerations
- SQLite: Development and small teams (<5 concurrent agents)
- PostgreSQL: Production deployments (20+ concurrent agents)
- Connection pooling: Built-in performance optimization
- Multi-level caching: >70% cache hit ratio for common operations
Database & Storage
Architecture Decision: Database Choice
SQLite for Development
- ✅ Zero configuration
- ✅ Perfect for prototyping
- ❌ Single writer limitation
PostgreSQL for Production
- ✅ Multi-writer concurrency
- ✅ Enterprise backup/recovery
- ✅ Advanced indexing and performance
- ❌ Requires database administration
Database Backend
- Unified: SQLAlchemy Core (supports SQLite, PostgreSQL, MySQL)
- Development: SQLite with aiosqlite driver (fastest, simplest)
- Production: PostgreSQL/MySQL with async drivers (scalable, robust)
Migration Path: SQLAlchemy backend provides smooth transition to PostgreSQL when scaling needs arise.
💡 Why this hybrid approach? Optimizes for developer experience during development while supporting enterprise scale in production.
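With a SQLAlchemy backend, the migration is typically a one-line configuration change. A hedged sketch of what that might look like (the variable name `DATABASE_URL` and the exact URL schemes are assumptions here - check the server's configuration docs for the real setting):

```shell
# Hypothetical configuration - variable name assumed, not confirmed by this README.
# Development: SQLite with the async aiosqlite driver (zero setup)
DATABASE_URL="sqlite+aiosqlite:///./shared_context.db"
# Production: PostgreSQL with an async driver (multi-writer concurrency)
DATABASE_URL="postgresql+asyncpg://scs_user:change-me@db-host:5432/shared_context"
```

Because both URLs go through the same SQLAlchemy Core layer, application code should not need to change when switching backends.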
📚 Documentation & Next Steps
🟢 Getting Started Paths
- Integration Guide - CrewAI, AutoGen, LangChain examples
- Quick Reference - Commands and common tasks
- Development Setup - Local development environment
🟡 Production Deployment
- Docker Setup - Container deployment guide
- API Reference - All 15+ MCP tools with examples
- Troubleshooting - Common issues and solutions
🔴 Advanced Topics
- Custom Integration - Build your own MCP integration
- Production Deployment - Docker and scaling strategies
All documentation: Documentation Index
🚀 Development Commands
make help # Show all available commands
make dev # Start development server with hot reload
make test # Run tests with coverage
make quality # Run all quality checks
make docker-prod # Production Docker (GHCR image)
make docker # Development Docker (local build + hot reload)
⚙️ Direct commands without make
# Development
uv sync && uv run python -m shared_context_server.scripts.dev
# Testing
uv run pytest --cov=src
# Quality checks
uv run ruff check && uv run mypy src/
License
MIT License - Open source software for the AI community.
Built with modern Python tooling and MCP standards. Contributions welcome!