Shared Context MCP Server for multi-agent collaboration

Shared Context Server

CI · Docker · GHCR · codecov · Python 3.10+ · License: MIT

Content Navigation

Symbol  Meaning           Time Investment
🚀      Quick start       2-5 minutes
⚙️      Configuration     10-15 minutes
🔧      Deep dive         30+ minutes
💡      Why this works    Context only
⚠️      Important note    Read carefully

🎯 Quick Understanding (30 seconds)

A shared workspace for AI agents to collaborate on complex tasks.

The Problem: AI agents work independently, duplicate research, and can't build on each other's discoveries.

The Solution: Shared sessions where agents see previous findings and build incrementally instead of starting over.

# Agent 1: Security analysis
session.add_message("security_agent", "Found SQL injection in user login")

# Agent 2: Performance review (sees security findings)
session.add_message("perf_agent", "Optimized query while fixing SQL injection")

# Agent 3: Documentation (has full context)
session.add_message("docs_agent", "Documented secure, optimized login implementation")

Each agent builds on previous work instead of starting over.
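The collaboration model above can be sketched with a minimal in-memory stand-in for a session. This `SharedSession` class is a hypothetical simplification for illustration only - the real server exposes sessions over MCP/HTTP rather than as a local object:

```python
class SharedSession:
    """Toy stand-in for a shared-context session (illustrative, not the server API)."""

    def __init__(self, purpose: str):
        self.purpose = purpose
        self.messages: list[tuple[str, str]] = []  # (agent_id, content)

    def add_message(self, agent_id: str, content: str) -> None:
        # Every message becomes part of the shared record.
        self.messages.append((agent_id, content))

    def context_for(self, agent_id: str) -> list[str]:
        # Each agent reads everything posted so far, so later agents
        # build on earlier findings instead of starting over.
        return [f"{a}: {c}" for a, c in self.messages]


session = SharedSession(purpose="code review")
session.add_message("security_agent", "Found SQL injection in user login")
session.add_message("perf_agent", "Optimized query while fixing SQL injection")
print(session.context_for("docs_agent"))
```

The key property is that `context_for` returns the full history: the documentation agent sees both earlier findings without re-deriving them.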

💡 Uses MCP Protocol: Model Context Protocol - the standard for AI agent communication (works with Claude Code, Gemini, VS Code, Cursor, and frameworks like CrewAI).


🚀 Try It Now (2 minutes)

Prerequisites Check (30 seconds)

Choose your path:

  • ✅ Docker (recommended): docker --version works
  • ✅ CLI (no Docker): python --version shows 3.10+ and uv --version works

Environment Configuration Templates

Choose your .env template (for local development):

# 🚀 Quick Start (recommended) - Essential variables only
cp .env.minimal .env

# 🔧 Full Development - All development features
cp .env.example .env

# 🐳 Docker Deployment - Container-optimized paths
cp .env.docker .env

💡 Most users want .env.minimal - it contains only the 12 essential variables you actually need.

Step 1: Start Server

Option A: Docker (recommended)

# Quick start with make command (uses GHCR image)
git clone https://github.com/leoric-crown/shared-context-server.git
cd shared-context-server
cp .env.minimal .env
# Edit .env with your secure keys (see Step 2 below)
make docker-prod

# OR manual Docker run:
API_KEY=$(openssl rand -base64 32)
echo "Your API key: $API_KEY"
docker run -d --name shared-context-server -p 23456:23456 \
  -e API_KEY="$API_KEY" \
  -e JWT_SECRET_KEY="$(openssl rand -base64 32)" \
  -e JWT_ENCRYPTION_KEY="$(python -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())')" \
  ghcr.io/leoric-crown/shared-context-server:latest

Option B: CLI (no Docker)

# Clone and setup
git clone https://github.com/leoric-crown/shared-context-server.git
cd shared-context-server
uv sync

# Generate and save your API key
API_KEY=$(openssl rand -base64 32)
JWT_SECRET_KEY=$(openssl rand -base64 32)
JWT_ENCRYPTION_KEY=$(python -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())')

# Print the key before starting (the server runs in the foreground,
# so anything after this command won't execute until it stops)
echo "Your API key: $API_KEY"

# Start server
API_KEY="$API_KEY" JWT_SECRET_KEY="$JWT_SECRET_KEY" JWT_ENCRYPTION_KEY="$JWT_ENCRYPTION_KEY" \
  uv run python -m shared_context_server.scripts.cli
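If openssl isn't available, equivalent keys can be generated with the Python standard library alone. A Fernet key is simply 32 random bytes in url-safe base64, so no third-party package is needed for generation:

```python
import base64
import secrets

# Equivalent of `openssl rand -base64 32`
api_key = base64.b64encode(secrets.token_bytes(32)).decode()
jwt_secret_key = base64.b64encode(secrets.token_bytes(32)).decode()

# Equivalent of Fernet.generate_key(): 32 random bytes, url-safe base64
jwt_encryption_key = base64.urlsafe_b64encode(secrets.token_bytes(32)).decode()

print(f"API_KEY={api_key}")
print(f"JWT_SECRET_KEY={jwt_secret_key}")
print(f"JWT_ENCRYPTION_KEY={jwt_encryption_key}")
```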

Step 2: Create .env File (Optional - for local development)

# Create .env file with your keys
cat > .env << EOF
API_KEY=$API_KEY
JWT_SECRET_KEY=$(openssl rand -base64 32)
JWT_ENCRYPTION_KEY=$(python -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())')
EOF

# Run with .env file
docker run -d --name shared-context-server -p 23456:23456 \
  --env-file .env ghcr.io/leoric-crown/shared-context-server:latest

Step 3: Connect Your MCP Client

Replace YOUR_API_KEY_HERE with the key from Step 1:

# Claude Code (simple HTTP transport)
claude mcp add --transport http scs http://localhost:23456/mcp/ \
  --header "X-API-Key: YOUR_API_KEY_HERE"

# Gemini CLI
gemini mcp add scs http://localhost:23456/mcp -t http -H "X-API-Key: YOUR_API_KEY_HERE"

# Test connection
claude mcp list  # Should show: ✓ Connected

VS Code Configuration

Add to your existing .vscode/mcp.json (create if it doesn't exist):

{
  "servers": {
    "shared-context-server": {
      "type": "http",
      "url": "http://localhost:23456/mcp",
      "headers": {"X-API-Key": "YOUR_API_KEY_HERE"}
    }
  }
}

Cursor Configuration

Add to your existing .cursor/mcp.json (create if it doesn't exist):

{
  "mcpServers": {
    "shared-context-server": {
      "command": "mcp-proxy",
      "args": ["--transport=http", "http://localhost:23456/mcp/", "--header", "X-API-Key: YOUR_API_KEY_HERE"]
    }
  }
}

Step 4: Verify & Monitor

# Test your setup (30 seconds)
curl -X POST http://localhost:23456/mcp/tool/create_session \
  -H "X-API-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"purpose": "test setup"}'

# Expected output: {"success": true, "session_id": "sess_..."}
# View the dashboard
open http://localhost:23456/ui/  # Real-time session monitoring

✅ Success indicators:

  • curl command returns {"success": true, "session_id": "..."}
  • Dashboard shows "1 active session"
  • MCP client shows ✓ Connected status
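The same check can be scripted with the Python standard library, assuming only the endpoint and headers already shown in the curl command above:

```python
import json
import urllib.request


def build_create_session_request(api_key: str, purpose: str,
                                 base_url: str = "http://localhost:23456") -> urllib.request.Request:
    """Build the same POST request as the curl check above."""
    return urllib.request.Request(
        f"{base_url}/mcp/tool/create_session",
        data=json.dumps({"purpose": purpose}).encode(),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )


def create_session(api_key: str, purpose: str) -> dict:
    req = build_create_session_request(api_key, purpose)
    with urllib.request.urlopen(req) as resp:
        # Expect {"success": true, "session_id": "sess_..."}
        return json.loads(resp.read())


# Example (requires the server to be running):
# result = create_session("YOUR_API_KEY_HERE", "test setup")
# print(result["session_id"])
```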

📊 Web Dashboard (MVP)

Real-time monitoring interface for agent collaboration:

  • Live session overview with active agent counts
  • Real-time message streaming without page refreshes
  • Session isolation visualization to track multi-agent workflows
  • Performance monitoring for collaboration efficiency

💡 Perfect for: Monitoring agent handoffs, debugging collaboration flows, and demonstrating multi-agent coordination to stakeholders.


🔧 Choose Your Path

Are you...

├── 👨‍💻 Building a side project?
│   → [Simple Integration](#-simple-integration) (5 minutes)
│
├── 🏢 Planning enterprise deployment?
│   → [Enterprise Setup](#-enterprise-considerations) (15+ minutes)
│
├── 🎓 Researching multi-agent systems?
│   → [Technical Deep Dive](#-technical-architecture) (30+ minutes)
│
└── 🤔 Just evaluating the concept?
    → [Framework Integration Examples](#-framework-examples) (5 minutes)

🚀 Simple Integration

Works with existing tools you already use:

Direct MCP Integration (Tested)

# Via Claude Code or any MCP client
claude mcp add-json shared-context-server '{"command": "mcp-proxy", "args": ["--transport=streamablehttp", "http://localhost:23456/mcp/"]}'

# Direct API usage (Python; await requires an async context)
import asyncio
import httpx

async def main():
    async with httpx.AsyncClient() as client:
        session = await client.post(
            "http://localhost:23456/mcp/tool/create_session",
            headers={"X-API-Key": "YOUR_API_KEY_HERE"},
            json={"purpose": "agent collaboration"},
        )
        print(session.json())

asyncio.run(main())

โš ๏ธ Framework Integration Status: Direct MCP protocol tested. CrewAI, AutoGen, and LangChain integrations are conceptual - we welcome community contributions to develop and test these patterns.

โžก๏ธ Next: MCP Integration Examples


โš™๏ธ Framework Examples

Code Review Pipeline

  1. Security Agent finds vulnerabilities โ†’ shares findings
  2. Performance Agent builds on security context โ†’ optimizes safely
  3. Documentation Agent documents complete solution

💡 Why this works: Each agent builds on discoveries instead of duplicating work.
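This handoff pattern can be sketched as a fold over agent steps, where each step receives every prior finding. The helper and agent functions here are hypothetical, for illustration only:

```python
def run_pipeline(steps):
    """Run agents in order; each sees everything shared before it."""
    shared_context = []
    for agent_name, agent_fn in steps:
        finding = agent_fn(shared_context)  # agent reads prior discoveries
        shared_context.append((agent_name, finding))
    return shared_context


steps = [
    ("security_agent", lambda ctx: "Found SQL injection in user login"),
    ("perf_agent", lambda ctx: f"Optimized query; aware of {len(ctx)} prior finding(s)"),
    ("docs_agent", lambda ctx: f"Documented solution built on {len(ctx)} findings"),
]
result = run_pipeline(steps)
```

The ordering is what matters: the performance agent runs with the security finding already in context, and the documentation agent sees both.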

Research & Implementation

  1. Research Agent gathers requirements โ†’ shares insights
  2. Architecture Agent designs using research โ†’ documents decisions
  3. Developer Agent implements with full context

More examples: Collaborative Workflows Guide

What works: ✅ MCP clients (Claude Code, Gemini, VS Code, Cursor)
What's conceptual: 🔄 Framework patterns (CrewAI, AutoGen, LangChain) - community contributions welcome


🔧 What This Is / What This Isn't

✅ What this MCP server provides

  • Real-time collaboration substrate for multi-agent workflows
  • Session isolation with clean boundaries between different tasks
  • MCP protocol compliance that works with any MCP-compatible agent framework
  • Infrastructure layer that enhances existing orchestration tools

💡 Why MCP protocol? Universal compatibility - works with Claude Code, CrewAI, AutoGen, LangChain, and custom frameworks without vendor lock-in.

❌ What this MCP server isn't

  • Not a vector database - Use Pinecone, Milvus, or Chroma for long-term storage
  • Not an orchestration platform - Use CrewAI, AutoGen, or LangChain for task management
  • Not for permanent memory - Sessions are for active collaboration, not archival

💡 Why this approach? We enhance your existing tools rather than replacing them - no need to rewrite your agent workflows.


๐Ÿข Enterprise Considerations

โš™๏ธ Production Setup & Scaling

Development โ†’ Production Path

Development (SQLite)

  • ✅ Zero configuration
  • ✅ Perfect for prototyping
  • ❌ Limited to ~5 concurrent agents

Production (PostgreSQL)

  • ✅ High concurrency (20+ agents)
  • ✅ Enterprise backup/recovery
  • ❌ Requires database management

Enterprise Features Roadmap

  • SSO Integration: SAML/OIDC support planned
  • Audit Logging: Enhanced compliance logging
  • High Availability: Multi-node deployment
  • Advanced RBAC: Attribute-based permissions

Migration: Start with SQLite, migrate when you hit concurrency limits.

🔧 Security & Compliance

Current Security Features

  • JWT Authentication: Role-based access control
  • Input Sanitization: XSS and injection prevention
  • Secure Token Management: Prevents JWT exposure vulnerabilities
  • Message Visibility: Public/private/agent-only filtering

Enterprise Security Roadmap

  • SSO Integration: SAML, OIDC, Active Directory
  • Audit Trails: SOX, HIPAA-compliant logging
  • Data Governance: Retention policies, geographic residency
  • Advanced Encryption: At-rest and in-transit encryption

🔧 Technical Architecture

Core Design Principles

Session-Based Isolation

What: Each collaborative task gets its own workspace
Why: Prevents cross-contamination while enabling rich collaboration within teams

Message Visibility Controls

What: Four-tier system (public/private/agent-only/admin-only)
Why: Granular information sharing - agents can have private working memory and shared discoveries
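One plausible reading of the four tiers as an access check - the exact semantics here are an assumption for illustration, not the server's documented behavior:

```python
def visible_to(visibility: str, sender_id: str, reader_id: str,
               *, reader_is_agent: bool = True, reader_is_admin: bool = False) -> bool:
    """Decide whether a reader may see a message, under one plausible
    interpretation of the public/private/agent-only/admin-only tiers."""
    if reader_is_admin:
        return True  # admins see everything
    if visibility == "public":
        return True  # shared discoveries, visible to all participants
    if visibility == "private":
        return reader_id == sender_id  # private working memory
    if visibility == "agent_only":
        return reader_is_agent  # hidden from non-agent observers
    if visibility == "admin_only":
        return False  # only admins, handled above
    raise ValueError(f"unknown visibility tier: {visibility}")
```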

MCP Protocol Integration

What: Model Context Protocol compliance for universal compatibility
Why: Works with any MCP-compatible framework without custom integration code

Performance Characteristics

Designed for Real-Time Collaboration

  • <30ms message operations for smooth agent handoffs
  • 2-3ms fuzzy search across session history
  • 20+ concurrent agents per session
  • Session continuity during agent switches

💡 Why these targets? Sub-30ms ensures imperceptible delays during agent handoffs, maintaining workflow momentum.

Scalability Considerations

  • SQLite: Development and small teams (<5 concurrent agents)
  • PostgreSQL: Production deployments (20+ concurrent agents)
  • Connection pooling: Built-in performance optimization
  • Multi-level caching: >70% cache hit ratio for common operations

Database & Storage

Architecture Decision: Database Choice

SQLite for Development

  • ✅ Zero configuration
  • ✅ Perfect for prototyping
  • ❌ Single writer limitation

PostgreSQL for Production

  • ✅ Multi-writer concurrency
  • ✅ Enterprise backup/recovery
  • ✅ Advanced indexing and performance
  • ❌ Requires database administration

Database Backend

  • Unified: SQLAlchemy Core (supports SQLite, PostgreSQL, MySQL)
  • Development: SQLite with aiosqlite driver (fastest, simplest)
  • Production: PostgreSQL/MySQL with async drivers (scalable, robust)

Migration Path: SQLAlchemy backend provides smooth transition to PostgreSQL when scaling needs arise.

💡 Why this hybrid approach? Optimizes for developer experience during development while supporting enterprise scale in production.
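With a SQLAlchemy-backed store, switching databases is typically just a connection-URL change. The variable name and URL values below are illustrative assumptions, not documented settings of this server:

```shell
# Development: SQLite via the aiosqlite async driver
DATABASE_URL=sqlite+aiosqlite:///./shared_context.db

# Production: PostgreSQL via an async driver such as asyncpg
DATABASE_URL=postgresql+asyncpg://scs_user:scs_password@db-host:5432/shared_context
```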


📖 Documentation & Next Steps

🟢 Getting Started Paths

🟡 Production Deployment

🔴 Advanced Topics

All documentation: Documentation Index


🚀 Development Commands

make help        # Show all available commands
make dev         # Start development server with hot reload
make test        # Run tests with coverage
make quality     # Run all quality checks
make docker-prod # Production Docker (GHCR image)
make docker      # Development Docker (local build + hot reload)

⚙️ Direct commands without make

# Development
uv sync && uv run python -m shared_context_server.scripts.dev

# Testing
uv run pytest --cov=src

# Quality checks
uv run ruff check && uv run mypy src/

License

MIT License - Open source software for the AI community.


Built with modern Python tooling and MCP standards. Contributions welcome!
