Mesh

Shared memory for AI agents — an MCP server and Python library.

When one agent learns something, every other agent can recall it — by meaning, not exact keywords.

The problem

Every AI agent starts each session with amnesia. Claude in your terminal doesn't know what Claude in your editor discovered. Your coding agent can't access what your review agent found. Every agent repeats work already done.

The solution

Mesh is a shared semantic memory layer. Agents write memories. Agents read memories. They never need to communicate directly.

from mesh import Mesh

# Agent A learns something
agent_a = Mesh(namespace="my-project", agent_id="coding-agent")
agent_a.learn("The API rate limit is 100 requests/min")

# Agent B recalls it — no direct communication
agent_b = Mesh(namespace="my-project", agent_id="review-agent")
results = agent_b.recall("what are the API constraints?")
# → Returns the memory above with 0.91 similarity

Installation

pip install mesh-context-layer

MCP Server (Claude Desktop + Cursor)

Add Mesh to your AI tools with one config change.

Claude Desktop — edit ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "mesh": {
      "command": "mesh-server"
    }
  }
}

Cursor — create .cursor/mcp.json in your project:

{
  "mcpServers": {
    "mesh": {
      "command": "mesh-server"
    }
  }
}

Restart the app. Your agents now have access to mesh_learn, mesh_recall, mesh_forget, and mesh_inspect.

Namespaces

Agents sharing a namespace share memory. Agents in different namespaces are isolated.

# Team A's agents
Mesh(namespace="team-alpha")

# Team B's agents — cannot see Team A's memories
Mesh(namespace="team-beta")

# Set namespace for the MCP server via env var
# MESH_NAMESPACE=my-project mesh-server
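A wrapper script can resolve its namespace the same way the MCP server example above does, by honoring the MESH_NAMESPACE environment variable. The `resolve_namespace` helper below is a hypothetical sketch, not part of Mesh:

```python
import os

# Hypothetical helper: read the namespace from MESH_NAMESPACE (the same
# variable the MCP server example uses), falling back to a default.
def resolve_namespace(default: str = "default") -> str:
    return os.environ.get("MESH_NAMESPACE", default)

# Usage: Mesh(namespace=resolve_namespace(), agent_id="coding-agent")
os.environ["MESH_NAMESPACE"] = "my-project"
print(resolve_namespace())  # → my-project
```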

Memory types

Type         When to use
fact         Objective information
preference   User or system preferences
context      Situational background
result       Outcomes of tasks
instruction  Rules and guidelines
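The type travels with each memory; in the HTTP API it appears as a "memory_type" field alongside "content" in the /learn request body. A sketch of building and validating such a payload against the five types above (`learn_payload` is a hypothetical helper for illustration):

```python
import json

# The five memory types from the table above.
MEMORY_TYPES = {"fact", "preference", "context", "result", "instruction"}

# Hypothetical helper: build the JSON body for a /learn request,
# rejecting any type not in the documented set.
def learn_payload(content: str, memory_type: str = "fact") -> str:
    if memory_type not in MEMORY_TYPES:
        raise ValueError(f"unknown memory type: {memory_type}")
    return json.dumps({"content": content, "memory_type": memory_type})

print(learn_payload("Prefer tabs over spaces", "preference"))
```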

Storage

Memory lives in ~/.mesh/ on your machine. It persists across sessions and reboots. Nothing leaves your machine.

Features

v0.3 — Privacy, Portability, and Namespace Control

Privacy flags

Mark any memory as local-only to ensure it never leaves your machine:

mesh.learn(
    "Internal staging server is under heavy load",
    local_only=True
)

Export and import

# Export all non-private memories from a namespace
mesh-export backup.json --namespace shared

# Import into a new machine or namespace
mesh-import backup.json --namespace shared

# Preview without writing
mesh-import backup.json --dry-run

Namespace management

# List all namespaces
mesh-namespaces list

# Get detailed stats
mesh-namespaces stats work

# Delete a namespace
mesh-namespaces delete old-project

# Rename a namespace
mesh-namespaces rename work work-archived

Using Mesh from any tool

Start the HTTP server:

mesh-http

Then from any tool that supports HTTP:

Shell:

# Store a memory
curl -s -X POST http://localhost:7701/learn \
  -H "Content-Type: application/json" \
  -d '{"content": "deploy uses ./scripts/deploy.sh", "memory_type": "fact"}'

# Recall memories
curl -s -X POST http://localhost:7701/recall \
  -H "Content-Type: application/json" \
  -d '{"query": "how do we deploy?", "count": 3}'

# Get formatted context for a system prompt
curl -s -X POST http://localhost:7701/context \
  -H "Content-Type: application/json" \
  -d '{"count": 10, "format": "markdown"}'

Python (no SDK needed):

import httpx

# Recall
r = httpx.post("http://localhost:7701/recall", json={"query": "deploy process"})
memories = r.json()["results"]
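The /context endpoint's markdown output can be spliced into a system prompt. In this sketch the `context` response key and the `build_system_prompt` helper are assumptions for illustration; check the actual response shape of your mesh-http version:

```python
# Hypothetical helper: append shared-memory context to a base system prompt.
def build_system_prompt(base: str, context_response: dict) -> str:
    context_md = context_response.get("context", "")
    if not context_md:
        return base
    return f"{base}\n\n## Shared memory\n\n{context_md}"

# Example with a stubbed response body instead of a live server:
stub = {"context": "- The API rate limit is 100 requests/min"}
print(build_system_prompt("You are a code reviewer.", stub))
```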

Node.js:

const res = await fetch("http://localhost:7701/recall", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "deploy process", count: 5 })
});
const { results } = await res.json();

License

MIT
