Shared memory layer for AI agents — MCP server and Python library
Mesh
Shared memory for AI agents.
When one agent learns something, every other agent can recall it — by meaning, not exact keywords.
The problem
Every AI agent starts each session with amnesia. Claude in your terminal doesn't know what Claude in your editor discovered. Your coding agent can't access what your review agent found. Every agent repeats work already done.
The solution
Mesh is a shared semantic memory layer. Agents write memories. Agents read memories. They never need to communicate directly.
from mesh import Mesh
# Agent A learns something
agent_a = Mesh(namespace="my-project", agent_id="coding-agent")
agent_a.learn("The API rate limit is 100 requests/min")
# Agent B recalls it — no direct communication
agent_b = Mesh(namespace="my-project", agent_id="review-agent")
results = agent_b.recall("what are the API constraints?")
# → Returns the memory above with 0.91 similarity
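Mesh's retrieval internals aren't documented here, but "recall by meaning" generally means ranking stored memories by vector similarity to the query rather than exact keyword match. A toy stand-in using bag-of-words cosine similarity (real systems use dense embeddings; `embed` and `cosine` below are purely illustrative, not Mesh's API):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words token counts.
    # Real semantic memory uses dense vectors from an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity over sparse token-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

memories = [
    "The API rate limit is 100 requests/min",
    "Deploys run from ./scripts/deploy.sh",
]

query = "what are the API constraints?"
ranked = sorted(memories, key=lambda m: cosine(embed(query), embed(m)), reverse=True)
print(ranked[0])  # → The API rate limit is 100 requests/min
```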
Installation
pip install mesh-memory
MCP Server (Claude Desktop + Cursor)
Add Mesh to your AI tools with one config change.
Claude Desktop — edit ~/Library/Application Support/Claude/claude_desktop_config.json (this is the macOS path):
{
"mcpServers": {
"mesh": {
"command": "mesh-server"
}
}
}
Cursor — create .cursor/mcp.json in your project:
{
"mcpServers": {
"mesh": {
"command": "mesh-server"
}
}
}
Restart the app. Your agents now have access to mesh_learn, mesh_recall, mesh_forget, and mesh_inspect.
Namespaces
Agents sharing a namespace share memory. Agents in different namespaces are isolated.
# Team A's agents
Mesh(namespace="team-alpha")
# Team B's agents — cannot see Team A's memories
Mesh(namespace="team-beta")
# Set namespace for the MCP server via env var
# MESH_NAMESPACE=my-project mesh-server
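The isolation model can be sketched with plain dictionaries. This is not Mesh's storage code — just an illustration of the contract: each namespace is its own memory pool, and recall never crosses pools (substring match stands in for semantic search here):

```python
# Toy model of namespace isolation: one memory list per namespace,
# and recall only searches the caller's namespace.
store: dict[str, list[str]] = {}

def learn(namespace: str, content: str) -> None:
    store.setdefault(namespace, []).append(content)

def recall(namespace: str, query: str) -> list[str]:
    # Substring match as a stand-in for semantic search.
    return [m for m in store.get(namespace, []) if query.lower() in m.lower()]

learn("team-alpha", "staging DB password rotated on Fridays")
print(recall("team-alpha", "staging"))  # → the memory above
print(recall("team-beta", "staging"))   # → [] — isolated
```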
Memory types
| Type | When to use |
|---|---|
| fact | Objective information |
| preference | User or system preferences |
| context | Situational background |
| result | Outcomes of tasks |
| instruction | Rules and guidelines |
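The `memory_type` field also appears in the HTTP `/learn` request body (see the curl example further down). A small client-side sketch that builds such a body; whether the server itself validates the type is an assumption, so the check below just mirrors the table above:

```python
import json

# The five types from the table above (assumed to be the valid set).
MEMORY_TYPES = {"fact", "preference", "context", "result", "instruction"}

def make_learn_body(content: str, memory_type: str) -> str:
    # Body shape matches the /learn curl example in this README.
    if memory_type not in MEMORY_TYPES:
        raise ValueError(f"unknown memory_type: {memory_type}")
    return json.dumps({"content": content, "memory_type": memory_type})

print(make_learn_body("Prefers tabs over spaces", "preference"))
```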
Storage
Memory lives in ~/.mesh/ on your machine. It persists across sessions and reboots. Nothing leaves your machine.
Features
v0.3 — Privacy, Portability, and Namespace Control
Privacy flags
Mark any memory as local-only to ensure it never leaves your machine:
mesh.learn(
"Internal staging server is under heavy load",
local_only=True
)
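The export format isn't specified here, but the privacy contract it implies — local-only memories are never exported — amounts to a filter like this (the record shape with a `local_only` key is hypothetical):

```python
# Sketch of the export-time privacy filter: memories flagged
# local_only are excluded from any export. Record shape is illustrative.
memories = [
    {"content": "Internal staging server is under heavy load", "local_only": True},
    {"content": "The API rate limit is 100 requests/min", "local_only": False},
]

exportable = [m for m in memories if not m.get("local_only")]
print(len(exportable))  # → 1
```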
Export and import
# Export all non-private memories from a namespace
mesh-export backup.json --namespace shared
# Import into a new machine or namespace
mesh-import backup.json --namespace shared
# Preview without writing
mesh-import backup.json --dry-run
Namespace management
# List all namespaces
mesh-namespaces list
# Get detailed stats
mesh-namespaces stats work
# Delete a namespace
mesh-namespaces delete old-project
# Rename a namespace
mesh-namespaces rename work work-archived
Using Mesh from any tool
Start the HTTP server:
mesh-http
Then from any tool that supports HTTP:
Shell:
# Store a memory
curl -s -X POST http://localhost:7701/learn \
-H "Content-Type: application/json" \
-d '{"content": "deploy uses ./scripts/deploy.sh", "memory_type": "process"}'
# Recall memories
curl -s -X POST http://localhost:7701/recall \
-H "Content-Type: application/json" \
-d '{"query": "how do we deploy?", "count": 3}'
# Get formatted context for a system prompt
curl -s -X POST http://localhost:7701/context \
-H "Content-Type: application/json" \
-d '{"count": 10, "format": "markdown"}'
Python (no SDK needed):
import httpx
# Recall
r = httpx.post("http://localhost:7701/recall", json={"query": "deploy process"})
memories = r.json()["results"]
Node.js:
const res = await fetch("http://localhost:7701/recall", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ query: "deploy process", count: 5 })
});
const { results } = await res.json();
License
MIT
File details
Details for the file mesh_context_layer-0.4.0.tar.gz.
File metadata
- Download URL: mesh_context_layer-0.4.0.tar.gz
- Upload date:
- Size: 78.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ee8a1d96a790ca1ff02d82b2ea6397ecb4eb356f29dc9cc6df654987eb4da4da |
| MD5 | a80b08bf7f0ede5dbea95f52761f0182 |
| BLAKE2b-256 | 6ad3c36e234b97b88a5a0fcf943aca8ad63dbe884cd45b6948a7dd2e24c7927a |
File details
Details for the file mesh_context_layer-0.4.0-py3-none-any.whl.
File metadata
- Download URL: mesh_context_layer-0.4.0-py3-none-any.whl
- Upload date:
- Size: 53.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 738d0e1a258875b6e267123d82ba33bf733e3c8cf86a24e6370000edfd1a458b |
| MD5 | 1617d96a4eb409dadd06ea193ec95ee6 |
| BLAKE2b-256 | 501b40c6d8374aa223ab278d7728d0d7242f6b88132c3da926f2660d31cfc1c9 |