
Mesh

Shared memory for AI agents: an MCP server and a Python library.

When one agent learns something, every other agent can recall it — by meaning, not exact keywords.

The problem

Every AI agent starts each session with amnesia. Claude in your terminal doesn't know what Claude in your editor discovered. Your coding agent can't access what your review agent found. Every agent repeats work already done.

The solution

Mesh is a shared semantic memory layer. Agents write memories. Agents read memories. They never need to communicate directly.

from mesh import Mesh

# Agent A learns something
agent_a = Mesh(namespace="my-project", agent_id="coding-agent")
agent_a.learn("The API rate limit is 100 requests/min")

# Agent B recalls it — no direct communication
agent_b = Mesh(namespace="my-project", agent_id="review-agent")
results = agent_b.recall("what are the API constraints?")
# → Returns the memory above with 0.91 similarity
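Mesh reports a similarity score alongside each recalled memory. Its internals aren't shown here, but conceptually semantic recall embeds the query and ranks stored memories by cosine similarity between vectors. A toy sketch with made-up, low-dimensional vectors (real embeddings have hundreds of dimensions):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embedding vectors for a stored memory and a query
memory_vec = [0.8, 0.1, 0.55]   # "The API rate limit is 100 requests/min"
query_vec = [0.75, 0.2, 0.60]   # "what are the API constraints?"

print(round(cosine(memory_vec, query_vec), 2))
```

A score near 1.0 means the query and the memory point in nearly the same semantic direction, which is why a paraphrase can retrieve a memory that shares no keywords with it.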

Installation

pip install mesh-memory

MCP Server (Claude Desktop + Cursor)

Add Mesh to your AI tools with one config change.

Claude Desktop — edit ~/Library/Application Support/Claude/claude_desktop_config.json (this is the macOS path):

{
  "mcpServers": {
    "mesh": {
      "command": "mesh-server"
    }
  }
}

Cursor — create .cursor/mcp.json in your project:

{
  "mcpServers": {
    "mesh": {
      "command": "mesh-server"
    }
  }
}

Restart the app. Your agents now have access to mesh_learn, mesh_recall, mesh_forget, and mesh_inspect.

Namespaces

Agents sharing a namespace share memory. Agents in different namespaces are isolated.

# Team A's agents
Mesh(namespace="team-alpha")

# Team B's agents — cannot see Team A's memories
Mesh(namespace="team-beta")

# Set namespace for the MCP server via env var
# MESH_NAMESPACE=my-project mesh-server
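Mesh's actual resolution logic isn't documented here; as a sketch, a server process reading its namespace from the environment with a fallback might look like this (the default name "default" is an assumption):

```python
import os

def resolve_namespace(environ=os.environ):
    """Resolve the active namespace from MESH_NAMESPACE, else a default."""
    return environ.get("MESH_NAMESPACE", "default")

print(resolve_namespace({"MESH_NAMESPACE": "my-project"}))
```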

Memory types

Type         When to use
fact         Objective information
preference   User or system preferences
context      Situational background
result       Outcomes of tasks
instruction  Rules and guidelines
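The type travels with the memory when it is stored. As an illustration, an HTTP /learn request body tagged as a fact (field names as used in the HTTP examples later in this document):

```json
{
  "content": "The API rate limit is 100 requests/min",
  "memory_type": "fact"
}
```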

Storage

Memory lives in ~/.mesh/ on your machine. It persists across sessions and reboots. Nothing leaves your machine.
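Mesh's on-disk format isn't documented here, but the idea is ordinary local files: append on learn, read back on recall, nothing over the network. A toy sketch of that persistence pattern (hypothetical JSONL layout, not Mesh's actual format, written to a temp dir rather than ~/.mesh/):

```python
import json
import tempfile
from pathlib import Path

# Toy persistent store: one JSON object per line (JSONL)
store = Path(tempfile.mkdtemp()) / "memories.jsonl"

def learn(content):
    with store.open("a") as f:
        f.write(json.dumps({"content": content}) + "\n")

def recall_all():
    if not store.exists():
        return []
    with store.open() as f:
        return [json.loads(line) for line in f]

learn("The API rate limit is 100 requests/min")
learn("deploy uses ./scripts/deploy.sh")
print(len(recall_all()))
```

Because the store is just a file, it survives process exits and reboots for free.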

Features

v0.3 — Privacy, Portability, and Namespace Control

Privacy flags

Mark any memory as local-only to ensure it never leaves your machine:

mesh.learn(
    "Internal staging server is under heavy load",
    local_only=True
)

Export and import

# Export all non-private memories from a namespace
mesh-export backup.json --namespace shared

# Import into a new machine or namespace
mesh-import backup.json --namespace shared

# Preview without writing
mesh-import backup.json --dry-run
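Export honors the privacy flag: local-only memories stay behind. A minimal sketch of that filtering rule (toy data, not Mesh's implementation):

```python
memories = [
    {"content": "deploy uses ./scripts/deploy.sh", "local_only": False},
    {"content": "Internal staging server is under heavy load", "local_only": True},
]

# Only non-private memories are eligible for export
exportable = [m for m in memories if not m.get("local_only")]
print([m["content"] for m in exportable])
```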

Namespace management

# List all namespaces
mesh-namespaces list

# Get detailed stats
mesh-namespaces stats work

# Delete a namespace
mesh-namespaces delete old-project

# Rename a namespace
mesh-namespaces rename work work-archived

Using Mesh from any tool

Start the HTTP server:

mesh-http

Then from any tool that supports HTTP:

Shell:

# Store a memory
curl -s -X POST http://localhost:7701/learn \
  -H "Content-Type: application/json" \
  -d '{"content": "deploy uses ./scripts/deploy.sh", "memory_type": "process"}'

# Recall memories
curl -s -X POST http://localhost:7701/recall \
  -H "Content-Type: application/json" \
  -d '{"query": "how do we deploy?", "count": 3}'

# Get formatted context for a system prompt
curl -s -X POST http://localhost:7701/context \
  -H "Content-Type: application/json" \
  -d '{"count": 10, "format": "markdown"}'

Python (no SDK needed):

import httpx  # third-party HTTP client: pip install httpx

# Recall memories by meaning
r = httpx.post("http://localhost:7701/recall", json={"query": "deploy process"})
memories = r.json()["results"]

Node.js:

const res = await fetch("http://localhost:7701/recall", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "deploy process", count: 5 })
});
const { results } = await res.json();

License

MIT
