The SQLite of AI Memory - an embeddable, zero-dependency AI memory library for any LLM application

Project description

MemoryMesh - The SQLite of AI Memory


Give any LLM persistent memory in 3 lines of Python. Zero dependencies. Fully local.


The Problem

AI tools start every session with amnesia. Your preferences, decisions, past mistakes -- all gone. You repeat yourself. The AI re-discovers things you already told it. Context windows reset, and weeks of accumulated knowledge vanish.

MemoryMesh fixes this. Install once, and your AI remembers everything -- across sessions, across tools, across projects.


Why MemoryMesh?

| Solution | Approach | Trade-off |
| --- | --- | --- |
| Mem0 | SaaS / managed service | Requires cloud account, data leaves your machine, ongoing costs |
| Letta / MemGPT | Full agent framework | Heavy framework lock-in, complex setup, opinionated architecture |
| Zep | Memory server | Requires PostgreSQL, Docker, server infrastructure |
| MemoryMesh | Embeddable library | Zero dependencies. Just SQLite. Works anywhere. |

Like SQLite revolutionized embedded databases, MemoryMesh brings the same philosophy to AI memory: simple, reliable, embeddable. No infrastructure. No lock-in. No surprises.


MCP Quick Start

MemoryMesh works as an MCP server with any compatible AI tool. Install it once, then add the config to your tool of choice:

pip install memorymesh

Claude Code (~/.claude/settings.json):

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp"
    }
  }
}

Cursor (.cursor/mcp.json):

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp"
    }
  }
}

Gemini CLI (~/.gemini/settings.json):

{
  "mcpServers": {
    "memorymesh": {
      "command": "memorymesh-mcp"
    }
  }
}

Your AI now has persistent memory across sessions. Preferences, decisions, and patterns survive context window resets.


Python Quick Start

from memorymesh import MemoryMesh

memory = MemoryMesh()                                  # opens (or creates) a local SQLite store
memory.remember("User prefers Python and dark mode")   # store a fact
results = memory.recall("What does the user prefer?")  # retrieve relevant memories

That is it. Three lines to give your AI application persistent, semantic memory.

# Works with any LLM -- inject recalled context into your prompts
context = memory.recall("What do I know about this user?")

# Claude
response = claude_client.messages.create(
    model="claude-sonnet-4-20250514",
    system=f"User context: {context}",
    messages=[{"role": "user", "content": "Help me design an API"}],
)

# GPT
response = openai_client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": f"User context: {context}"},
        {"role": "user", "content": "Help me design an API"},
    ],
)

# Or Ollama, Gemini, Mistral, Llama, or literally anything else
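The pattern is the same for every provider: recall context once, then prepend it to the conversation. A minimal provider-agnostic sketch (the `build_messages` helper below is illustrative, not part of MemoryMesh):

```python
# Hypothetical helper: inject recalled context into a chat-style payload.
# Works with any client that accepts an OpenAI-style message list.

def build_messages(context: str, user_prompt: str) -> list[dict]:
    """Prepend recalled memory as a system message."""
    return [
        {"role": "system", "content": f"User context: {context}"},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "Prefers Python and dark mode", "Help me design an API"
)
```

Pass `messages` straight to whichever chat API you use; only the client call changes, never the memory layer.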

How It Works

  1. Store -- After each interaction, remember() the key facts, decisions, and patterns (not the full conversation).
  2. Recall -- At the start of the next session, recall() retrieves only the most relevant memories ranked by semantic similarity, recency, and importance.
  3. Persist -- Memories live in SQLite on your machine. They survive session restarts, tool switches, and context window resets.
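Step 2's ranking can be pictured as a weighted blend of semantic similarity, time decay, and importance. The formula and weights below are illustrative assumptions, not MemoryMesh's actual internals:

```python
import math
import time

def relevance(similarity: float, stored_at: float, importance: float,
              now: float, half_life: float = 7 * 86400) -> float:
    """Illustrative score: similarity blended with exponential recency
    decay and an importance weight. Real MemoryMesh weights may differ."""
    recency = math.exp(-(now - stored_at) * math.log(2) / half_life)
    return 0.6 * similarity + 0.25 * recency + 0.15 * importance

now = time.time()
fresh = relevance(0.8, now, 0.5, now)                # stored just now
stale = relevance(0.8, now - 30 * 86400, 0.5, now)   # stored 30 days ago
assert fresh > stale  # recent memories rank higher, all else equal
```

The key property: two memories with identical similarity are separated by recency and importance, so `recall()` surfaces what matters now rather than everything that ever matched.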

The real value

  • Cross-session persistence -- Decisions made Monday are still known Friday.
  • Cross-tool memory -- What you teach Claude stays available in Gemini, Codex, and Cursor.
  • Structured recall -- Categories, importance scoring, time decay, and semantic search instead of brute-force history replay.
  • Privacy -- Everything local. No cloud, no telemetry, no data leaves your machine.

Installation

# Base installation (no external dependencies, uses built-in keyword matching)
pip install memorymesh

# With local embeddings (sentence-transformers, runs entirely on your machine)
pip install "memorymesh[local]"

# With Ollama embeddings (connect to a local Ollama instance)
pip install "memorymesh[ollama]"

# With OpenAI embeddings
pip install "memorymesh[openai]"

# Everything
pip install "memorymesh[all]"

Features

  • Simple API -- remember(), recall(), forget(). That is the core interface. No boilerplate, no configuration ceremony.
  • SQLite-Based -- All memory stored in SQLite files. No database servers, no infrastructure. Automatic schema migrations.
  • Framework-Agnostic -- Works with any LLM, any framework, any architecture. Use it with LangChain, LlamaIndex, raw API calls, or your own setup.
  • Pluggable Embeddings -- Choose from local models, Ollama, OpenAI, or plain keyword matching with zero dependencies.
  • MCP Support -- Built-in MCP server for seamless integration with Claude Code, Cursor, Gemini CLI, and other MCP-compatible tools.
  • Memory Categories -- Automatic categorization with scope routing. Preferences go global; decisions stay in the project. MemoryMesh decides where memories belong.
  • Encrypted Storage -- Optionally encrypt memory text and metadata at rest with zero external dependencies.
  • Privacy-First -- All data stays on your machine. No telemetry, no cloud calls, no data collection. You own your data.
  • Auto-Compaction -- Transparent deduplication that runs automatically during normal use. Like SQLite's auto-vacuum, you never need to think about it.
  • Cross-Platform -- Runs on Linux, macOS, and Windows. Anywhere Python runs, MemoryMesh runs.
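To make the core interface concrete, here is a toy in-memory stand-in -- not MemoryMesh itself, just the shape of the `remember()` / `recall()` / `forget()` contract, with naive keyword matching instead of MemoryMesh's semantic ranking and SQLite persistence:

```python
# Toy illustration of the remember()/recall()/forget() interface shape.
class ToyMemory:
    def __init__(self):
        self._items: list[str] = []

    def remember(self, text: str) -> None:
        self._items.append(text)

    def recall(self, query: str) -> list[str]:
        # Naive keyword overlap; MemoryMesh ranks by semantic similarity,
        # recency, and importance instead.
        words = set(query.lower().split())
        return [t for t in self._items if words & set(t.lower().split())]

    def forget(self, text: str) -> None:
        self._items = [t for t in self._items if t != text]

mem = ToyMemory()
mem.remember("User prefers dark mode")
hits = mem.recall("dark mode preferences")
```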

What's New in v4

  • Smart Sync -- Export the top-N most relevant memories to .md files, ranked by importance and recency. No more full dumps -- only what matters.
  • Configurable Relevance Weights -- Tune recency, importance, and similarity weights via environment variables or constructor parameters.
  • EncryptedStore Completeness -- EncryptedMemoryStore now supports search_filtered and update_fields, matching the full MemoryStore interface.
  • Security Hardening -- SQL injection fix in search_filtered (strict allowlist for metadata keys) and explicit file permissions on database files.
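Weight tuning via environment variables might look like the sketch below. The variable names are hypothetical -- consult the Configuration guide for the actual names and accepted ranges:

```shell
# Hypothetical environment variables (illustrative only; see the
# Configuration docs for the real names).
export MEMORYMESH_WEIGHT_SIMILARITY=0.6
export MEMORYMESH_WEIGHT_RECENCY=0.25
export MEMORYMESH_WEIGHT_IMPORTANCE=0.15
```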

Roadmap

v4.0 -- Invisible Memory is the current release. The goal: make MemoryMesh truly invisible. AI should not need to "use" it -- it should just work. Smart Sync, auto-remember hooks, leaner MCP tools, and task-aware context injection.

v5.0 -- Adaptive Memory is next. Heuristic-based question frequency tracking, behavioral pattern detection, and multi-device sync via Syncthing/rsync. Lightweight and local -- no LLM-based anticipation, no cloud sync.

See the full roadmap for details, strategic context, and completed milestones.


Documentation

Full documentation: sparkvibe-io.github.io/memorymesh

| Guide | Description |
| --- | --- |
| Configuration | Embedding providers, Ollama setup, all constructor options |
| MCP Server | Setup for Claude Code, Cursor, Windsurf + teaching your AI to use memory |
| Multi-Tool Sync | Sync memories across Claude, Codex, and Gemini CLI |
| CLI Reference | Terminal commands for inspecting and managing memories |
| API Reference | Full Python API with all methods and parameters |
| Architecture | System design, dual-store pattern, and schema migrations |
| FAQ | Common questions answered |
| Benchmarks | Performance numbers and how to run benchmarks |

Contributing

We welcome contributions from everyone. See CONTRIBUTING.md for guidelines on how to get started.


License

MIT License. See LICENSE for the full text.


Free. Forever. For Everyone.

MemoryMesh is part of the SparkVibe open-source AI initiative. We believe that foundational AI tools should be free, open, and accessible to everyone -- not locked behind paywalls, cloud subscriptions, or proprietary platforms.

Our mission is to reduce the cost and complexity of building AI applications, so that developers everywhere -- whether at a startup, a research lab, a nonprofit, or learning on their own -- can build intelligent systems without barriers.

If AI is going to shape the future, the tools that power it should belong to all of us.

Project details


Download files

Source Distribution

memorymesh-4.0.0.tar.gz (231.1 kB)

Built Distribution

memorymesh-4.0.0-py3-none-any.whl (113.3 kB)

File details

memorymesh-4.0.0.tar.gz

  • Size: 231.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.2
  • SHA256: bfc4a9067baa2dcf4e74f8804e138d00369e757210e4ede21a9d4a042e1ee9e2
  • MD5: d24a2400c639200c037b5cdd3520684c
  • BLAKE2b-256: 5fcda1e60ffb1b7cb27b0fcb6058619287cb51b57d8eb856c211bf0a55fd7815

memorymesh-4.0.0-py3-none-any.whl

  • Size: 113.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.2
  • SHA256: 3131e999798501a018db93bb777ba529070b4e248f4d6b949efae2a6b0a7c717
  • MD5: a14def77bf73ba08a57beff79ee53671
  • BLAKE2b-256: 159f058673ee449cdabaa6e62efcae464a8088c74d3c3d3b99747b3ec205fb75
