
High-performance semantic memory system integrating LangMem with ProllyTree

Project description

Memoir


Git for AI Memory

Making AI memory as reliable and versioned as Git made code


Memoir brings Git-like version control to AI memory systems. Just as Git revolutionized software development by making code history transparent and reliable, Memoir transforms AI memory from unversioned, mutable storage into a versioned, auditable, and cryptographically secure system.

Why Memoir

Long-running AI agents like Claude Code, OpenClaw, and LangGraph-based systems need persistent memory. Current approaches rely on flat files (Memory.md, CLAUDE.md), rolling logs, or ad-hoc storage - fine for simple cases, but inadequate for production multi-agent systems where memory conflicts, state corruption, and debugging complexity become real problems.

Memoir brings engineering rigor to agent memory:

  • Version Control for Agent Memory: Branch experimental strategies, rollback bad states, merge successful approaches - the same workflow that made collaborative software development reliable
  • Semantic Paths over Flat Files: Replace unstructured Memory.md files with hierarchical paths like user.preferences.coding_style that agents can query precisely
  • Automatic Organization: LLM-powered classification so agents store memories without manual path management
  • Debuggable History: Time-travel queries let you understand why an agent behaved a certain way by viewing its memory at any point
  • Agent-Native Interfaces: CLI and SDK for agent integration, TUI and Web UI for human inspection, MCP server for any MCP-compatible client
  • KV-Cache Friendly: Structured, consistent memory format enables KV-cache aware prompting to reduce inference costs and latency
  • Multi-Agent Coordination: Shared memory with cryptographic integrity enables multiple agents to collaborate on the same knowledge base safely
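The semantic-path model above can be illustrated with a small sketch. This is a plain-Python toy, not Memoir's actual storage engine; the paths and helper are hypothetical, chosen only to show why dotted hierarchical paths beat a flat Memory.md file for precise queries:

```python
# Toy illustration of hierarchical semantic paths (not Memoir's real
# storage): memories live under dotted paths and can be queried by
# prefix, unlike a flat Memory.md file that must be read whole.

memories = {
    "user.preferences.coding_style": "prefers type hints and black formatting",
    "user.preferences.ui": "prefers dark mode",
    "project.build.tooling": "uses make for all workflows",
}

def query(prefix: str) -> dict:
    """Return every memory whose path starts with the given prefix."""
    return {p: v for p, v in memories.items() if p.startswith(prefix)}

# An agent asking about preferences gets exactly the relevant entries.
print(query("user.preferences"))
```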

Installation

From Source

git clone https://github.com/yourusername/memoir.git
cd memoir
python -m venv venv
source venv/bin/activate
pip install -e ".[dev]"

From PyPI

pip install memoir-ai

The distribution name on PyPI is memoir-ai (the memoir name was already taken). After install, the Python import is still import memoir and the CLI is still memoir.

Usage

Memoir provides multiple interfaces for different use cases.

Command Line Interface (CLI)

Direct commands for scripting and automation:

# Create a new memory store
memoir new /path/to/store

# Connect and check status
memoir status -s /path/to/store

# Store a memory
memoir remember "I prefer dark mode" -s /path/to/store

# Search memories
memoir recall "preferences" -s /path/to/store

# Branch operations
memoir branch                     # List branches
memoir branch experiment          # Create branch
memoir checkout experiment        # Switch branch
memoir commits                    # View history

Set MEMOIR_STORE environment variable to avoid passing -s each time:

export MEMOIR_STORE=/path/to/store
memoir status
memoir remember "User prefers Python over JavaScript"
memoir recall "programming"

Use --json flag for machine-readable output:

memoir status --json
memoir recall "preferences" --json

Agent Integration

Memoir is designed for AI agent integration. Use --machine-readable (or --json-schema) to get the full CLI schema as JSON:

memoir --machine-readable

This outputs structured JSON with all commands, arguments, options, and exit codes - enabling agents to programmatically understand the CLI without parsing help text:

{
  "name": "memoir",
  "version": "0.1.0",
  "exit_codes": {"0": "success", "1": "error", "2": "not_found", "3": "no_store", "5": "git_failed"},
  "env_vars": {"MEMOIR_STORE": "Default store path", "MEMOIR_JSON": "Always output JSON"},
  "commands": {
    "memory": [{"name": "remember", "arguments": [...], "options": [...]}],
    "branch": [{"name": "checkout", "options": [{"flags": ["--create-if-missing"]}]}]
  }
}
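An agent can load that schema once and drive the CLI from it. The sketch below parses a trimmed copy of the example schema above; any field contents not shown in the example (such as the elided arguments and options) are left empty rather than guessed:

```python
import json

# Minimal sketch of consuming the machine-readable schema. The string
# below is a trimmed copy of the documented example; elided fields are
# left as empty lists rather than invented.
schema_json = '''
{
  "name": "memoir",
  "version": "0.1.0",
  "exit_codes": {"0": "success", "1": "error", "2": "not_found",
                 "3": "no_store", "5": "git_failed"},
  "env_vars": {"MEMOIR_STORE": "Default store path",
               "MEMOIR_JSON": "Always output JSON"},
  "commands": {
    "memory": [{"name": "remember", "arguments": [], "options": []}],
    "branch": [{"name": "checkout",
                "options": [{"flags": ["--create-if-missing"]}]}]
  }
}
'''

schema = json.loads(schema_json)

# Flatten the command groups into one list of command names.
command_names = [cmd["name"]
                 for group in schema["commands"].values()
                 for cmd in group]
print(command_names)              # ['remember', 'checkout']
print(schema["exit_codes"]["3"])  # no_store
```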

Recommended agent setup:

# Set environment for JSON output
export MEMOIR_STORE=/path/to/store
export MEMOIR_JSON=1

# Quick workflow
memoir remember "learned fact"       # Returns JSON with key, confidence
memoir recall "query" --limit 5      # Returns JSON with memories array
memoir checkout context-branch --create-if-missing  # Auto-create context branches

Exit codes enable reliable error handling: 0 success, 1 error, 2 not found, 3 no store configured, 5 git operation failed.
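Those documented codes map naturally onto structured error handling. In this sketch the exit-code table comes straight from the docs above, while the subprocess wrapper itself is a hypothetical convenience (it assumes memoir is on PATH):

```python
import subprocess

# Exit-code table as documented by the memoir CLI.
EXIT_CODES = {0: "success", 1: "error", 2: "not_found",
              3: "no_store", 5: "git_failed"}

def classify_exit(code: int) -> str:
    """Map a memoir exit code to its documented meaning."""
    return EXIT_CODES.get(code, "unknown")

def run_memoir(*args: str) -> tuple[str, str]:
    """Hypothetical wrapper: run the memoir CLI and return
    (status, stdout). Assumes memoir is on PATH and MEMOIR_JSON=1."""
    proc = subprocess.run(["memoir", *args], capture_output=True, text=True)
    return classify_exit(proc.returncode), proc.stdout

print(classify_exit(3))  # no_store
```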

Web UI

Browser-based interface with visualization:

python -m memoir.ui.server

Open http://localhost:8080 in your browser. Use /demo command to explore with sample data.

Python SDK

For integration into Python applications:

from memoir.sdk import MemoryClient

async def main():
    client = MemoryClient("/path/to/store")

    # Store memory
    result = await client.remember("User prefers dark mode")
    print(f"Stored at: {result.key}")

    # Search memories
    results = await client.recall("preferences", limit=10)
    for mem in results.memories:
        print(f"{mem['path']}: {mem['content']}")

    # Branch operations
    client.branch.create("experiment")
    client.branch.checkout("experiment")
    branches = client.branch.list()

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Synchronous API is also available:

client = MemoryClient("/path/to/store")
result = client.remember_sync("User prefers dark mode")
results = client.recall_sync("preferences")

MCP Server

For integration with MCP-compatible clients:

export MEMOIR_STORE=/path/to/store
memoir-mcp

Add to your MCP client configuration to enable memoir tools.

Development

# Setup
make setup

# Run tests
make test

# Lint and format
make lint
make format

# Run all checks
make ci

Benchmarks

Benchmark the classifier and search performance with different LLM providers:

# Using OpenAI (default)
export OPENAI_API_KEY=your-key
python benchmarks/classifier.py

# Using Anthropic Claude
export ANTHROPIC_API_KEY=your-key
python benchmarks/classifier.py --model claude-haiku-4-5

# Using Google Gemini
export GEMINI_API_KEY=your-key
python benchmarks/classifier.py --model gemini/gemini-1.5-flash

# Using Ollama (local, free)
python benchmarks/classifier.py --model ollama/llama3.2

# Run specific tests
python benchmarks/classifier.py --skip-recall        # Only remember benchmarks
python benchmarks/classifier.py --num-cases 10      # Limit test cases
python benchmarks/classifier.py --verbose           # Detailed output

See all options with python benchmarks/classifier.py --help or make benchmark.

Architecture

  • ProllyTreeStore: Git-like versioned storage with cryptographic integrity
  • IntelligentClassifier: LLM-powered classification with 3-level taxonomy paths
  • IntelligentSearchEngine: Single-stage LLM search with prompt caching support
  • Services Layer: Shared business logic for all interfaces
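The cryptographic integrity that ProllyTreeStore provides rests on content-addressed hashing. The sketch below is only a conceptual illustration of that idea using a flat SHA-256 over sorted key-value pairs, not Memoir's actual ProllyTree layout: the root hash commits to every stored memory, so any change is detectable.

```python
import hashlib

# Conceptual sketch of content-addressed integrity (not Memoir's actual
# ProllyTree): a single root hash commits to every key-value pair, so
# any edit to stored memory changes the root.

def root_hash(memories: dict[str, str]) -> str:
    h = hashlib.sha256()
    for path in sorted(memories):          # deterministic ordering
        h.update(path.encode())
        h.update(b"\x00")                  # separator to avoid ambiguity
        h.update(memories[path].encode())
        h.update(b"\x00")
    return h.hexdigest()

before = root_hash({"user.preferences.ui": "dark mode"})
after = root_hash({"user.preferences.ui": "light mode"})
print(before != after)  # True: any edit changes the root hash
```

A real Prolly tree splits entries into content-defined pages and hashes them Merkle-style, which additionally makes diffs and merges between branches efficient; the flat hash above captures only the integrity property.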

License

Apache License 2.0 - see LICENSE file.

Project details


Download files

Download the file for your platform.

Source Distribution

memoir_ai-0.1.3.tar.gz (487.2 kB)


Built Distribution


memoir_ai-0.1.3-py3-none-any.whl (517.3 kB)


File details

Details for the file memoir_ai-0.1.3.tar.gz.

File metadata

  • Download URL: memoir_ai-0.1.3.tar.gz
  • Size: 487.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for memoir_ai-0.1.3.tar.gz:

  • SHA256: 68e4756b6fb0863f92c36cef051857da600d69e44b700a58e0bbc984beb56c03
  • MD5: 9e33c0a69c330cd8df5367c3d187103a
  • BLAKE2b-256: 44675993a8c0c34586e47d0dcf254a734e410982ab3ef302be8a456d1b15247b


Provenance

The following attestation bundles were made for memoir_ai-0.1.3.tar.gz:

Publisher: release.yml on zhangfengcdt/memoir

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file memoir_ai-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: memoir_ai-0.1.3-py3-none-any.whl
  • Size: 517.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for memoir_ai-0.1.3-py3-none-any.whl:

  • SHA256: b3175fb2bc0be12eb8e4784e392000c1953b85f99fb386b66f8f7efe14f5a92a
  • MD5: b30b2cfd638ed985a8fd3b1ffbcd6fcd
  • BLAKE2b-256: dade16a7ee1c34a4dd92eb9ab988368985f2ea426fe9af0a7ec00e658001e3de


Provenance

The following attestation bundles were made for memoir_ai-0.1.3-py3-none-any.whl:

Publisher: release.yml on zhangfengcdt/memoir

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
