
Memory OS AI

Adaptive memory system for AI agents — universal MCP server for Claude Code, Codex CLI, VS Code Copilot, ChatGPT, and any MCP-compatible client.


Concept

Memory OS AI transforms your local documents (PDF, DOCX, images, audio) into a semantic memory that any AI model can query through the Model Context Protocol (MCP).

┌──────────────────────────────────┐
│  AI Client (any MCP-compatible)  │
│  Claude Code / Codex / Copilot   │
│  ChatGPT / custom agents         │
├──────────────────────────────────┤
│         MCP Protocol             │
│   stdio / SSE / Streamable HTTP  │
├──────────────────────────────────┤
│      Memory OS AI Server         │
│  ┌────────┐  ┌───────────────┐   │
│  │ FAISS  │  │ Chat Extractor│   │
│  │ Index  │  │ (4 sources)   │   │
│  └────────┘  └───────────────┘   │
│  ┌────────────────────────────┐  │
│  │ Cross-Project Linking      │  │
│  └────────────────────────────┘  │
└──────────────────────────────────┘

Features

  • 21 MCP tools for memory management, search, chat persistence, project linking, and cloud storage
  • Semantic search with FAISS + SentenceTransformers (all-MiniLM-L6-v2)
  • Multi-format ingestion: PDF, DOCX, TXT, images (OCR), audio (Whisper), PPTX
  • Chat extraction: auto-detects Claude, ChatGPT, Copilot, and terminal history
  • Cross-project linking: share memory across multiple workspaces
  • Cloud storage overflow: auto-backup to Google Drive, iCloud, Dropbox, OneDrive, S3, Azure, Box, B2
  • 3 transports: stdio (default), SSE (--sse), Streamable HTTP (--http)
  • MCP Resources: memory://documents/*, memory://logs/conversation, memory://linked/*
  • Local-first: all data on your machine by default, cloud only when disk runs low
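The semantic search in the feature list boils down to embedding text as vectors and ranking by similarity. A minimal, self-contained sketch of that idea (toy hand-written vectors stand in for real all-MiniLM-L6-v2 embeddings; none of these names are the project's actual API):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" standing in for SentenceTransformer output.
docs = {
    "invoice.pdf":   [0.9, 0.1, 0.0],
    "meeting.txt":   [0.1, 0.8, 0.3],
    "photo_ocr.png": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # embedding of, say, "billing documents"

# Rank documents by similarity to the query, best match first.
ranked = sorted(docs, key=lambda d: cosine(docs[d], query), reverse=True)
print(ranked[0])
```

In the real engine, FAISS replaces the linear scan so lookups stay fast as the index grows.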

21 MCP Tools

Tool Description
memory_ingest Index a folder of documents into FAISS
memory_search Semantic search across all indexed content
memory_search_occurrences Count keyword occurrences across documents
memory_get_context Get relevant context for the current task
memory_list_documents List all indexed documents with stats
memory_transcribe Transcribe audio files (Whisper)
memory_status Engine status (index size, model, device)
memory_compact Compact/deduplicate the FAISS index
memory_chat_sync Sync messages from configured chat sources
memory_chat_source_add Add a chat source (Claude, ChatGPT, etc.)
memory_chat_source_remove Remove a chat source
memory_chat_status Status of all chat sources
memory_chat_auto_detect Auto-detect chat workspaces on disk
memory_session_brief Full memory briefing for session start
memory_chat_save Persist conversation messages to memory
memory_project_link Link another project's memory
memory_project_unlink Unlink a project
memory_project_list List all linked projects
memory_cloud_configure Configure cloud storage backend for overflow
memory_cloud_status Show local disk + cloud storage status
memory_cloud_sync Push/pull/auto-sync between local and cloud
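Under the hood, an MCP client invokes these tools as JSON-RPC 2.0 tools/call requests. A sketch of one such request for memory_search (the "query" argument name is an assumption for illustration; a client would discover the real parameter names from the tool schema via tools/list):

```python
import json

# A JSON-RPC 2.0 request as an MCP client would send it over stdio,
# one JSON object per line. The "query" argument name is illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "memory_search",
        "arguments": {"query": "quarterly invoice totals"},
    },
}
line = json.dumps(request)
print(line)
```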

Quick Start

Prerequisites

  • Python 3.10+
  • Optional: tesseract (OCR), ffmpeg (audio), antiword (legacy .doc)
# macOS
brew install tesseract ffmpeg antiword

# Ubuntu/Debian
sudo apt-get install tesseract-ocr ffmpeg antiword

Install

git clone https://github.com/romainsantoli-web/Memory-os-ai.git
cd Memory-os-ai
pip install -e ".[dev,audio]"

Auto-Setup (recommended)

# Setup for your AI client:
memory-os-ai setup claude-code    # Claude Code
memory-os-ai setup codex          # Codex CLI
memory-os-ai setup vscode         # VS Code Copilot
memory-os-ai setup claude-desktop # Claude Desktop
memory-os-ai setup chatgpt        # ChatGPT (manual bridge)
memory-os-ai setup all            # All of the above

# Check status:
memory-os-ai setup status
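As a rough sketch of what a setup step produces, an mcp.json entry registering the server for VS Code might look like the following (the exact keys and the entry name are assumptions about VS Code's MCP config format; memory-os-ai setup vscode writes the real file):

```python
import json

# Hypothetical mcp.json content for a stdio MCP server entry.
# Key names follow VS Code's MCP config format as an assumption.
config = {
    "servers": {
        "memory-os-ai": {
            "type": "stdio",
            "command": "memory-os-ai",
        }
    }
}
print(json.dumps(config, indent=2))
```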

Manual Start

# stdio (default — Claude Code, VS Code, Codex)
memory-os-ai

# SSE transport (port 8765)
memory-os-ai --sse

# Streamable HTTP (port 8765)
memory-os-ai --http

Project Structure

Memory-os-ai/
├── src/memory_os_ai/
│   ├── __init__.py          # Public API: MemoryEngine, ChatExtractor, TOOL_MODELS
│   ├── __main__.py          # python -m memory_os_ai entry point
│   ├── server.py            # MCP server — 21 tools, 3 transports, resources
│   ├── engine.py            # FAISS engine — indexing, search, compact, session brief
│   ├── cloud_storage.py     # 8 cloud backends (GDrive, iCloud, Dropbox, OneDrive, S3, Azure, Box, B2)
│   ├── storage_router.py    # Smart routing: local-first with cloud overflow
│   ├── models.py            # 21 Pydantic models + TOOL_MODELS registry
│   ├── chat_extractor.py    # 4 extractors: Claude, ChatGPT, Copilot, terminal
│   ├── instructions.py      # MEMORY_INSTRUCTIONS for AI clients
│   └── setup.py             # Auto-setup CLI for 5 AI clients
├── bridges/
│   ├── claude-code/         # CLAUDE.md with memory rules
│   ├── claude-desktop/      # config.json for Claude Desktop
│   ├── codex/               # AGENTS.md for Codex CLI
│   ├── vscode/              # mcp.json for VS Code
│   └── chatgpt/             # mcp-connection.json for ChatGPT
├── tests/                   # 410+ tests — 96% coverage
│   ├── test_memory.py       # Engine + models (60 tests)
│   ├── test_chat_extractor.py  # Chat extraction (39 tests)
│   ├── test_bridges.py      # Bridge configs (22 tests)
│   ├── test_gaps.py         # Compact, cross-project, resources (34 tests)
│   ├── test_server_dispatch.py # Server dispatch + async (61 tests)
│   ├── test_setup.py        # Setup CLI targets
│   ├── test_z_coverage_boost.py # Coverage boost (35 tests)
│   └── test_zz_full_coverage.py # Full coverage (97 tests)
├── pyproject.toml           # v3.1.0 — deps, scripts, coverage config + cloud optional deps
├── Dockerfile               # Container deployment
└── README.md

Cloud Storage (v3.1.0)

When local disk runs low (< 500 MB free by default), memory data automatically overflows to a configured cloud backend.
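The trigger for overflow can be sketched with the standard library (the function below is illustrative, not the project's storage router; the 524288000-byte default matches the MEMORY_DISK_THRESHOLD setting in the configuration table):

```python
import shutil

# Default threshold: 500 MB, i.e. 524288000 bytes.
THRESHOLD_BYTES = 524_288_000

def should_overflow(path: str = "/", threshold: int = THRESHOLD_BYTES) -> bool:
    # True when free space on the filesystem holding `path` drops
    # below the threshold, i.e. when data would be pushed to cloud.
    free = shutil.disk_usage(path).free
    return free < threshold

print(should_overflow())
```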

Supported Providers

  • Google Drive: pip install memory-os-ai[cloud-gdrive]; credentials: credentials_json or token_json + folder_id
  • iCloud Drive: macOS native, no extra deps; credentials: container name (default: memory-os-ai)
  • Dropbox: pip install memory-os-ai[cloud-dropbox]; credentials: access_token + folder
  • OneDrive: no extra install (auto-detects the local mount, or uses the Graph API); credentials: mount_path or access_token
  • Amazon S3: pip install memory-os-ai[cloud-s3]; credentials: bucket, aws_access_key_id, aws_secret_access_key
  • Azure Blob: pip install memory-os-ai[cloud-azure]; credentials: connection_string + container
  • Box: pip install memory-os-ai[cloud-box]; credentials: access_token + folder_id
  • Backblaze B2: pip install memory-os-ai[cloud-b2]; credentials: application_key_id, application_key, bucket_name
  • All providers: pip install memory-os-ai[cloud-all]

Usage

# Configure via environment (auto-activates on server start)
export MEMORY_CLOUD_PROVIDER=icloud
export MEMORY_CLOUD_CONFIG='{"container": "memory-os-ai"}'
memory-os-ai

# Or configure at runtime via MCP tool:
#   memory_cloud_configure(provider="s3", credentials={"bucket": "my-bucket", ...})
#   memory_cloud_status()       → local disk + cloud usage
#   memory_cloud_sync("push")   → backup to cloud
#   memory_cloud_sync("pull")   → restore from cloud
#   memory_cloud_sync("auto")   → offload if disk low

Configuration

Environment Variables

Variable Default Description
MEMORY_CACHE_DIR ~/.memory-os-ai Cache / FAISS index directory
MEMORY_MODEL all-MiniLM-L6-v2 SentenceTransformer model name
MEMORY_API_KEY (none) Optional API key for SSE/HTTP auth
MEMORY_CLOUD_PROVIDER (none) Cloud provider name (see table above)
MEMORY_CLOUD_CONFIG (none) JSON credentials or path to JSON file
MEMORY_DISK_THRESHOLD 524288000 Bytes free before cloud overflow (500 MB)
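How a server might resolve these variables, as an illustrative sketch (only the variable names and defaults come from the table above; the fallback logic is an assumption):

```python
import json
import os

# Resolve settings with the documented defaults as fallbacks.
cache_dir = os.environ.get("MEMORY_CACHE_DIR",
                           os.path.expanduser("~/.memory-os-ai"))
model = os.environ.get("MEMORY_MODEL", "all-MiniLM-L6-v2")
threshold = int(os.environ.get("MEMORY_DISK_THRESHOLD", "524288000"))

# MEMORY_CLOUD_CONFIG may hold inline JSON or a path to a JSON file.
raw = os.environ.get("MEMORY_CLOUD_CONFIG")
cloud_config = None
if raw:
    if os.path.isfile(raw):
        with open(raw) as fh:
            cloud_config = json.load(fh)
    else:
        cloud_config = json.loads(raw)

print(model, threshold)
```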

Development

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
python -m pytest tests/ -v

# Run with coverage
python -m pytest tests/ --cov=memory_os_ai --cov-report=term-missing

# Coverage threshold: 80% (enforced in pyproject.toml)

License

GNU Lesser General Public License v3.0 (LGPL-3.0). See LICENSE for details.

For commercial licensing, contact romainsantoli@gmail.com.

Part of the OpenClaw Ecosystem

Memory OS AI is designed to work alongside the OpenClaw agent infrastructure:

Repo Description
setup-vs-agent-firm Factory for AI agent firms — 28 SKILL.md, 5 SOUL.md, 15 sectors
mcp-openclaw-extensions 115 MCP tools — security audit, A2A bridge, fleet management
Memory OS AI (this repo) Semantic memory + chat persistence — universal MCP bridge

Together they form a complete stack: memory (this repo) → skills & souls (setup-vs-agent-firm) → security & orchestration (mcp-openclaw-extensions).

Contributing

Contributions welcome! See CONTRIBUTING.md for guidelines.


⚠️ AI-generated content — human review required before use.
