# memento-ai

Your codebase, remembered. AI-powered living memory for git projects.
memento watches your git commits and maintains a living memory of your project — readable by both humans and AI.
## How it works

```
git commit → memento analyzes the diff → updates markdown memory files
```

Memory files live in `.memento/memory/` — plain markdown, diffable, committable to your repo.
## Quick start

```bash
pip install memento-ai
cd your-project
memento init
memento process --all
```

That's it. Your project now has memory.
## Ask questions

```bash
memento ask "what does the auth system do?"
memento ask "what changed in the last few commits?"
```
## Features
- Incremental — only processes new commits
- Offline-first — works with Ollama or embedded local models (zero API cost)
- Multi-provider — OpenAI, Anthropic, Claude CLI, Ollama, local inference
- Human-readable — memory is plain markdown
- Git-native — auto-processes via post-commit hook
- Private — your code never leaves your machine (with local providers)
## LLM Providers

| Provider | Setup | Cost |
|---|---|---|
| Claude CLI | `pip install memento-ai` + Claude Code installed | Free (Max/Pro plan) |
| Ollama | `ollama pull qwen2.5-coder:7b` | Free |
| Local embedded | `pip install memento-ai[local]` | Free |
| OpenAI | Set `OPENAI_API_KEY` | Pay per token |
| Anthropic | Set `ANTHROPIC_API_KEY` | Pay per token |
## Using Ollama (recommended for privacy)

```bash
ollama pull qwen2.5-coder:7b
```

Edit `.memento/config.toml`:

```toml
[llm]
provider = "openai"  # Ollama exposes an OpenAI-compatible API
model = "qwen2.5-coder:7b"
base_url = "http://localhost:11434/v1"
```
## Using embedded local model (zero setup)

```bash
pip install memento-ai[local]
```

Edit `.memento/config.toml`:

```toml
[llm]
provider = "local"
# Auto-downloads Qwen2.5-Coder-3B (~2GB) on first run
```
## Using Claude CLI (free with Claude Max/Pro)

```toml
[llm]
provider = "claude-cli"
```

Requires Claude Code installed and logged in.
## Commands

| Command | Description |
|---|---|
| `memento init` | Initialize `.memento/`, install post-commit hook |
| `memento process` | Process new commits since last run |
| `memento process --all` | Process entire git history |
| `memento ask "question"` | Ask about your project |
| `memento status` | Show status: modules, commits processed |
| `memento forget` | Clear all memory, start fresh |
| `memento serve` | Start MCP server (stdio) |
| `memento export --format FMT` | Export memory (claude, cursor, copilot) |
## Configuration

`.memento/config.toml`:

```toml
[llm]
provider = "claude-cli"   # openai, anthropic, claude-cli, local
model = "gpt-4o-mini"     # model name (provider-specific)
base_url = "https://..."  # API base URL (openai provider)
temperature = 0.3
max_tokens = 2048

[processing]
chunk_size = 4000         # max diff size before chunking
summary_every = 10        # regenerate summary every N commits
ignore_patterns = ["*.lock", "dist/*"]

[memory]
dir = "memory"            # subdirectory for memory files
max_module_size = 5000    # max lines per module
```
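To make the `[processing]` options concrete, here is a rough sketch of how `ignore_patterns` and `chunk_size` could be applied — glob-style path matching plus fixed-size chunking. This illustrates the semantics only; it is not memento's actual code:

```python
from fnmatch import fnmatch

IGNORE_PATTERNS = ["*.lock", "dist/*"]  # from [processing] above
CHUNK_SIZE = 4000                       # max diff size before chunking

def is_ignored(path: str, patterns=IGNORE_PATTERNS) -> bool:
    """True if the file path matches any glob-style ignore pattern."""
    return any(fnmatch(path, p) for p in patterns)

def chunk_diff(diff: str, size: int = CHUNK_SIZE) -> list[str]:
    """Split an oversized diff into fixed-size pieces for the LLM."""
    return [diff[i:i + size] for i in range(0, len(diff), size)]

print(is_ignored("poetry.lock"))    # → True
print(is_ignored("src/app.py"))     # → False
print(len(chunk_diff("x" * 9000)))  # → 3
```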
## Memory structure

```
.memento/
├── config.toml          # your configuration
├── state.json           # processing state (gitignored)
└── memory/
    ├── SUMMARY.md       # auto-generated project overview
    ├── api-endpoints.md # module: API routes and patterns
    ├── auth-system.md   # module: authentication logic
    └── database.md      # module: schema and queries
```
Modules are created and maintained automatically based on what the LLM finds in your commits.
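As a concrete illustration of the layout above, this snippet builds the same structure in a temporary directory and lists the modules. The `last_commit` key in `state.json` is a made-up example — the real state format is not documented here:

```python
import json
import tempfile
from pathlib import Path

# Recreate the .memento/ layout shown above in a throwaway directory.
root = Path(tempfile.mkdtemp()) / ".memento"
(root / "memory").mkdir(parents=True)
(root / "state.json").write_text(json.dumps({"last_commit": "abc123"}))
for name in ["SUMMARY.md", "auth-system.md", "database.md"]:
    (root / "memory" / name).write_text(f"# {name}\n")

# List memory modules the way a tool might: every .md except the summary.
state = json.loads((root / "state.json").read_text())
modules = sorted(p.stem for p in (root / "memory").glob("*.md") if p.stem != "SUMMARY")
print(state["last_commit"])  # → abc123
print(modules)               # → ['auth-system', 'database']
```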
## MCP Server

memento exposes project memory via the Model Context Protocol (MCP), so any MCP-compatible AI tool can access it automatically.

```bash
pip install "memento-ai[mcp]"
```
### Claude Code

Add to `~/.claude/mcp.json`:

```json
{
  "mcpServers": {
    "memento": {
      "command": "memento-mcp"
    }
  }
}
```
### Cursor

Add via Settings → MCP Servers:

```json
{
  "mcpServers": {
    "memento": {
      "command": "memento-mcp"
    }
  }
}
```
### Available MCP tools

| Tool | Description |
|---|---|
| `memento_ask` | Ask a question about the project (uses LLM) |
| `memento_search` | Fast text search across memory (no LLM) |
| `memento_status` | Show modules, commits processed, last run |
| `memento_process` | Process new commits on demand |
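A no-LLM search like `memento_search` amounts to plain text matching over the markdown memory files. A hypothetical sketch of that behavior, not the actual tool:

```python
import tempfile
from pathlib import Path

def search_memory(memory_dir: Path, query: str) -> list[tuple[str, int, str]]:
    """Case-insensitive substring search; returns (file, line number, line)."""
    hits = []
    for md in sorted(memory_dir.glob("*.md")):
        for lineno, line in enumerate(md.read_text().splitlines(), 1):
            if query.lower() in line.lower():
                hits.append((md.name, lineno, line.strip()))
    return hits

# Demo against throwaway memory files.
memory = Path(tempfile.mkdtemp())
(memory / "auth-system.md").write_text("# Auth\nJWT tokens expire after 1h\n")
(memory / "database.md").write_text("# DB\nUsers table stores JWT refresh tokens\n")
print(search_memory(memory, "jwt"))
```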
### Available MCP resources

| URI | Description |
|---|---|
| `memento://summary` | Project summary |
| `memento://module/{name}` | Individual memory module |
| `memento://all` | All memory concatenated |
## Export

Export project memory to files that AI tools read automatically:

```bash
memento export --format claude   # → CLAUDE.md
memento export --format cursor   # → .cursor/rules/memento.mdc
memento export --format copilot  # → .github/copilot-instructions.md
```
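Conceptually, an export just flattens the memory files into one instructions file the AI tool already reads. A rough, hypothetical sketch of the `claude` format — the real output may differ:

```python
import tempfile
from pathlib import Path

# Throwaway memory files standing in for .memento/memory/.
memory = Path(tempfile.mkdtemp())
(memory / "SUMMARY.md").write_text("# Project\nA web app.\n")
(memory / "auth-system.md").write_text("# Auth\nUses JWT.\n")

# Concatenate every module into a single CLAUDE.md-style document.
parts = ["<!-- generated from .memento/memory -->"]
for md in sorted(memory.glob("*.md")):
    parts.append(md.read_text().rstrip())
claude_md = "\n\n".join(parts)
print(claude_md.splitlines()[0])  # → <!-- generated from .memento/memory -->
```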
## License

MIT
## File details

### memento_ai-0.2.0.tar.gz

- Size: 22.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

| Algorithm | Hash digest |
|---|---|
| SHA256 | `d874412793b811db4232dd45497ccc6f3f3ea4149ccc536111332b6b308545b8` |
| MD5 | `6a822e7dc27e4777b7a439038b58a265` |
| BLAKE2b-256 | `8fefc2ade1e82f8f75555ff5de82f755e67671ef3db5bb4bef51bffbc49204bd` |
### Provenance

Attestation bundle for memento_ai-0.2.0.tar.gz:

- Publisher: publish.yml on hernanqwz/memento
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: memento_ai-0.2.0.tar.gz
- Subject digest: d874412793b811db4232dd45497ccc6f3f3ea4149ccc536111332b6b308545b8
- Sigstore transparency entry: 1004770535
- Permalink: hernanqwz/memento@abade13b843494ef1cc84e022af59b99b64b09e8
- Branch / Tag: refs/tags/v0.2.0
- Owner: https://github.com/hernanqwz
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@abade13b843494ef1cc84e022af59b99b64b09e8
- Trigger Event: release
### memento_ai-0.2.0-py3-none-any.whl

- Size: 25.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

| Algorithm | Hash digest |
|---|---|
| SHA256 | `7b402029dd53fc80b68348598895d25970ba0225b2118a5c9dcb7299ac48e427` |
| MD5 | `e8a452d7e8bd3f343741f9b77d0c5572` |
| BLAKE2b-256 | `66c557acbdc2743e033ad9c1a943b69d9a7a2b15fd3da48cde2b396fb8fb859b` |
### Provenance

Attestation bundle for memento_ai-0.2.0-py3-none-any.whl:

- Publisher: publish.yml on hernanqwz/memento
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: memento_ai-0.2.0-py3-none-any.whl
- Subject digest: 7b402029dd53fc80b68348598895d25970ba0225b2118a5c9dcb7299ac48e427
- Sigstore transparency entry: 1004770536
- Permalink: hernanqwz/memento@abade13b843494ef1cc84e022af59b99b64b09e8
- Branch / Tag: refs/tags/v0.2.0
- Owner: https://github.com/hernanqwz
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@abade13b843494ef1cc84e022af59b99b64b09e8
- Trigger Event: release