# Memory Quality MCP
A Claude Code MCP plugin that audits and cleans up your AI memory.
中文版 · Report a Bug · Request a Feature
Claude Code v2.1.59+ automatically saves memories from your conversations. Over time, your memory store accumulates:
- Stale memories — "working on Project X this week" from months ago
- Junk memories — offhand remarks treated as permanent facts
- Conflicting memories — "prefers detailed comments" and "keep code clean and minimal" coexisting
- Misrecorded memories — AI over-interpreted a one-time comment as a fixed habit
This plugin audits your memory store with a 4-dimension quality score (Importance / Recency / Credibility / Accuracy) and gives you actionable cleanup recommendations — with a visual dashboard.
## Requirements

- Claude Code v2.1.59+ (run `claude --version` to check)
- Python 3.10+
- LLM API key — OpenAI, Kimi, MiniMax, or Anthropic (any one will do)
## Installation

### Option 1: uvx (recommended — no manual install)

Add to your Claude Code MCP config (`~/.claude/settings.json` or your project's `.claude/settings.json`):
```json
{
  "mcpServers": {
    "memory-quality": {
      "command": "uvx",
      "args": ["memory-quality-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-key-here"
      }
    }
  }
}
```
### Option 2: pip

```bash
pip install memory-quality-mcp
```

Then in your MCP config:
```json
{
  "mcpServers": {
    "memory-quality": {
      "command": "memory-quality-mcp",
      "env": {
        "MINIMAX_API_KEY": "your-key-here"
      }
    }
  }
}
```
## Configure your LLM

Set one of these environment variables in the `env` field above:
| Provider | Env Variable | Default Model |
|---|---|---|
| OpenAI | `OPENAI_API_KEY` | `gpt-4o-mini` |
| Kimi | `KIMI_API_KEY` | `moonshot-v1-8k` |
| MiniMax | `MINIMAX_API_KEY` | `MiniMax-M2.5` |
| Anthropic | `ANTHROPIC_API_KEY` | `claude-haiku-4-5` |
The plugin auto-detects which provider to use based on whichever key is set.
For advanced config (custom model, thresholds), edit `~/.memory-quality-mcp/config.yaml` — generated automatically on first run.
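Under the hood, provider selection like this is just a priority scan over the environment. A minimal sketch (the check order and the function name are illustrative assumptions, not the plugin's actual code):

```python
import os

# Illustrative sketch of key-based provider auto-detection.
# The priority order below is an assumption, not the plugin's code.
PROVIDERS = [
    ("openai", "OPENAI_API_KEY", "gpt-4o-mini"),
    ("kimi", "KIMI_API_KEY", "moonshot-v1-8k"),
    ("minimax", "MINIMAX_API_KEY", "MiniMax-M2.5"),
    ("anthropic", "ANTHROPIC_API_KEY", "claude-haiku-4-5"),
]

def detect_provider(env=None):
    """Return (provider, default_model) for the first key that is set."""
    env = os.environ if env is None else env
    for name, key_var, default_model in PROVIDERS:
        if env.get(key_var):
            return name, default_model
    raise RuntimeError("No LLM API key found; set one of the variables above.")
```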
## Usage

After setup, restart Claude Code and talk to it naturally:

### 1. Try the demo first (no memories needed)

> Open the memory dashboard in demo mode

Claude calls `memory_dashboard(demo=True)` → opens a browser page with example data so you can see what the tool does before running it on your real memories.
### 2. Quick health check (no LLM cost)

> Check my memory store health

Returns: total memories across all projects, stale count, index usage, and estimated LLM calls for a full report.
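A health check of this kind can be approximated by a filesystem scan alone, which is why it costs no LLM calls. A sketch assuming the memory layout mentioned in the FAQ (`~/.claude/projects/<project>/memory/*.md`) and an illustrative 90-day staleness threshold:

```python
import time
from pathlib import Path

STALE_DAYS = 90  # illustrative threshold, not the plugin's configured value

def audit(projects_root: Path, now=None):
    """Count memory files and how many look stale by modification time."""
    now = time.time() if now is None else now
    memories = list(projects_root.glob("*/memory/*.md"))
    stale = [p for p in memories
             if now - p.stat().st_mtime > STALE_DAYS * 86400]
    return {"total": len(memories), "stale": len(stale)}
```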
### 3. Full quality analysis

> Run a detailed memory quality analysis

Scores every memory on 4 dimensions, detects conflicts, and flags rule violations. Results are cached — cleanup and dashboard reuse them without calling the LLM again.

Cost estimate: 8–9 LLM calls for 50 memories (about $0.01 with `gpt-4o-mini`).
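The call count scales with batching rather than one call per memory. A back-of-envelope sketch (the batch size of 6 is an illustrative guess, not a documented value):

```python
import math

def estimated_calls(n_memories: int, batch_size: int = 6) -> int:
    """If memories are scored in batches, calls grow as ceil(n / batch)."""
    return math.ceil(n_memories / batch_size)
```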
### 4. Open visual dashboard

> Open the memory health dashboard

Opens a local HTML page in your browser:

- Health score ring (0–100)
- Summary stats (keep / review / delete)
- Collapsible memory list — click any item to expand its score details
### 5. Clean up

> Clean up the memories marked for deletion

Claude previews with `memory_cleanup(dry_run=True)` first, then executes on confirmation.
Safety guarantees:

- Preview before every deletion
- Auto-backup to `.trash/<timestamp>/` before deleting
- Never silently deletes anything
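The preview-then-backup-then-delete flow can be pictured as follows; the function name and paths are illustrative, not the plugin's actual API:

```python
import shutil
import time
from pathlib import Path

def cleanup(memory_dir: Path, to_delete: list, dry_run: bool = True):
    """Preview by default; on a real run, move files to .trash/ first."""
    if dry_run:
        return {"would_delete": list(to_delete)}
    trash = memory_dir / ".trash" / time.strftime("%Y%m%d_%H%M%S")
    trash.mkdir(parents=True, exist_ok=True)
    for name in to_delete:
        shutil.move(str(memory_dir / name), str(trash / name))
    return {"deleted": list(to_delete), "backup": str(trash)}
```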
### 6. Score a single memory (debug)

> Score this memory: "User always codes late at night"
### 7. Analyze a specific project

> Analyze memories for ~/my-project only
## How scoring works
| Dimension | Weight | What it measures |
|---|---|---|
| Importance | 40% | How useful is this for future conversations? |
| Recency | 25% | Is this information still accurate? |
| Credibility | 15% | Is there a clear source (user stated it explicitly)? |
| Accuracy | 20% | Did the AI record it faithfully, without over-interpreting? |
Score > 3.5 → Keep | 2.5–3.5 → Review | < 2.5 → Delete
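Putting the table and the thresholds together, the verdict for one memory is a weighted sum. A sketch assuming dimension scores on a 0–5 scale (the scale is an assumption; the weights and cutoffs are from the table above):

```python
# Weights from the scoring table; dimension scores assumed 0-5.
WEIGHTS = {"importance": 0.40, "recency": 0.25,
           "credibility": 0.15, "accuracy": 0.20}

def score(dims):
    """Weighted sum of the four dimension scores, mapped to a verdict."""
    total = sum(WEIGHTS[k] * dims[k] for k in WEIGHTS)
    if total > 3.5:
        verdict = "keep"
    elif total >= 2.5:
        verdict = "review"
    else:
        verdict = "delete"
    return total, verdict
```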
## Typical workflow

```text
You: Check my memory health

Claude: [memory_audit()]
  47 memories across 3 projects
  - Possibly stale: 8
  - Project memories past threshold: 3
  - MEMORY.md usage: 23% (46/200 lines)
  Estimated ~9 LLM calls for a full report.

You: Run the full analysis

Claude: [memory_report()]
  47 memories | 🗑 delete 8 | 🔄 review 5 | ✅ keep 34
  ⚡ 1 conflict found
  - 🔴 feedback_comments_a.md × feedback_comments_b.md
    One says "detailed comments", the other says "minimal comments"
  🗑 Suggested deletions (8)
  - project_q1_plan.md [project] · 120 days ago
    Score 1.5 · project memory past 90-day threshold
  Report cached — cleanup won't need to re-analyze

You: Open the dashboard

Claude: [memory_dashboard()]
  ✅ Dashboard opened in browser

You: Clean those up

Claude: [memory_cleanup(dry_run=True)]
  🔍 Preview (nothing deleted yet)
  8 memories will be removed: project_q1_plan.md ...

You: Confirm

Claude: [memory_cleanup(dry_run=False)]
  ✅ Cleaned up 8 memories
  Backup: ~/.claude/.../memory/.trash/20260405_143022/
  MEMORY.md index updated
```
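Because every deletion lands in `.trash/<timestamp>/` first, a cleanup you regret is reversible by moving the files back. A minimal restore sketch with illustrative paths:

```python
import shutil
from pathlib import Path

def restore(backup_dir: Path, memory_dir: Path):
    """Move every backed-up file from a .trash/<timestamp>/ dir back into place."""
    for f in backup_dir.iterdir():
        shutil.move(str(f), str(memory_dir / f.name))
```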
## FAQ

**Q: "No memory files found"**

Requires Claude Code v2.1.59+. Check:

```bash
claude --version        # must be >= v2.1.59
ls ~/.claude/projects/  # should list your projects
```

If the version is new enough but no files exist, Claude simply hasn't recorded anything worth remembering yet. Keep using Claude Code normally — files will appear within a few sessions.

Want to see the tool in action first? Run `memory_dashboard(demo=True)`.
**Q: Are the scores accurate?**
Scores are suggestions, not commands. You make the final call. Every deletion requires confirmation and is backed up. If you find consistent scoring errors, please open a Wrong Score issue — each report directly improves the model.
**Q: Which LLMs are supported?**

Any OpenAI-compatible API. Built-in presets: OpenAI, Kimi, MiniMax, Anthropic. Custom providers can be configured via the `MEMORY_QUALITY_BASE_URL` environment variable.
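For example, a config pointing at a self-hosted OpenAI-compatible endpoint might look like this (the URL is a placeholder, and pairing `OPENAI_API_KEY` with a custom base URL is an assumption — check `config.yaml` if in doubt):

```json
{
  "mcpServers": {
    "memory-quality": {
      "command": "uvx",
      "args": ["memory-quality-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-key-here",
        "MEMORY_QUALITY_BASE_URL": "https://my-llm-proxy.example.com/v1"
      }
    }
  }
}
```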
**Q: Will it delete something important?**

No. Every operation ① shows a preview, ② requires explicit confirmation, and ③ auto-backs up to `.trash/` before deleting.
**Q: Where are my memory files?**

```bash
ls ~/.claude/projects/*/memory/
```

Or type `/memory` inside Claude Code to browse and edit them directly.
## Contributing
Found a scoring error? Tell us — your feedback directly calibrates the model.
Bug or feature idea? Open an issue.
## License
MIT