
File-based memory system for AI coding assistants


Mind

Mind gives Claude a mind - not just memory across sessions, but focus within them. It remembers what worked, what didn't, and what it's supposed to be building.

When you're vibe coding with Claude, it forgets everything between sessions. What you decided, what broke, what worked - gone. Even worse, in long sessions it starts suggesting the same failed fixes over and over.

Mind fixes both problems with two-layer memory:

  • MEMORY.md - Long-term memory across sessions
  • SESSION.md - Short-term focus within a session

Why Mind?

3 steps. Install, init, connect. Done.

Fully automated. Memory just works - no commands required:

  • Claude writes memories as it works
  • Session gaps auto-detected (30 min)
  • Learnings auto-promoted to long-term memory
  • Context auto-injected into CLAUDE.md
  • Reminders auto-surface when due or when keywords match

Optional tools are there when you want them, but the core memory flow runs hands-free.

Two-layer memory:

  • Cross-session recall (MEMORY.md)
  • Within-session focus (SESSION.md)

Zero friction. Claude writes to files; the MCP server reads them lazily. No database, no cloud, no sync issues.
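"Reads them lazily" can mean something as simple as caching file contents keyed on modification time, so unchanged memory files are never re-read. This is one plausible sketch of that idea, not Mind's actual code:

```python
# Sketch of lazy, mtime-cached file reading: re-read a memory file only
# when it has changed on disk. Illustrative, not Mind's implementation.
from pathlib import Path

_cache: dict[str, tuple[float, str]] = {}

def lazy_read(path: Path) -> str:
    """Return file contents, re-reading only when the mtime changes."""
    mtime = path.stat().st_mtime
    key = str(path)
    cached = _cache.get(key)
    if cached and cached[0] == mtime:
        return cached[1]  # unchanged on disk: serve from cache
    text = path.read_text()
    _cache[key] = (mtime, text)
    return text
```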

Human-readable. Plain .md files you can open, edit, or git-track anytime.

Open source. See exactly how it works. No black box.


Ready to Give Claude a Mind?

3 steps. Zero friction.

1. Install Mind

pip install vibeship-mind

2. Initialize in your project

Open a terminal in your project folder, then run:

python -m mind init

3. Connect to Claude Code

Copy and paste this to Claude:

Add Mind MCP server to my config. Use command "python" with args ["-m", "mind", "mcp"]

Claude will set it up for you. Restart Claude Code after, and Mind will work automatically.

Manual setup (if you prefer)

Add to your MCP config file:

Mac/Linux: ~/.claude.json
Windows: %USERPROFILE%\.claude.json

{
  "mcpServers": {
    "mind": {
      "command": "python",
      "args": ["-m", "mind", "mcp"]
    }
  }
}

Save the file, then restart Claude Code.


Alternative: Install from source
git clone https://github.com/vibeforge1111/vibeship-mind.git
cd vibeship-mind
uv sync
uv run mind init

MCP config for source install:

{
  "mcpServers": {
    "mind": {
      "command": "uv",
      "args": ["--directory", "/path/to/vibeship-mind", "run", "mind", "mcp"]
    }
  }
}

What's New in 2.2.0

  • Semantic Search - mind_search() uses TF-IDF similarity, not just keywords
  • Loop Detection - Warns when you're about to repeat a rejected approach
  • Smart Promotion - Deduplicates memories, links related entries
  • Memory <-> Session Flow - Relevant memories surface when logging blockers
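To make "TF-IDF similarity, not just keywords" concrete, here is a minimal self-contained sketch of TF-IDF cosine-similarity ranking. It only illustrates the kind of matching `mind_search()` performs; the real implementation may tokenize, weight, and rank differently.

```python
# Minimal TF-IDF cosine-similarity search, illustrating the kind of
# ranking mind_search() does (the real implementation may differ).
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def tfidf_search(query: str, docs: list[str]) -> list[tuple[float, str]]:
    """Rank docs against query by TF-IDF cosine similarity, best first."""
    n = len(docs)
    doc_tokens = [tokenize(d) for d in docs]
    # Inverse document frequency: terms appearing in fewer docs score higher.
    df = Counter(t for toks in doc_tokens for t in set(toks))
    idf = {t: math.log((1 + n) / (1 + c)) + 1 for t, c in df.items()}

    def vec(tokens):
        tf = Counter(tokens)
        return {t: f * idf.get(t, 0.0) for t, f in tf.items()}

    q = vec(tokenize(query))
    results = []
    for d_text, toks in zip(docs, doc_tokens):
        v = vec(toks)
        dot = sum(q[t] * v.get(t, 0.0) for t in q)
        norm = math.sqrt(sum(x * x for x in q.values())) * \
               math.sqrt(sum(x * x for x in v.values()))
        results.append((dot / norm if norm else 0.0, d_text))
    return sorted(results, key=lambda r: -r[0])
```

Unlike exact keyword search, shared rare terms dominate the score, so a query still surfaces the most relevant memory even when most of its words differ.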

MCP Tools (12 total)

Core:

Tool                 What it does
mind_recall()        Load session context - CALL FIRST every session
mind_log(msg, type)  Log to session or memory (routes by type)

Type routing for mind_log():

  • SESSION.md: experience, blocker, assumption, rejected
  • MEMORY.md: decision, learning, problem, progress
  • SELF_IMPROVE.md: feedback, preference, blind_spot, skill
  • Special: reinforce (boosts pattern confidence)
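The routing table above maps directly to a small dispatch function. The mapping below copies this page's lists verbatim; the function itself is only an illustration of the routing, not Mind's code:

```python
# Sketch of mind_log() type -> file routing, copied from the lists above.
# The dispatch function itself is illustrative.
ROUTES = {
    "experience": "SESSION.md", "blocker": "SESSION.md",
    "assumption": "SESSION.md", "rejected": "SESSION.md",
    "decision": "MEMORY.md", "learning": "MEMORY.md",
    "problem": "MEMORY.md", "progress": "MEMORY.md",
    "feedback": "SELF_IMPROVE.md", "preference": "SELF_IMPROVE.md",
    "blind_spot": "SELF_IMPROVE.md", "skill": "SELF_IMPROVE.md",
}

def route(entry_type: str) -> str:
    """Return the target file for a mind_log() entry type."""
    if entry_type == "reinforce":
        # Special case: boosts pattern confidence instead of writing a file.
        raise ValueError("reinforce is not routed to a file")
    try:
        return ROUTES[entry_type]
    except KeyError:
        raise ValueError(f"unknown entry type: {entry_type}") from None
```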

Reading:

Tool                 What it does
mind_session()       Check current session state
mind_search(query)   Semantic search past memories
mind_status()        Check memory health
mind_reminders()     List pending reminders

Actions:

Tool                       What it does
mind_blocker(desc)         Log blocker + auto-search memory for solutions
mind_remind(msg, when)     Set reminder - time or context-based
mind_reminder_done(index)  Mark a reminder as done
mind_edges(intent)         Check for gotchas before risky code
mind_checkpoint()          Force process pending memories
mind_add_global_edge()     Add cross-project gotcha

How It Works

Long-term (MEMORY.md): Permanent knowledge - decisions, learnings, problems, progress. mind_recall() loads this as context each session.

Short-term (SESSION.md): Working memory buffer:

  • Experience - Raw moments, thoughts, what's happening
  • Blockers - Things stopping progress
  • Rejected - What didn't work and why
  • Assumptions - What you're assuming true

When a new session starts (30 min gap), valuable items get promoted from SESSION.md to MEMORY.md automatically.
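A promotion pass like the one just described can be sketched as follows. Which entry types count as "valuable" is an assumption here (failures seem worth keeping); Mind's actual promotion rules, deduplication, and file format may differ:

```python
# Hypothetical sketch of session -> memory promotion. The PROMOTABLE set
# and entry representation are assumptions, not Mind's real rules.
PROMOTABLE = {"rejected", "blocker"}  # assumed: failures are worth keeping

def promote(session_entries: list[tuple[str, str]],
            memory: list[str]) -> list[str]:
    """Copy valuable session entries into memory, skipping duplicates."""
    promoted = list(memory)
    for entry_type, text in session_entries:
        if entry_type in PROMOTABLE and text not in promoted:
            promoted.append(text)
    return promoted
```

The deduplication check mirrors the "Smart Promotion" behavior listed above: an item already present in long-term memory is not added twice.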

See docs/HOW_IT_WORKS.md for the full architecture.


Reminders

Mind supports two types of reminders:

Time-based:

  • "tomorrow", "in 3 days", "next session", "2025-12-20"

Context-based:

  • "when I mention auth" - triggers when relevant keywords come up
  • "when we work on database" - Claude sees keywords and surfaces reminder naturally

Example: "Remind me to check the security audit when we work on auth"

Reminders are stored in .mind/REMINDERS.md and shown in mind_recall() output.
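Context-based triggering reduces to keyword matching against the current message. A minimal sketch, assuming each reminder carries a text plus its trigger keywords (the actual storage format in REMINDERS.md may differ):

```python
# Sketch of context-based reminder matching: a reminder surfaces when any
# of its trigger keywords appears in the current message. Illustrative only.
import re

def due_reminders(message: str, reminders: list[dict]) -> list[str]:
    """Return reminder texts whose trigger keywords appear in the message."""
    words = set(re.findall(r"[a-z0-9]+", message.lower()))
    return [r["text"] for r in reminders
            if any(k in words for k in r["keywords"])]
```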


Initialize Mind in Any Project

Open a terminal in your project folder and run:

python -m mind init

That's it! This creates .mind/MEMORY.md and .mind/SESSION.md.


Quick Commands

# Check if everything is working
python -m mind doctor

# See what Mind extracted from your notes
python -m mind parse

# Check project status
python -m mind status

# List all registered projects
python -m mind list

The Problem This Solves

Across sessions:

  • "What did we decide yesterday?" -> Forgotten
  • "What gotchas did we hit?" -> Re-discovered every time

Within sessions:

  • "Didn't we already try that?" -> Suggests same failed fix 3 times
  • "What are we building again?" -> Drifts into rabbit holes

Mind fixes both. Two layers of memory, zero friction.


File Structure

your-project/
├── .mind/
│   ├── MEMORY.md     <- Long-term memory (persists)
│   ├── SESSION.md    <- Short-term focus (cleared each session)
│   ├── REMINDERS.md  <- Time and context-based reminders
│   ├── config.json   <- Feature flags for experiments
│   └── state.json    <- Timestamps for session detection
└── CLAUDE.md         <- Mind injects context here

Troubleshooting

Problem                    Fix
"Command not found"        Use python -m mind instead of just mind
Nothing being captured     Use keywords: decided, problem, learned, gotcha
Claude repeating mistakes  Tell Claude: "Check SESSION.md" or "Add to Rejected Approaches"
Need to check health       Run python -m mind doctor


License

Apache 2.0 - See LICENSE for details.


Built by @meta_alchemist

A vibeship.co ecosystem project
