
Build, maintain, and search your knowledge vault. CLI + MCP server with stale note detection, semantic search, and neuroscience-grounded memory.


NeuroStack


Your AI gets the right knowledge in 15 tokens, not 300. NeuroStack is a local MCP server that indexes your existing Markdown vault and serves it to any AI agent with token-efficient tiered retrieval. It never touches your files. Works with Claude Code, Codex, Gemini CLI, Cursor, Windsurf, or any MCP client.

NeuroStack demo - stats, search, graph, and daily brief

Get started

npm install -g neurostack
neurostack install
neurostack init

No prior config needed. No Python, git, or curl required - the npm package handles everything.

Lite mode (~130 MB) works without a GPU or Ollama. Full mode (default, ~560 MB) adds semantic search and AI summaries via local Ollama.

How it compares

| | NeuroStack | claude-mem | basic-memory | vestige |
| --- | --- | --- | --- | --- |
| Works with | Any MCP client | Claude Code only | Claude Desktop, VS Code | Any MCP client |
| Your vault files | Never modified (read-only) | Not used (own DB) | AI writes to them | Not used (own DB) |
| Indexes existing notes | Yes - any Markdown folder | No - captures sessions | Yes - with write-back | No - own memory store |
| Token-efficient retrieval | Tiered: 15 → 75 → 300 tokens | Progressive disclosure | Full chunks | Full chunks |
| Stale note detection | Yes - flags misleading notes | No | No | Prediction error gating |
| Use-dependent learning | Hebbian co-occurrence boost | No | No | FSRS-6 spaced repetition |
| License | Apache-2.0 | AGPL-3.0 | MIT (core) | AGPL-3.0 |

NeuroStack is for people who already have notes. If you maintain a Markdown vault (Obsidian, Logseq, or plain files) and want your AI tools to search it intelligently without modifying anything, this is the tool. If you want auto-capture of AI sessions, use claude-mem. If you want AI to write notes for you, use basic-memory.

Tiered retrieval

Most retrieval tools dump full document chunks (~300-750 tokens each) into your AI's context window. NeuroStack resolves 80% of queries at the cheapest tier:

| Tier | Tokens | What your AI gets | Example |
| --- | --- | --- | --- |
| Triples | ~15 | Structured facts: Alpha API → uses → PostgreSQL 16 | Quick lookups, factual questions |
| Summaries | ~75 | AI-generated note summary | "What is this project about?" |
| Full content | ~300 | Actual Markdown content | Deep dives, editing context |
| Auto | Varies | Starts at triples, escalates only if coverage is low | Default for most queries |

The result: lower API costs, less context window waste, and more of your AI's attention on actually answering your question.
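The "auto" tier's escalation logic can be sketched in a few lines. This is an illustrative approximation, not NeuroStack's actual implementation: the tier names come from the table above, but the coverage heuristic, threshold, and function signatures here are assumptions.

```python
# Hypothetical sketch of auto-tier escalation: start at the cheapest tier,
# escalate only while coverage of the query is low. The coverage heuristic
# and threshold are illustrative assumptions, not NeuroStack internals.

TIERS = ["triples", "summaries", "full"]  # ~15, ~75, ~300 tokens per hit

def coverage(results, query_terms):
    """Fraction of query terms matched somewhere in the retrieved snippets."""
    text = " ".join(r["text"].lower() for r in results)
    return sum(term in text for term in query_terms) / max(len(query_terms), 1)

def auto_retrieve(search, query, threshold=0.6):
    """Return (tier, results), escalating tiers until coverage is adequate."""
    terms = query.lower().split()
    results = []
    for tier in TIERS:
        results = search(query, tier=tier)
        if coverage(results, terms) >= threshold:
            return tier, results
    return "full", results
```

A factual query that triples already cover stops at ~15 tokens; a broader query falls through to summaries or full content.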

What it detects that others don't

Stale notes. When a note keeps appearing in search contexts where it doesn't belong, NeuroStack flags it. Your vault accumulates outdated information over time - old decisions, superseded specs, reversed conclusions. Without detection, your AI cites these stale notes confidently. NeuroStack catches them before they pollute results.

Usage patterns. Notes you retrieve together frequently get their connection weights strengthened automatically (Hebbian co-occurrence learning). The search graph learns your actual workflow, not just your file structure.
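The co-occurrence rule above can be sketched as a simple weight update. This is a minimal illustration of Hebbian strengthening with background decay; the update rule, parameter names, and values are assumptions, not NeuroStack's actual algorithm.

```python
# Illustrative Hebbian co-occurrence update ("fire together, wire together"):
# notes retrieved in the same query strengthen their pairwise edge, while
# all other edges decay slightly. Boost/decay values are assumed, not real.
from itertools import combinations

def hebbian_update(weights, retrieved_ids, boost=0.1, decay=0.99):
    """Mutate and return the edge-weight dict after one retrieval event."""
    for edge in weights:              # gentle forgetting on every edge
        weights[edge] *= decay
    for a, b in combinations(sorted(retrieved_ids), 2):
        weights[(a, b)] = weights.get((a, b), 0.0) + boost
    return weights
```

Over many searches, pairs you actually use together dominate the graph, regardless of how the files are organized on disk.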

NeuroStack surfacing stale notes

Key features

Search - find anything by meaning

  • Hybrid semantic + keyword search with cross-encoder reranking
  • Tiered retrieval with automatic cost escalation
  • Topic clustering via Leiden community detection (GraphRAG)
  • Workspace scoping - restrict queries to project subdirectories

Maintain - stop citing outdated notes

  • Stale note detection via prediction error monitoring
  • Excitability decay - recent notes get priority, unused notes fade
  • Auto-indexing - watches your vault for changes in the background
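Excitability decay can be pictured as simple exponential decay from the last time a note was used. The half-life and formula below are illustrative assumptions, not NeuroStack's actual parameters.

```python
# Minimal sketch of excitability decay: a note's priority starts at 1.0
# when last used and halves every HALF_LIFE_DAYS of disuse. The half-life
# value is an assumption for illustration only.
import time

HALF_LIFE_DAYS = 30  # assumed half-life, not NeuroStack's real setting

def excitability(last_used_ts, now=None):
    """Exponential decay from 1.0 at last use toward 0 over time."""
    now = time.time() if now is None else now
    age_days = max(now - last_used_ts, 0) / 86400
    return 0.5 ** (age_days / HALF_LIFE_DAYS)
```

Ranking retrieval results by a score like this is what lets recently touched notes outrank stale ones of similar relevance.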

Remember - persistent agent memory

  • AI writes back observations, decisions, conventions, bugs
  • Near-duplicate detection with merge support
  • Session harvesting - extracts insights from Claude Code transcripts automatically
  • Optional TTL for ephemeral memories

Start fast - profession packs

Domain-specific templates, seed notes, and workflow guidance:

neurostack init                    # Interactive setup offers packs
neurostack scaffold devops         # Apply to existing vault
neurostack scaffold --list         # researcher, developer, writer, student, devops, data-scientist

Use with any AI provider

NeuroStack is provider-agnostic. Add it to your MCP config:

{
  "mcpServers": {
    "neurostack": {
      "command": "neurostack",
      "args": ["serve"],
      "env": {}
    }
  }
}

Or use the CLI standalone - pipe output into any LLM:

neurostack search "deployment checklist"
neurostack tiered "auth flow" --top-k 3
neurostack brief
neurostack search -w "work/" "query"    # Workspace scoping
neurostack --json search "query" | jq   # Machine-readable output

Setup guides: Claude Code · Codex · Gemini CLI

Installation modes

| Mode | What you get | Size | GPU? |
| --- | --- | --- | --- |
| lite | FTS5 search, wiki-link graph, stale detection, MCP server | ~130 MB | No |
| full (default) | + semantic search, AI summaries, cross-encoder reranking | ~560 MB | No (CPU) |
| community | + GraphRAG topic clustering (Leiden algorithm) | ~575 MB | No |

neurostack install                            # Interactive mode selection
neurostack install --mode full --pull-models  # Non-interactive
Alternative install methods
# PyPI
pipx install neurostack
pip install neurostack                # inside a venv
uv tool install neurostack

# One-line script
curl -fsSL https://raw.githubusercontent.com/raphasouthall/neurostack/main/install.sh | bash

# Lite mode (no ML deps)
curl -fsSL https://raw.githubusercontent.com/raphasouthall/neurostack/main/install.sh | NEUROSTACK_MODE=lite bash

On Ubuntu 23.04+, Debian 12+, and Fedora 38+, a bare pip install outside a venv is blocked by PEP 668. Use npm, pipx, or uv tool install instead.

To uninstall:

neurostack uninstall

Architecture

~/your-vault/                        # Your Markdown files (never modified)
~/.config/neurostack/config.toml     # Configuration
~/.local/share/neurostack/
    neurostack.db                    # SQLite + FTS5 knowledge graph
    sessions.db                      # Session transcript index

NeuroStack never modifies your vault files. All data - indexes, embeddings, memories, sessions - lives in its own SQLite databases.
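For orientation, a config file at that path might look like the fragment below. Every key name here is hypothetical, chosen to mirror concepts from this page; consult the actual generated config.toml for the real schema.

```toml
# Hypothetical config.toml sketch -- key names are illustrative,
# not NeuroStack's documented schema.
[vault]
path = "~/your-vault"          # the Markdown folder being indexed (read-only)

[install]
mode = "full"                  # lite | full | community

[decay]
# assumed knob for excitability decay; name and value are invented
half_life_days = 30
```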

How the neuroscience works

Each core feature is modeled on a specific mechanism from memory neuroscience:

| Feature | What it does | Neuroscience basis |
| --- | --- | --- |
| Stale detection | Flags notes appearing in wrong contexts | Prediction error signals trigger reconsolidation (Sinclair & Bhatt 2022) |
| Excitability decay | Recent notes get priority, old ones fade | CREB-elevated neurons preferentially join new memories (Han et al. 2007) |
| Co-occurrence learning | Notes retrieved together strengthen connections | Hebbian "fire together, wire together" plasticity |
| Topic clusters | Reveals thematic groups across your vault | Neural ensemble formation (Cai et al. 2016) |
| Tiered retrieval | Starts with key facts, escalates only when needed | Complementary learning systems (McClelland et al. 1995) |

Full citations: docs/neuroscience-appendix.md

All 16 MCP tools

| Tool | What it does |
| --- | --- |
| vault_search | Search by meaning or keywords, with tiered depth |
| vault_summary | Pre-computed summary of any note |
| vault_graph | Note's neighborhood - what links to it and what it links to |
| vault_triples | Structured facts (who/what/how) extracted from notes |
| vault_communities | Big-picture questions across topic clusters |
| vault_context | Task-scoped context assembly for session recovery |
| vault_stats | Index health with excitability + memory stats |
| vault_record_usage | Track which notes are "hot" |
| vault_prediction_errors | Surface notes that need review |
| vault_remember | Store a memory (returns near-duplicate warnings + tag suggestions) |
| vault_update_memory | Update a memory in place |
| vault_merge | Merge two memories (unions tags, tracks audit trail) |
| vault_forget | Remove a memory by ID |
| vault_memories | List or search stored memories |
| vault_harvest | Extract insights from Claude Code session transcripts |
| session_brief | Compact briefing when starting a new session |
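Once the server is registered, an MCP client invokes these tools over JSON-RPC using the protocol's standard tools/call method. A request might look like the sketch below; the argument names ("query", "tier") are assumptions about vault_search's input schema, not confirmed by this page.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "vault_search",
    "arguments": { "query": "auth flow", "tier": "auto" }
  }
}
```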
Full CLI reference
# Setup
neurostack install                    # Install/upgrade mode and Ollama models
neurostack init [path] -p researcher  # Interactive setup wizard
neurostack onboard ~/my-notes         # Onboard existing Markdown notes
neurostack scaffold researcher        # Apply a profession pack
neurostack update                     # Pull latest source + re-sync deps
neurostack uninstall                  # Complete removal

# Search & retrieval
neurostack search "query"             # Hybrid search
neurostack tiered "query"             # Tiered: triples -> summaries -> full
neurostack triples "query"            # Knowledge graph triples
neurostack summary "note.md"          # AI-generated note summary
neurostack communities query "topic"  # GraphRAG across topic clusters
neurostack context "task" --budget 2000  # Task-scoped context recovery

# Maintenance
neurostack index                      # Build/rebuild knowledge graph
neurostack watch                      # Auto-index on vault changes
neurostack decay                      # Excitability report
neurostack prediction-errors          # Stale note detection
neurostack backfill [summaries|triples|all]  # Fill gaps in AI data

# Memories
neurostack memories add "text" --type observation  # Store (--ttl 7d)
neurostack memories search "query"    # Search memories
neurostack memories list              # List all
neurostack memories update <id> --content "revised"  # Update in place
neurostack memories merge <target> <source>  # Merge two
neurostack memories forget <id>       # Remove
neurostack memories prune             # Remove expired

# Sessions
neurostack harvest --sessions 5       # Extract session insights
neurostack sessions search "query"    # Search transcripts
neurostack hooks install              # Hourly harvest timer

# Graph
neurostack graph "note.md"            # Wiki-link neighborhood
neurostack communities build          # Run Leiden detection

# Diagnostics
neurostack brief                      # Morning briefing
neurostack stats                      # Index health
neurostack doctor                     # Validate all subsystems
neurostack demo                       # Interactive demo with sample vault

FAQ

Does it modify my vault files? No. All data lives in NeuroStack's own SQLite databases. Your Markdown files are strictly read-only.

Do I need a GPU? No. Lite mode has zero ML dependencies. Full mode uses PyTorch CPU and Ollama.

How large a vault can it handle? Tested with ~5,000 notes. FTS5 search stays fast at any size.

Can I use it without MCP? Yes. The CLI works standalone. Pipe output into any LLM.

Requirements

  • Linux or macOS
  • npm install: Just Node.js - everything else is bootstrapped
  • Full mode: Ollama with nomic-embed-text and a summary model

Get involved

License

Apache-2.0 - see LICENSE.

The optional neurostack[community] extra installs leidenalg (GPL-3.0) and python-igraph (GPL-2.0+). These are isolated behind a runtime import guard and not installed by default.
