
Persistent memory for AI agents — 60 MCP tools, spreading activation recall, neuroscience-inspired consolidation. Works with Claude, GPT, Gemini.

Project description

NeuralMemory

GitHub stars · PyPI downloads · CI · Python 3.11+ · License: MIT · VS Code · OpenClaw Plugin

Your AI agent forgets everything between sessions. Neural Memory gives it a brain.

Neural Memory — spreading activation

Memories are stored as interconnected neurons and recalled through spreading activation — the same way the human brain works. No vector database. No API calls. No monthly embedding bill.
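Spreading activation can be sketched in a few lines of Python. This is an illustrative toy, not Neural Memory's actual implementation: the graph structure, decay factor, and activation threshold are all assumptions made for the example.

```python
from collections import defaultdict

def spread(graph, seeds, decay=0.5, threshold=0.1):
    """Propagate activation energy outward from seed nodes.

    graph: {node: [(neighbor, edge_weight), ...]} -- a hypothetical
    structure, not Neural Memory's real internals.
    Energy decays per hop; propagation stops below the threshold.
    """
    activation = defaultdict(float)
    frontier = {seed: 1.0 for seed in seeds}
    while frontier:
        next_frontier = {}
        for node, energy in frontier.items():
            activation[node] += energy
            for neighbor, weight in graph.get(node, []):
                passed = energy * decay * weight
                if passed >= threshold:
                    next_frontier[neighbor] = next_frontier.get(neighbor, 0.0) + passed
        frontier = next_frontier
    return dict(activation)

# Toy memory graph: recalling "auth bug" also surfaces associated memories.
graph = {
    "auth bug": [("null check", 1.0), ("login.py", 0.8)],
    "null check": [("code review", 0.9)],
}
result = spread(graph, ["auth bug"])
```

Related memories surface with activation proportional to how strongly and how closely they connect to the query, which is why recall here is associative rather than keyword-based.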

pip install neural-memory

Restart your AI tool. Your agent now remembers — no init needed, the MCP server auto-initializes on first use.


3 Tools. That's It.

60 MCP tools are available, but you only need three:

Tool What it does
nmem_remember Store a memory — auto-detects type, tags, and connections
nmem_recall Recall through spreading activation — related memories surface naturally
nmem_health Brain health score (A–F) with actionable fix suggestions

Everything else — sessions, context loading, habit tracking, maintenance — works transparently in the background.

All 60 MCP tools →


What Makes This Different

Most memory tools are search engines. Neural Memory is a graph that thinks.

When you ask "Why did Tuesday's outage happen?", a vector database returns the most similar sentence. Neural Memory traces the chain:

outage ← CAUSED_BY ← JWT expiry ← SUGGESTED_BY ← Alice's review

Relationships are explicit: CAUSED_BY, LEADS_TO, RESOLVED_BY, CONTRADICTS. Your agent doesn't just find memories, it reasons through them.
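A multi-hop trace over explicit typed edges can be sketched like this. The edge list and `trace` function are hypothetical illustrations, not the library's API; they assume an acyclic chain like the outage example above.

```python
# Hypothetical edge list mirroring the outage example.
edges = [
    ("outage", "CAUSED_BY", "JWT expiry"),
    ("JWT expiry", "SUGGESTED_BY", "Alice's review"),
]

def trace(start, edges):
    """Follow explicit relationship edges from a node, hop by hop.

    Returns the alternating node/edge-type chain. Assumes the chain
    is acyclic (a real traversal would track visited nodes).
    """
    chain = [start]
    node = start
    while True:
        step = next(((etype, dst) for src, etype, dst in edges if src == node), None)
        if step is None:
            return chain
        chain.extend(step)
        node = step[1]

path = trace("outage", edges)
```

One traversal yields the whole causal chain, which is what the multi-hop row in the comparison below refers to: a similarity search would need a separate query per hop.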

                       Search-based (RAG)    Neural Memory
Retrieval              Similarity score      Graph traversal
Relationships          None                  24 explicit types
LLM required           Yes (embedding)       No — fully offline
Multi-hop reasoning    Multiple queries      One traversal
Memory lifecycle       Static                Decay, reinforcement, consolidation
Cost per 1K queries    ~$0.02                $0.00

Cloud Sync — Your Data, Your Infrastructure

Sync your brain across every machine. Unlike other memory tools, we never store your data.

Laptop ←→ Your Cloudflare Worker ←→ Desktop
                  ↕
              Your Phone

You deploy the sync hub to your own Cloudflare account (free tier). Your D1 database, your encryption key, your data. We provide the code — you own the infrastructure.

nmem sync              # push/pull changes
nmem sync --auto       # auto-sync after every remember/recall

Sync uses Merkle delta — only diffs travel, not the full brain. Fast, efficient, private.
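The Merkle-delta idea can be illustrated with a minimal sketch. The hashing scheme and function names here are assumptions for the example, not Neural Memory's actual wire format.

```python
import hashlib

def leaf_hash(record: bytes) -> str:
    """Hash one memory record (hypothetical serialization)."""
    return hashlib.sha256(record).hexdigest()

def merkle_root(leaves):
    """Fold leaf hashes pairwise up to a single root hash."""
    level = list(leaves)
    if not level:
        return leaf_hash(b"")
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd leaf out
        level = [leaf_hash((a + b).encode()) for a, b in zip(level[::2], level[1::2])]
    return level[0]

def changed_leaves(local, remote):
    """Compare roots first; only when they differ, walk the leaves.

    A real implementation descends the tree to narrow the diff in
    O(log N) comparisons; this toy diffs leaves directly.
    """
    if merkle_root(local) == merkle_root(remote):
        return []  # roots match: nothing to sync
    return [i for i, (a, b) in enumerate(zip(local, remote)) if a != b]
```

When the roots match, a single hash comparison proves both replicas are identical; when they differ, only the changed records need to cross the network.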

Cloud Sync setup guide →


Features

Memory & Recall

  • 14 memory types — fact, decision, error, insight, preference, workflow, instruction, and more
  • Spreading activation — memories surface by association, not keyword match
  • Cognitive reasoning — hypothesize, submit evidence, make predictions, verify with Bayesian confidence
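The Bayesian confidence step above can be illustrated with the standard update rule. This is a generic sketch; the likelihood values are invented for the example and are not Neural Memory's actual parameters.

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) via Bayes' theorem.

    prior           -- current confidence P(H) in the hypothesis
    p_e_given_h     -- likelihood of the evidence if H is true
    p_e_given_not_h -- likelihood of the evidence if H is false
    """
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

# Start at 50% confidence; submit two pieces of supporting evidence.
confidence = 0.5
for _ in range(2):
    confidence = bayes_update(confidence, p_e_given_h=0.9, p_e_given_not_h=0.2)
```

Each piece of consistent evidence pushes confidence toward certainty, while contradicting evidence (likelihoods reversed) would pull it back down.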

Knowledge Ingestion

  • Train from documents — PDF, DOCX, PPTX, HTML, JSON, XLSX, CSV ingested into permanent brain knowledge
  • Import adapters — migrate from ChromaDB, Mem0, Cognee, Graphiti, LlamaIndex in one command

Lifecycle & Storage

  • Memory consolidation — episodic memories mature into semantic knowledge over time
  • Compression tiers — full → summary → essence → ghost → metadata (reclaim storage, keep meaning)
  • Brain versioning — snapshot, rollback, diff, transplant memories between brains

Community

  • Brain Store — browse, import, and publish pre-built brains to the community marketplace
  • 3 seed brains — Python Best Practices, Git Workflows, Docker Essentials (ready to import)

Ecosystem

  • Web dashboard — 7-page React UI with graph visualization, health radar, timeline, mindmap, Brain Store
  • VS Code extension — memory tree, graph explorer, CodeLens, WebSocket sync (Marketplace →)
  • Safety — Fernet encryption, sensitive content auto-detection, parameterized SQL, path validation
  • Telegram backup — send brain .db files to Telegram for offsite backup

Quick Examples

# Store memories (type auto-detected)
nmem remember "Fixed auth bug with null check in login.py:42"
nmem remember "We decided to use PostgreSQL" --type decision
nmem todo "Review PR #123" --priority 7

# Recall
nmem recall "auth bug"
nmem recall "database decision" --depth 2

# Brain management
nmem brain list && nmem brain health
nmem brain export -o backup.json

# Sync across devices
nmem sync --full

# Web dashboard
nmem serve    # http://localhost:8000/dashboard

Python API:

import asyncio
from neural_memory import Brain
from neural_memory.storage import InMemoryStorage
from neural_memory.engine.encoder import MemoryEncoder
from neural_memory.engine.retrieval import ReflexPipeline

async def main():
    storage = InMemoryStorage()
    brain = Brain.create("my_brain")
    await storage.save_brain(brain)
    storage.set_brain(brain.id)

    encoder = MemoryEncoder(storage, brain.config)
    await encoder.encode("Met Alice to discuss API design")
    await encoder.encode("Decided to use FastAPI for backend")

    pipeline = ReflexPipeline(storage, brain.config)
    result = await pipeline.query("What did we decide about backend?")
    print(result.context)  # "Decided to use FastAPI for backend"

asyncio.run(main())

Neural Memory Pro

Free Neural Memory is complete — 60 tools, unlimited memories, fully offline. You never have to pay.

But past 10K memories, things change. Keyword matching misses semantically related content. Consolidation slows to minutes. Storage grows unbounded. If your agent's brain is getting big, Pro makes it smart.

Free recalls by keyword. Pro recalls by meaning.

Query: "authentication improvements"

Free (FTS5):  2 results — exact matches only
Pro  (HNSW):  7 results — includes "JWT rotation", "session hardening", "OAuth migration"

What Pro adds

                       Free (SQLite)           Pro (InfinityDB)
Recall                 Keyword match (FTS5)    Semantic similarity (HNSW)
Speed at 1M neurons    ~500ms                  <5ms
Scale tested           ~50K neurons            2M+ neurons
Compression            Text-level trimming     5-tier vector compression (97% savings)
Consolidation          O(N²) brute-force       O(N×k) HNSW clustering
Storage per 1M         ~5 GB                   ~1 GB
Cloud sync             Manual push/pull        Merkle delta (auto, diffs only)

Pro-exclusive features

  • Cone Queries — adjustable semantic recall. Narrow the cone for precision, widen for exploration
  • Smart Merge — consolidation that scales to 1M+ neurons using HNSW neighbor clustering
  • Directional Compression — compress along multiple semantic axes while preserving meaning
  • 5-Tier Auto Lifecycle — memories flow from float32 → float16 → int8 → binary → metadata. Auto-promote on access
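The storage arithmetic behind the tier ladder is straightforward. A sketch, assuming a hypothetical 384-dimensional embedding (the dimension is an example, not a documented default):

```python
# Bytes per vector at each lifecycle tier, for a 384-dim embedding.
DIM = 384
tiers = {
    "float32":  DIM * 4,   # full precision: 4 bytes per dimension
    "float16":  DIM * 2,   # half precision
    "int8":     DIM * 1,   # scalar quantization
    "binary":   DIM // 8,  # 1 bit per dimension
    "metadata": 0,         # vector dropped, text metadata kept
}

# Demoting a cold memory from float32 to binary reclaims ~97% of its
# vector storage, matching the savings figure in the table above.
savings = 1 - tiers["binary"] / tiers["float32"]
```

Auto-promotion runs the ladder in reverse: a binary-tier memory that gets accessed again is re-embedded back up to a higher-precision tier.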

Get Pro

pip install neural-memory                 # Pro features included
nmem pro activate YOUR_LICENSE_KEY       # activate license
nmem pro status                          # verify: Pro: Active

$9/mo — 30-day money-back guarantee. All free tools keep working. Downgrade anytime, keep your data.

Pro quickstart → · Full comparison → · Pricing →


Setup by Tool

Claude Code (Plugin)
/plugin marketplace add nhadaututtheky/neural-memory
/plugin install neural-memory@neural-memory-marketplace
Cursor / Windsurf / Other MCP Clients
pip install neural-memory

Add to your editor's MCP config:

{
  "mcpServers": {
    "neural-memory": { "command": "nmem-mcp" }
  }
}
OpenClaw (Plugin)
pip install neural-memory && npm install -g neuralmemory

Set memory slot in ~/.openclaw/openclaw.json:

{ "plugins": { "slots": { "memory": "neuralmemory" } } }
Upgrade to Pro

Already using Neural Memory? Just activate your key:

nmem pro activate YOUR_LICENSE_KEY    # activate license

Then enable InfinityDB (semantic search engine):

# ~/.neuralmemory/config.toml
storage_backend = "infinitydb"

Restart your MCP server. Existing memories are auto-migrated from SQLite to InfinityDB on first startup.

Get a license → · Pro quickstart →

Installation extras
pip install neural-memory[server]              # FastAPI server + dashboard
pip install neural-memory[extract]             # PDF/DOCX/PPTX/HTML/XLSX extraction
pip install neural-memory[nlp-vi]              # Vietnamese NLP
pip install neural-memory[embeddings]          # Local embedding models
pip install neural-memory[embeddings-openai]   # OpenAI embeddings
pip install neural-memory[all]                 # Everything
Benchmarks vs alternatives
Metric               NeuralMemory   Mem0                   Cognee
Write 50 memories    1.2s           148.2s (124x slower)   290.6s (242x slower)
Read 20 queries      1.8s           2.9s                   34.6s
API calls            0              70                     149

Zero LLM calls, zero API cost. Full benchmarks →


Documentation

Guide Description
Quickstart Guide Interactive guide with animated demos
Pro Quickstart Get started with Pro features
CLI Reference All 66 CLI commands
MCP Tools Reference All 60 MCP tools with parameters
Cloud Sync Multi-device sync setup
Brain Health Guide Understanding and improving brain health
Embedding Setup Configure embedding providers
Architecture Technical design deep-dive

Development

git clone https://github.com/nhadaututtheky/neural-memory
cd neural-memory && pip install -e ".[dev]"
pytest tests/ -v          # 7000+ tests
ruff check src/ tests/    # Lint

See CONTRIBUTING.md for guidelines.

Support

If Neural Memory helps your AI agent remember, please consider giving it a star — it helps others discover the project and keeps development going.

Star on GitHub

You can also sponsor the project.

License

MIT — see LICENSE.

Project details



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

neural_memory-4.51.1.tar.gz (1.8 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

neural_memory-4.51.1-py3-none-any.whl (2.1 MB)

Uploaded Python 3

File details

Details for the file neural_memory-4.51.1.tar.gz.

File metadata

  • Download URL: neural_memory-4.51.1.tar.gz
  • Upload date:
  • Size: 1.8 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for neural_memory-4.51.1.tar.gz
Algorithm Hash digest
SHA256 f113a2ac78b89c8bff95fa90a709cb7a9d32b4d7552f6ccbddaa88ab22a4726f
MD5 2f114f098821767b941bac3174c09c45
BLAKE2b-256 f26a358ec9da96f2035c6d3231ff1ebc79c3f81feaedb5f309c6e989b6a7c3fd

See more details on using hashes here.

Provenance

The following attestation bundles were made for neural_memory-4.51.1.tar.gz:

Publisher: release.yml on nhadaututtheky/neural-memory

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file neural_memory-4.51.1-py3-none-any.whl.

File metadata

  • Download URL: neural_memory-4.51.1-py3-none-any.whl
  • Upload date:
  • Size: 2.1 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for neural_memory-4.51.1-py3-none-any.whl
Algorithm Hash digest
SHA256 829cbd4a82b926c31776e9c35946ca68b6937b3f8d30f5fcfe2b28845ce647a3
MD5 d5add5a89b23d71c882586293577e6a4
BLAKE2b-256 42f5c32cd8137c78ea91ada1a555b0c711897ebba1b4c233bf61a601149f9f70

See more details on using hashes here.

Provenance

The following attestation bundles were made for neural_memory-4.51.1-py3-none-any.whl:

Publisher: release.yml on nhadaututtheky/neural-memory

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
