ANAMNE

Brain-inspired personal memory layer for AI tools

Local-first, brain-inspired memory for Claude, Cursor, ChatGPT, and any MCP-compatible AI tool. Your AI tools remember what you told them — across sessions, models, and machines.

PyPI · License: MIT · Python 3.12+ · MCP Compatible · CI

About this project: Personal open-source project, released under the MIT license. Not a commercial product, not for sale, not seeking compensation. Bug reports and PRs are welcome; support is best-effort and provided on the maintainer's own time. No service-level agreement is implied.


What this is

AI tools forget you between sessions. Every new chat starts from zero — re-explaining what you're building, what you've decided, what your preferences are.

ANAMNE is a local memory layer that any MCP-compatible AI can read from and write to. You tell it things once. Every Claude / Cursor / ChatGPT session after that has access through the MCP protocol.

pip install anamne
anamne init
# Tell it something once
anamne remember "we use Postgres because we need concurrent writes"
anamne journal  "Fixed Stripe webhook double-fire: idempotency key was wrong"

# Ask it later (or have your AI do it via MCP)
anamne ask "why did we pick our database?"

That's the loop. Everything else is variations on capture and recall.


Why "brain-inspired"

ANAMNE is built around three memory layers from the LIGHT framework and the Agent Cognitive Compressor:

| Layer | What it stores | Decay |
| --- | --- | --- |
| Episodic | Git commits, ADRs, architectural decisions | Bi-temporal (valid_until) |
| Scratchpad | Durable facts you wrote down | ACT-R activation (recency × frequency) |
| Working | Active session context, reminders | TTL (auto-expires) |

When you ask a question, all three layers are searched in parallel. Top results are combined and cited back. Lower-ranked results are compressed into a single summary before being sent to the LLM — the ACC paper's idea of bounded compressed state.

The framing is a useful metaphor grounded in real research, not a neuroscience claim.
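The recall shape is easy to picture in code. A minimal sketch (illustrative only — the function name, scores, and summarize step are invented here, not ANAMNE's actual implementation):

```python
def recall(scored_memories, k=3):
    """Merge search hits from all layers, rank them, keep the top-k
    verbatim, and collapse the tail into one bounded summary --
    the ACC 'compressed state' idea."""
    ranked = sorted(scored_memories, key=lambda m: m[1], reverse=True)
    top, tail = ranked[:k], ranked[k:]
    summary = f"compressed summary of {len(tail)} lower-ranked memories"
    return [text for text, _ in top], summary

top, summary = recall([
    ("we use Postgres for concurrent writes", 0.91),  # episodic
    ("deploys happen Fridays only", 0.40),            # scratchpad
    ("session: debugging Stripe webhook", 0.35),      # working
    ("pytest over unittest", 0.10),
], k=2)
# top holds the two best hits verbatim; the other two are summarized
```

The point of the compression step is that the prompt sent to the LLM stays bounded no matter how many memories match.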


Setup

pip install anamne
anamne init

The wizard picks a model based on which API key it finds:

| Model | How to enable | Cost | Quality |
| --- | --- | --- | --- |
| Gemini 2.5 Flash Lite | GEMINI_API_KEY=... in .env | Free tier | Good |
| Claude Sonnet 4.6 | ANTHROPIC_API_KEY=... in .env | ~$0.003/commit | Best |

Data lives in ~/.anamne/ (SQLite + ChromaDB). Nothing leaves your machine.

From source: git clone https://github.com/venumittapalli576/anamne && pip install -e .


The MCP integration (why this matters)

Once anamne init runs, the same memory is available to every AI tool that speaks MCP. Add ANAMNE to your client config and Claude / Cursor / Cline can call remember, ask_why, search_facts, and 18 other tools directly — no copy-paste, no context windows to refill.

anamne mcp-config           # print Claude Code config snippet
anamne mcp-config --apply   # write it into ~/.claude.json directly
anamne mcp-config --client cursor

The result: when you open Claude on Monday, it already knows what you decided on Friday. Across machines if you sync ~/.anamne/.
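For reference, the snippet mcp-config emits follows the standard MCP client-config shape. A hand-written equivalent looks roughly like this (illustrative — get the authoritative version from anamne mcp-config, and note the command path may differ on your machine):

```json
{
  "mcpServers": {
    "anamne": {
      "command": "anamne",
      "args": ["mcp-server"]
    }
  }
}
```

Restart the client after editing its config so it re-spawns the server.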

MCP troubleshooting

"My MCP client connected but no anamne tools appear." The MCP server boots as a subprocess of the host (Claude/Cursor/Cline). Use anamne --version to confirm at least v1.0.2 is on your PATH; earlier versions refused to start without an API key. Then run anamne tools — if it lists 21 tools, the surface is healthy and the issue is on the client side (restart it).

"anamne resolves to an old version." Run pip install --upgrade anamne (or pip install -e . from a clone). The path written into ~/.claude.json is whatever the anamne executable (anamne.exe on Windows) resolved to at the time you ran mcp-config --apply — it is not updated automatically when you upgrade.

"Episodic recall doesn't return anything." Run anamne status and check Episodic decisions. Zero means you haven't indexed a repo yet — run anamne index <path-to-your-repo>.

"How do I verify the integration end-to-end without restarting Claude?" Run pytest tests/test_mcp_integration.py -v against a clone of the repo. It spawns anamne mcp-server as a subprocess and does a real MCP initialize + tools/list handshake. Same code path Claude uses.
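For a by-hand sense of what that test exchanges, the handshake is three JSON-RPC messages (a sketch of standard MCP stdio framing; it only builds the payloads — piping them into an anamne mcp-server subprocess is left out, and clientInfo values here are made up):

```python
import json

def jsonrpc(method, params=None, msg_id=None):
    """Build one JSON-RPC 2.0 message as used by MCP over stdio."""
    msg = {"jsonrpc": "2.0", "method": method}
    if params is not None:
        msg["params"] = params
    if msg_id is not None:
        msg["id"] = msg_id  # requests carry an id; notifications do not
    return msg

# 1. initialize: the client announces itself and a protocol version
init = jsonrpc("initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "handshake-check", "version": "0.0"},
}, msg_id=1)

# 2. after the server replies, acknowledge with a notification (no id)
ready = jsonrpc("notifications/initialized")

# 3. ask for the tool surface; a healthy server lists the anamne tools
tools = jsonrpc("tools/list", msg_id=2)

# Messages go over stdin as newline-delimited JSON
wire = "\n".join(json.dumps(m) for m in (init, ready, tools))
```

If tools/list comes back with 21 entries, the server side is healthy and any remaining problem is client configuration.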


The five commands you actually need

anamne remember "..."         # capture a durable fact
anamne journal  "..."         # timestamped capture (auto-tagged)
anamne ask      "..."         # cross-layer recall with citations (uses LLM)
anamne search   "..."         # fast no-LLM search of scratchpad
anamne mcp-server             # hand the memory to Claude / Cursor / Cline

Everything else is convenience on top of these.


Full command surface

The v1.0 stable surface is documented in STABLE.md. Run anamne --help for the menu, or anamne <command> --help for any specific one. Highlights:

Capture: remember, journal, import-web, import-chat, capture-clipboard, working

Recall: ask, search, search-working

Manage: facts, info, edit, tag, pin/unpin, forget, prune, consolidate, dedupe, clear, forget-tag, tag-rename

Episodic: index <repo>, sync <repo>, watch

Inspect: tags, status, stats, history, doctor

Backup: export, import-memory, backup

Interfaces: mcp-server, mcp-config, tools, shell, ui


A 60-second tour

# Capture
anamne remember "I always use pytest, not unittest" --tag python --tag testing
anamne remember "we deploy on Fridays only" --auto-tag

# Bulk import
anamne import-web https://12factor.net
anamne import-chat ~/Downloads/claude-conversation.json

# Index a repo to capture WHY decisions from git history
anamne index ./my-project

# Recall
anamne ask "what's our deploy policy?"
anamne search postgres
anamne facts --tag python

# Browse everything in your browser
anamne ui

Honest limitations

  • Output quality depends on what you capture. Vague memories give vague answers.
  • Indexing a large repo costs a few dollars on paid APIs (free on Gemini within rate limits).
  • MCP integration only works in MCP-aware editors (Claude Code, Cursor, Cline, a few others).
  • This is a personal project. Bug reports may sit for a while. Not production infrastructure.
  • The "brain-inspired" framing is a metaphor. It's grounded in real cognitive-architecture research (ACT-R, LIGHT, ACC) but it isn't a neuroscience claim.

Why not Mem0 / Supermemory / MemGPT?

Those tools are SDKs for app developers: they require a hosted backend and target SaaS builders. ANAMNE is for individuals who use AI tools daily.

| | ANAMNE | Mem0 / Supermemory |
| --- | --- | --- |
| Where the data lives | Your machine | Their backend |
| Hosting required | None (SQLite + ChromaDB) | Yes |
| MCP-native | Yes (21 tools) | No |
| Target user | Individual humans | SaaS builders |
| Open source | MIT | Various |

If you want a memory layer for your end-product, use Mem0 or Supermemory. If you want a memory layer for yourself, use ANAMNE.


Research grounding

  • LIGHT (arXiv 2510.27246) — three-layer memory framework with layer-priority conflict resolution
  • ACT-R (Anderson & Lebiere 1998) — base-level activation decay, A_i = ln(Σ_j t_j^−d); every retrieval is timestamped in retrieval_log
  • Agent Cognitive Compressor (arXiv 2601.11653) — bounded compressed state: top-K verbatim, tail compressed
  • Hippocampal indexing theory — long-term storage as compressed patterns
  • Lore protocol (arXiv 2603.15566) — git as a knowledge graph
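The ACT-R decay formula above is easy to sanity-check numerically. A toy illustration (d = 0.5 is the conventional ACT-R default; ANAMNE's actual tuning may differ):

```python
import math

def base_level_activation(ages, d=0.5):
    """A_i = ln(sum_j t_j**-d), where t_j is the time since the j-th
    retrieval. Recent and frequent retrievals both raise activation."""
    return math.log(sum(t ** -d for t in ages))

recent_frequent = base_level_activation([1, 2, 7])  # three hits this week
old_single = base_level_activation([90])            # one hit, 90 days ago
# recency x frequency wins: the first fact is far more retrievable
```

This is why a fact you keep asking about stays near the top of scratchpad recall while untouched facts sink.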

License

MIT. Open source. Bring your own key. Zero telemetry.


Maintainer notes

Pushing a vX.Y.Z tag triggers PyPI publish via Trusted Publishing:

git tag v1.0.0 && git push origin v1.0.0

One-time setup: add a Trusted Publisher at https://pypi.org/manage/account/publishing/ — repo venumittapalli576/anamne, workflow publish.yml, environment pypi.



Download files

Source distribution: anamne-1.0.7.tar.gz (146.8 kB)

Built distribution: anamne-1.0.7-py3-none-any.whl (106.0 kB)

File details

Details for the file anamne-1.0.7.tar.gz.

File metadata

  • Size: 146.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 8d4773505597ad1d77a1a266486f6dfc19f54c4805bfa8180992a09d3ec018ae |
| MD5 | 39c33f8490d143cc561557ead474e8f8 |
| BLAKE2b-256 | ff35231352581eaa7111c65c3a03f46c856d43ca879a463db0181ef516acc4cc |

Provenance

Attestation bundle for anamne-1.0.7.tar.gz — publisher: publish.yml on venumittapalli576/anamne. Attestation values reflect the state when the release was signed and may no longer be current.

File details

Details for the file anamne-1.0.7-py3-none-any.whl.

File metadata

  • Size: 106.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | f50ead319fadf56a02bd4693f4f1d297ad0bf3d44d0eaeed9bb7ef86d7bc2a08 |
| MD5 | cda5987c52906035870e7ea32c19adc9 |
| BLAKE2b-256 | 7632275b9fcb3b84eb4681e5adb51f6f6f1433020ac0308abc4e9b2d35d22922 |

Provenance

Attestation bundle for anamne-1.0.7-py3-none-any.whl — publisher: publish.yml on venumittapalli576/anamne. Attestation values reflect the state when the release was signed and may no longer be current.
