
agentic-memory (memcite)


Open-source repo memory for AI agents — every memory has a source, every source gets verified.

Why

AI agents forget everything between sessions. Existing memory layers (mem0, Zep, LangMem) store text in vector DBs but can't tell you where that knowledge came from or whether it's still true.

agentic-memory enforces a simple rule: No evidence, no memory.

Every memory must cite its source (file path + line number, git commit, URL). Before an agent uses a memory, the citation is automatically re-validated. Stale memories get flagged, not silently served.

How it works

from agentic_memory import Memory, FileRef

mem = Memory("./my-project")

# Store a memory — citation is required
mem.add(
    "This project uses ruff for linting with line-length=120",
    evidence=FileRef("pyproject.toml", lines=(15, 20)),
)

# Query — returns answer + citation status
result = mem.query("What linter does this project use?")
print(result.answer)     # "ruff with line-length=120"
print(result.citations)  # [FileRef("pyproject.toml", L15-20, status=VALID)]

# Validate all memories — find what's gone stale
stale = mem.validate()
# [StaleMemory("ruff config", reason="file content changed at L15")]

Design Principles

  1. No Evidence, No Memory — add() without a citation raises an error
  2. Validate Before Use — query() re-checks citations by default
  3. Decay What's Stale — confidence drops when evidence changes; invalid memories are deprioritized
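The decay principle can be sketched in a few lines. This is a hypothetical illustration of the idea, not the library's actual internals: each memory stores a hash of its cited content, and every failed re-validation multiplies its confidence down so stale memories sink in ranking.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    text: str
    evidence_hash: str   # hash of the cited content at storage time
    confidence: float = 1.0

def revalidate(record: MemoryRecord, current_content: str, decay: float = 0.5) -> bool:
    """Return True if the evidence still matches; otherwise decay confidence."""
    current_hash = hashlib.sha256(current_content.encode()).hexdigest()
    if current_hash == record.evidence_hash:
        return True
    record.confidence *= decay   # evidence changed: deprioritize, don't delete
    return False
```

With `decay=0.5`, a memory whose cited lines have changed drops from confidence 1.0 to 0.5 on the first failed check, and keeps halving on each subsequent failure.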

Evidence Types

| Type | What it tracks | Validation method |
| --- | --- | --- |
| FileRef | File path + line range | Check file exists, content matches |
| GitCommitRef | Commit SHA + file | Verify commit exists in history |
| URLRef | Web URL | HTTP HEAD check + content hash |
| ManualRef | Human-provided note | No auto-validation (always trusted) |

Features

  • Repo-scoped — each repository gets its own memory namespace
  • Local-first — SQLite storage, no external services required
  • Citation-backed — every memory traces back to a verifiable source
  • Auto-validation — stale evidence is detected before it misleads your agent
  • Confidence scoring — memories with invalid citations get deprioritized
  • CLI included — am add, am query, am validate, am status

Installation

pip install memcite

With extras:

pip install memcite[mcp]     # MCP server for Claude Code
pip install memcite[api]     # REST API server (FastAPI)

CLI Usage

# Add a memory with file evidence
am add "Uses pytest for testing" --file tests/conftest.py --lines 1-10

# Query memories
am query "What test framework?"

# Validate all memories
am validate

# Show memory status
am status

MCP Server

Add to your .mcp.json to use with Claude Code:

{
  "mcpServers": {
    "agentic-memory": {
      "command": "am-mcp",
      "args": ["--repo", "/path/to/your/project"]
    }
  }
}

Tools: memory_add, memory_query, memory_validate, memory_status, memory_list, memory_delete

REST API

am-server --repo /path/to/repo --port 8080

OpenAPI docs at http://localhost:8080/docs. Endpoints:

| Method | Path | Description |
| --- | --- | --- |
| POST | /memories | Add a memory with evidence |
| POST | /memories/query | Hybrid search + citation validation |
| GET | /memories | List all memories |
| GET | /memories/{id} | Get a specific memory |
| DELETE | /memories/{id} | Delete a memory |
| POST | /memories/validate | Validate all citations |
| GET | /status | Memory status summary |
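A client call to POST /memories can be sketched with the standard library. The payload field names (`text`, `evidence`) are assumptions based on the SDK's add() signature, not a confirmed wire format — check the OpenAPI docs at /docs for the real schema. The helper builds the request without sending it, so it works offline:

```python
import json
import urllib.request

def add_memory_request(base_url: str, text: str, evidence: dict) -> urllib.request.Request:
    """Build (but don't send) a POST /memories request for the am-server API."""
    payload = json.dumps({"text": text, "evidence": evidence}).encode()
    return urllib.request.Request(
        f"{base_url}/memories",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Send with: urllib.request.urlopen(add_memory_request(...))
```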

Hybrid Search

When initialized with an embedding provider, queries combine FTS5 full-text search with vector similarity:

from agentic_memory import Memory, TFIDFEmbedding, FileRef

mem = Memory("./my-project", embedding=TFIDFEmbedding())
mem.add("Uses ruff for code formatting", evidence=FileRef("pyproject.toml", lines=(1, 5)))

# Finds the memory even though "linting" != "formatting"
result = mem.query("What linter does this project use?")

Default weights: FTS5 (0.65) + Vector (0.35). Customize per query:

result = mem.query("linting", fts_weight=0.5, vector_weight=0.5)
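The weighted fusion behind those parameters can be sketched as follows. This is an illustrative implementation of weighted score fusion, not necessarily the library's exact formula: normalize each score list to [0, 1], then combine per document with the configured weights.

```python
def fuse_scores(fts: dict[str, float], vec: dict[str, float],
                fts_weight: float = 0.65, vector_weight: float = 0.35) -> dict[str, float]:
    """Combine FTS5 and vector-similarity scores, keyed by memory id."""
    def normalize(scores: dict[str, float]) -> dict[str, float]:
        if not scores:
            return {}
        hi = max(scores.values()) or 1.0
        return {k: v / hi for k, v in scores.items()}

    f, v = normalize(fts), normalize(vec)
    # A memory found by either retriever gets a fused score; missing side counts as 0
    return {i: fts_weight * f.get(i, 0.0) + vector_weight * v.get(i, 0.0)
            for i in set(f) | set(v)}
```

Under this scheme a memory that matches only on keywords caps out at the FTS weight (0.65 by default), so a memory with both keyword and semantic overlap outranks it.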

Admission Control

Filter out low-value memories before they're stored:

from agentic_memory import Memory, HeuristicAdmissionController, ManualRef

mem = Memory("./my-project", admission=HeuristicAdmissionController())
mem.add("ok", evidence=ManualRef("chat"))  # raises ValueError — too vague

Or use LLM-based scoring with any OpenAI-compatible API:

from agentic_memory import LLMAdmissionController

def my_llm(system: str, user: str) -> str:
    # Call your LLM here, return JSON: {"score": 0.0-1.0, "reason": "..."}
    ...

mem = Memory("./my-project", admission=LLMAdmissionController(llm_callable=my_llm))
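For tests, the callable can be a deterministic stand-in that honors the documented contract (return JSON with "score" and "reason") without calling a real model. The word-count heuristic and 10-word threshold here are arbitrary illustration:

```python
import json

def fake_llm(system: str, user: str) -> str:
    """Deterministic stand-in for an LLM admission scorer."""
    words = len(user.split())
    score = min(1.0, words / 10)  # longer candidate memories score higher
    return json.dumps({"score": score, "reason": f"{words} words"})
```

Swapping this in for my_llm keeps admission-control code paths testable offline.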

Roadmap

  • Core SDK — add / query / validate with citation enforcement
  • CLI tool
  • MCP Server — use with Claude Code and other MCP clients
  • Admission control — LLM-based scoring to filter low-value memories
  • Hybrid search — FTS5 + TF-IDF vector fusion, pluggable embedding providers
  • REST API server — FastAPI with OpenAPI docs
  • LangChain / LlamaIndex integration
  • Web dashboard

Compared to

| | mem0 | Zep | LangMem | agentic-memory |
| --- | --- | --- | --- | --- |
| Vector search | Yes | Yes | Yes | Yes |
| Forced citations | No | No | No | Yes |
| Source validation | No | No | No | Yes |
| Staleness detection | No | No | No | Yes |
| Repo-scoped | No | No | No | Yes |
| Self-hosted | Yes | Yes | Yes | Yes |

License

MIT
