
agentic-memory (memcite)


Open-source repo memory for AI agents — every memory has a source, every source gets verified.

Package name on PyPI: memcite

Designed for coding agents, code review agents, and CLI tools that work on a single repository at a time.

Why

AI agents forget everything between sessions. Existing memory layers (mem0, Zep, LangMem) store text in vector DBs but can't tell you where that knowledge came from or whether it's still true.

agentic-memory enforces a simple rule: No evidence, no memory.

Every memory must cite its source (file path + line number, git commit, URL). Before an agent uses a memory, the citation is automatically re-validated. Stale memories get flagged, not silently served.
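
The re-validation mechanic can be sketched in a few lines: hash the cited span at write time, then re-hash it on read and compare. This is an illustrative sketch of the idea, not memcite's internal code; `fingerprint` and `is_stale` are hypothetical helpers.

```python
import hashlib
from pathlib import Path

def fingerprint(path: str, lines: tuple[int, int]) -> str:
    """Hash the cited line range so it can be re-checked later."""
    text = Path(path).read_text().splitlines()
    cited = "\n".join(text[lines[0] - 1 : lines[1]])
    return hashlib.sha256(cited.encode()).hexdigest()

def is_stale(path: str, lines: tuple[int, int], stored_hash: str) -> bool:
    """A citation is stale if the file vanished or its cited lines changed."""
    if not Path(path).exists():
        return True
    return fingerprint(path, lines) != stored_hash

# Demo: record a citation, then detect a change to the cited lines
demo = Path("demo_cfg.txt")
demo.write_text("line-length = 120\ntarget = py312\n")
h = fingerprint("demo_cfg.txt", (1, 1))
print(is_stale("demo_cfg.txt", (1, 1), h))   # False — evidence unchanged
demo.write_text("line-length = 100\ntarget = py312\n")
print(is_stale("demo_cfg.txt", (1, 1), h))   # True — cited lines changed
demo.unlink()
```

Hashing only the cited range, rather than the whole file, means unrelated edits elsewhere in the file don't invalidate the memory.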

How it works

from agentic_memory import Memory, FileRef

mem = Memory("./my-project")

# Store a memory — citation is required
mem.add(
    "This project uses ruff for linting with line-length=120",
    evidence=FileRef("pyproject.toml", lines=(15, 20)),
)

# Query — returns answer + citation status
result = mem.query("What linter does this project use?")
print(result.answer)     # "ruff with line-length=120"
print(result.citations)  # [FileRef("pyproject.toml", L15-20, status=VALID)]

# Validate all memories — find what's gone stale
stale = mem.validate()
# [StaleMemory("ruff config", reason="file content changed at L15")]

Design Principles

  1. No Evidence, No Memory — add() without a citation raises an error
  2. Validate Before Use — query() re-checks citations by default
  3. Decay What's Stale — confidence drops when evidence changes; invalid memories are deprioritized
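
Principle 3 can be illustrated with a toy confidence update. The multiplicative penalties below are made up for the example; the actual decay policy is the library's own.

```python
def decayed_confidence(confidence: float, citation_status: str) -> float:
    """Toy decay rule: valid evidence keeps confidence, changed evidence
    halves it, missing evidence floors it so the memory ranks last."""
    penalties = {"VALID": 1.0, "CHANGED": 0.5, "MISSING": 0.05}
    return confidence * penalties[citation_status]

print(decayed_confidence(0.9, "VALID"))    # 0.9
print(decayed_confidence(0.9, "CHANGED"))  # 0.45
```

The key design point is that invalid memories are deprioritized rather than deleted: the evidence may come back (e.g. a reverted commit), and the citation trail is preserved either way.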

Evidence Types

Type          What it tracks           Validation method
FileRef       File path + line range   Check file exists, content matches
GitCommitRef  Commit SHA + file        Verify commit exists in history
URLRef        Web URL                  HTTP HEAD check + content hash
ManualRef     Human-provided note      No auto-validation (always trusted)

Features

  • Repo-scoped — each repository gets its own memory namespace
  • Local-first — SQLite storage, no external services required
  • Citation-backed — every memory traces back to a verifiable source
  • Auto-validation — stale evidence is detected before it misleads your agent
  • Confidence scoring — memories with invalid citations get deprioritized
  • Copilot-inspired design — repository-scoped memories with evidence and decay, modeled on GitHub's agentic memory architecture
  • CLI included — am add, am query, am validate, am status

Installation

Status: alpha but usable — core features are stable; the API may still evolve.

pip install memcite

With extras:

pip install memcite[mcp]     # MCP server for Claude Code
pip install memcite[api]     # REST API server (FastAPI)

CLI Usage

# Add a memory with file evidence
am add "Uses pytest for testing" --file tests/conftest.py --lines 1-10

# Query memories
am query "What test framework?"

# Validate all memories
am validate

# Show memory status
am status

MCP Server

Add to your .mcp.json to use with Claude Code:

{
  "mcpServers": {
    "agentic-memory": {
      "command": "am-mcp",
      "args": ["--repo", "/path/to/your/project"]
    }
  }
}

Tools: memory_add, memory_query, memory_validate, memory_status, memory_list, memory_delete

REST API

am-server --repo /path/to/repo --port 8080

OpenAPI docs at http://localhost:8080/docs. Endpoints:

Method  Path                 Description
POST    /memories            Add a memory with evidence
POST    /memories/query      Hybrid search + citation validation
GET     /memories            List all memories
GET     /memories/{id}       Get a specific memory
DELETE  /memories/{id}       Delete a memory
POST    /memories/validate   Validate all citations
GET     /status              Memory status summary

Hybrid Search

When initialized with an embedding provider, queries combine FTS5 full-text search with vector similarity:

from agentic_memory import Memory, TFIDFEmbedding, FileRef

mem = Memory("./my-project", embedding=TFIDFEmbedding())
mem.add("Uses ruff for code formatting", evidence=FileRef("pyproject.toml", lines=(1, 5)))

# Finds the memory even though "linting" != "formatting"
result = mem.query("What linter does this project use?")

Default weights: FTS5 (0.65) + Vector (0.35). Customize per query:

result = mem.query("linting", fts_weight=0.5, vector_weight=0.5)
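
The fusion itself is just a weighted sum over the two score lists. A minimal sketch of the idea, assuming min-max normalization per result set (illustrative, not the library's ranking code):

```python
def normalize(scores: dict[str, float]) -> dict[str, float]:
    """Min-max normalize a {memory_id: score} map into [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {k: 1.0 for k in scores}
    return {k: (v - lo) / (hi - lo) for k, v in scores.items()}

def fuse(fts: dict[str, float], vec: dict[str, float],
         fts_weight: float = 0.65, vector_weight: float = 0.35) -> list[tuple[str, float]]:
    """Combine FTS and vector scores; ids missing from one list score 0 there."""
    fts_n, vec_n = normalize(fts), normalize(vec)
    ids = set(fts_n) | set(vec_n)
    combined = {i: fts_weight * fts_n.get(i, 0.0) + vector_weight * vec_n.get(i, 0.0)
                for i in ids}
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

ranked = fuse({"m1": 3.2, "m2": 1.1}, {"m2": 0.91, "m3": 0.88})
print(ranked[0][0])  # "m1" — the strong FTS hit wins under the default weights
```

Normalizing per result set matters because raw FTS5 BM25 scores and cosine similarities live on very different scales.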

Admission Control

Filter out low-value memories before they're stored:

from agentic_memory import Memory, HeuristicAdmissionController, ManualRef

mem = Memory("./my-project", admission=HeuristicAdmissionController())
mem.add("ok", evidence=ManualRef("chat"))  # raises ValueError — too vague
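
What counts as "too vague" is up to the controller. A toy heuristic in the same spirit might reject very short or low-information statements — the rules below are hypothetical, not HeuristicAdmissionController's actual checks:

```python
def admit(text: str, min_words: int = 4) -> bool:
    """Toy admission heuristic: reject tiny or low-content statements."""
    words = text.split()
    if len(words) < min_words:
        return False
    # Require at least one "specific" token: something path-like,
    # numeric, or identifier-shaped
    return any(any(c.isdigit() for c in w) or "/" in w or "." in w or "_" in w
               for w in words)

print(admit("ok"))                                                # False
print(admit("Uses ruff with line-length=120 in pyproject.toml"))  # True
```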

Or use LLM-based scoring with any OpenAI-compatible API:

from agentic_memory import LLMAdmissionController

def my_llm(system: str, user: str) -> str:
    # Call your LLM here, return JSON: {"score": 0.0-1.0, "reason": "..."}
    ...

mem = Memory("./my-project", admission=LLMAdmissionController(llm_callable=my_llm))
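
The callable's contract is simple: it takes a system and a user prompt and returns JSON with a score. A sketch of how such a reply could be parsed and thresholded, using a stubbed LLM and a hypothetical 0.5 threshold:

```python
import json

def stub_llm(system: str, user: str) -> str:
    # Stand-in for a real model call; scores longer candidates higher.
    score = min(1.0, len(user.split()) / 10)
    return json.dumps({"score": score, "reason": "length-based stub"})

def llm_admit(candidate: str, llm, threshold: float = 0.5) -> bool:
    """Parse the {"score": ..., "reason": ...} reply and gate on a threshold."""
    reply = json.loads(llm("Score this memory candidate from 0 to 1.", candidate))
    return reply["score"] >= threshold

print(llm_admit("ok", stub_llm))  # False — one word scores 0.1
print(llm_admit("Service reads DATABASE_URL and REDIS_URL from the environment",
                stub_llm))        # True — 0.8 clears the threshold
```

Because the contract is just a string-in, JSON-out callable, any OpenAI-compatible client can be dropped in without the library depending on a specific SDK.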

Real-world Workflows

PR reviewer agent — remember repo conventions and enforce them automatically:

mem.add(
    "Logging must use structlog, not stdlib logging",
    evidence=FileRef("docs/conventions.md", lines=(10, 15)),
)

# In your review pipeline
result = mem.query("What logging library should this project use?")
# → "structlog" with citation pointing to docs/conventions.md

Coding agent — look up project config with verifiable sources:

result = mem.query("What env vars does this service need?")
# → Returns memories citing .env.example with current validation status
# If .env.example was deleted or changed, the memory is flagged as STALE

CI pipeline — catch drifted knowledge before it causes damage:

# Add to your CI workflow
am validate --exit-code  # exits non-zero if any memory is INVALID
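
Wired into GitHub Actions, that check might look like the fragment below. The workflow file name and step layout are illustrative; only the `am validate --exit-code` invocation comes from the docs above.

```yaml
# .github/workflows/memory-check.yml (illustrative)
name: memory-check
on: [pull_request]
jobs:
  validate-memories:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install memcite
      - run: am validate --exit-code   # fails the job if any memory is INVALID
```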

Roadmap

  • Core SDK — add / query / validate with citation enforcement
  • CLI tool
  • MCP Server — use with Claude Code and other MCP clients
  • Admission control — LLM-based scoring to filter low-value memories
  • Hybrid search — FTS5 + TF-IDF vector fusion, pluggable embedding providers
  • REST API server — FastAPI with OpenAPI docs
  • GitHub App / GitLab integration (webhook + comment bot)
  • LangChain / LlamaIndex integration
  • Web dashboard

Compared to

                     mem0  Zep  LangMem  agentic-memory
Vector search        Yes   Yes  Yes      Yes
Forced citations     No    No   No       Yes
Source validation    No    No   No       Yes
Staleness detection  No    No   No       Yes
Repo-scoped          No    No   No       Yes
Self-hosted          Yes   Yes  Yes      Yes

License

MIT

Project details

Download files

Source distribution: memcite-0.4.0.tar.gz (37.0 kB)
Built distribution: memcite-0.4.0-py3-none-any.whl (31.5 kB)

File details: memcite-0.4.0.tar.gz

  • Size: 37.0 kB
  • Uploaded via: twine/6.2.0, CPython/3.12.12
  • Trusted Publishing: No

Algorithm    Hash digest
SHA256       868be643aacc7524e46e445d66439ddef5ac6a7a32d6b5c4006cb02cf213b0e6
MD5          fd59025c19f3c8cf2aa3e6588ed37901
BLAKE2b-256  f29792b41378d4c0f14456586657c031e8f53714bc475bd6dbd3b339c0cf170e

File details: memcite-0.4.0-py3-none-any.whl

  • Size: 31.5 kB
  • Uploaded via: twine/6.2.0, CPython/3.12.12
  • Trusted Publishing: No

Algorithm    Hash digest
SHA256       d9097baec1270007dbf662880b388e88753b09ba520aca1f597759d9c9ea1f9a
MD5          d9d38e84947336542be18ef28c0771ea
BLAKE2b-256  271ca57bc0ba3e94088de3b6f6947526efc6574d803356256ce3b4cabf36164f
