Astrocyte

Open-source memory framework for AI agents. Retain, recall, and synthesize — with pluggable backends, built-in governance, and 18 framework integrations.


What is Astrocyte?

Astrocyte gives AI agents persistent memory — store what matters, retrieve what's relevant, synthesize answers from accumulated knowledge. It sits between your agents and their storage, providing:

  • Three operations: retain(), recall(), reflect() — one API for every agent and every backend
  • Pluggable backends: Tier 1 storage (pgvector, Pinecone, Qdrant, Neo4j) or Tier 2 engines (Mystique, Mem0, Zep, Letta)
  • Built-in governance: PII scanning, rate limits, token budgets, circuit breakers, access control, observability
  • 18 framework integrations: LangGraph, CrewAI, OpenAI, Claude Agent SDK, Google ADK, AutoGen, and more
  • MCP server: Any MCP-capable agent (Claude Code, Cursor, Windsurf) gets memory with zero code
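The three-operation surface is easiest to picture with a toy in-memory model. This is purely illustrative — not Astrocyte's implementation (the real API is async and backed by pluggable storage) — but it shows what each operation is responsible for:

```python
# Toy model of the retain/recall/reflect surface.
# Illustrative only -- Astrocyte's real calls are async and backend-backed.
from dataclasses import dataclass, field

@dataclass
class ToyMemoryBank:
    memories: list[str] = field(default_factory=list)

    def retain(self, text: str) -> None:
        """Store a memory verbatim."""
        self.memories.append(text)

    def recall(self, query: str) -> list[str]:
        """Return memories sharing at least one word with the query."""
        terms = set(query.lower().split())
        return [m for m in self.memories if terms & set(m.lower().split())]

    def reflect(self, question: str) -> str:
        """Synthesize an answer from recalled memories (here: just join them)."""
        return "; ".join(self.recall(question)) or "no relevant memories"

bank = ToyMemoryBank()
bank.retain("Calvin prefers dark mode")
print(bank.recall("What does Calvin prefer?"))  # -> ['Calvin prefers dark mode']
```

In the real framework, recall is semantic retrieval against the configured backend and reflect is LLM-backed synthesis; the toy word-overlap and string-join stand in only for the shape of the contract.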

Quick start

pip install astrocyte

# The memory API is async -- run these calls inside an async function
from astrocyte import Astrocyte

brain = Astrocyte.from_config("astrocyte.yaml")

# Store a memory
await brain.retain("Calvin prefers dark mode", bank_id="user-123")

# Recall relevant memories
hits = await brain.recall("What are Calvin's preferences?", bank_id="user-123")

# Synthesize an answer from memory
result = await brain.reflect("Summarize what we know about Calvin", bank_id="user-123")
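The snippet above loads its settings from astrocyte.yaml. The key names below are illustrative guesses, not Astrocyte's actual schema — consult the docs for the real configuration reference:

```yaml
# Hypothetical shape -- key names are illustrative, not the real schema.
storage:
  backend: pgvector
  dsn: postgresql://localhost/astrocyte
```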

Agent framework integrations

Astrocyte works with every major agent framework through thin middleware — one integration per framework, and each works with every memory backend.

Framework Module
LangGraph / LangChain astrocyte.integrations.langgraph
CrewAI astrocyte.integrations.crewai
OpenAI Agents SDK astrocyte.integrations.openai_agents
Claude Agent SDK astrocyte.integrations.claude_agent_sdk
Google ADK astrocyte.integrations.google_adk
Pydantic AI astrocyte.integrations.pydantic_ai
AutoGen / AG2 astrocyte.integrations.autogen
Smolagents (HuggingFace) astrocyte.integrations.smolagents
LlamaIndex astrocyte.integrations.llamaindex
Semantic Kernel astrocyte.integrations.semantic_kernel
DSPy astrocyte.integrations.dspy
CAMEL-AI astrocyte.integrations.camel_ai
BeeAI (IBM) astrocyte.integrations.beeai
Strands Agents (AWS) astrocyte.integrations.strands
LiveKit Agents astrocyte.integrations.livekit
Haystack (deepset) astrocyte.integrations.haystack
Microsoft Agent Framework astrocyte.integrations.microsoft_agent
MCP (Claude Code, Cursor) astrocyte.mcp

MCP server

Any MCP-capable agent gets memory with zero integration code:

{
  "mcpServers": {
    "memory": {
      "command": "astrocyte-mcp",
      "args": ["--config", "astrocyte.yaml"]
    }
  }
}

Built-in governance

Neuroscience-inspired policies that protect every operation — regardless of backend:

  • PII barriers — regex, NER, or LLM-based scanning with redact/reject/warn actions
  • Rate limits & quotas — token bucket rate limiting and daily quotas per bank
  • Circuit breakers — automatic degraded mode when backends go down
  • Access control — per-bank read/write/forget/admin permissions
  • Observability — OpenTelemetry spans and Prometheus metrics on every operation
  • Data governance — classification levels, compliance profiles (GDPR, HIPAA, PDPA)
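The token-bucket rate limiting mentioned above can be sketched generically. This is a simplified model of the algorithm, not Astrocyte's governance code:

```python
# Generic token-bucket sketch -- a simplified model of the algorithm,
# not Astrocyte's governance implementation.
import time

class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Refill based on elapsed time, then try to spend `cost` tokens."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(capacity=2, refill_per_sec=0.0)  # no refill: 2 ops, then reject
print([bucket.allow() for _ in range(3)])  # -> [True, True, False]
```

The same bucket shape generalizes to token budgets: set `cost` to the LLM token count of the operation instead of 1.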

Multi-bank orchestration

Query across personal, team, and org banks with cascade, parallel, or first-match strategies:

hits = await brain.recall(
    "What are Calvin's preferences and team policies?",
    banks=["personal", "team", "org"],
    strategy="cascade",
)
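The cascade strategy can be pictured as querying banks in priority order and stopping once enough hits accumulate. This is a toy model — the actual strategy semantics may differ, so check the docs:

```python
# Toy cascade: query banks in order, stop when enough hits accumulate.
# Illustrative only -- Astrocyte's actual strategy semantics may differ.
def cascade_recall(banks: dict[str, list[str]], order: list[str],
                   query: str, min_hits: int = 1) -> list[str]:
    terms = set(query.lower().split())
    hits: list[str] = []
    for name in order:
        hits += [m for m in banks[name] if terms & set(m.lower().split())]
        if len(hits) >= min_hits:
            break  # later (broader) banks are never consulted
    return hits

banks = {
    "personal": ["Calvin prefers dark mode"],
    "team": ["Team policy: dark mode is default"],
    "org": [],
}
print(cascade_recall(banks, ["personal", "team", "org"], "dark mode"))
```

Under this model, "parallel" would query every bank and merge, while "first-match" is cascade with `min_hits=1`.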

Memory portability

Export and import memories between providers — no vendor lock-in:

await brain.export_bank("user-123", "./backup.ama.jsonl")
await brain.import_bank("user-123", "./backup.ama.jsonl")
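A JSONL export like the .ama.jsonl file above means one JSON record per line, which is what makes it portable across providers. The round trip can be sketched as follows — the record fields here are made up for illustration; the real .ama.jsonl schema may differ:

```python
# JSONL round-trip sketch. The record fields ("bank_id", "text") are
# made up for illustration; the real .ama.jsonl schema may differ.
import json, os, tempfile

records = [
    {"bank_id": "user-123", "text": "Calvin prefers dark mode"},
    {"bank_id": "user-123", "text": "Calvin works at Acme"},
]

path = os.path.join(tempfile.mkdtemp(), "backup.jsonl")
with open(path, "w", encoding="utf-8") as f:
    for rec in records:                  # export: one JSON object per line
        f.write(json.dumps(rec) + "\n")

with open(path, encoding="utf-8") as f:
    restored = [json.loads(line) for line in f]  # import: parse line by line

print(restored == records)  # -> True
```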

Evaluation

Benchmark memory quality with built-in suites and DeepEval LLM-as-judge:

from astrocyte.eval import MemoryEvaluator

evaluator = MemoryEvaluator(brain)
results = await evaluator.run_suite("basic", bank_id="eval-bank")
print(f"Recall precision: {results.metrics.recall_precision:.2%}")

Benchmarks

Astrocyte includes adapters for two academic memory benchmarks plus built-in eval suites.

Benchmark What it tests Dataset
LoCoMo (ECAI 2025) Long-term conversational memory — single-hop, multi-hop, temporal, open-domain QA snap-research/locomo
LongMemEval Long-context memory extraction, reasoning, temporal ordering xiaowu0162/LongMemEval
Built-in suites basic (quick validation) and accuracy (retrieval quality with ground truth) Included

Benchmark quick start

# Smoke test — no API key needed, in-memory providers
make bench-smoke

# With real LLM providers (requires OPENAI_API_KEY)
export OPENAI_API_KEY=sk-...

# Datasets are fetched automatically on first run
make bench-locomo-quick       # LoCoMo, 50 questions (~2-3 min)
make bench-locomo             # LoCoMo, full dataset (~30-60 min)
make bench-longmemeval        # LongMemEval
make bench-builtin            # Built-in suites only
make bench                    # All benchmarks
make bench-gate               # Check latest results against release gates

LLM adapter comparison

Compare the built-in OpenAI provider against the LiteLLM adapter with the same models, so the adapter is the only variable:

pip install astrocyte-llm-litellm   # or: uv pip install -e ../adapters-llm-py/astrocyte-llm-litellm
make bench-compare                  # Runs 50 LoCoMo questions through each

Results are written to benchmark-results/openai/latest.json and benchmark-results/litellm/latest.json.

Release gates

Use make bench-gate after a benchmark run to enforce the Hindsight-informed release thresholds in benchmarks/gates-hindsight-informed.json. These gates check minimum quality plus p95 retain/recall latency before making external parity claims.
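A release gate of this kind boils down to comparing measured metrics against thresholds. Here is a generic sketch — the threshold names below are invented, and the real schema lives in benchmarks/gates-hindsight-informed.json:

```python
# Generic gate check: every metric must meet its threshold.
# The metric/threshold names are made up; see the real gates file for the schema.
def check_gates(metrics: dict[str, float], gates: dict[str, dict]) -> list[str]:
    """Return a list of failures; an empty list means the gate passes."""
    failures = []
    for name, rule in gates.items():
        value = metrics.get(name)
        if value is None:
            failures.append(f"{name}: missing metric")
        elif "min" in rule and value < rule["min"]:      # quality floors
            failures.append(f"{name}: {value} < min {rule['min']}")
        elif "max" in rule and value > rule["max"]:      # latency ceilings (p95)
            failures.append(f"{name}: {value} > max {rule['max']}")
    return failures

gates = {"recall_precision": {"min": 0.80}, "recall_p95_ms": {"max": 250}}
print(check_gates({"recall_precision": 0.85, "recall_p95_ms": 300}, gates))
# -> ['recall_p95_ms: 300 > max 250']
```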

Dataset management

Datasets are cloned to datasets/ (gitignored) on first benchmark run. To manage manually:

make fetch-datasets       # Fetch all datasets
make clean-datasets       # Remove downloaded datasets
make clean-results        # Remove benchmark results

See docs/_design/evaluation.md for the full evaluation specification.

Development

From astrocyte-py/ with the dev extra:

uv sync --extra dev
uv run ruff check astrocyte/ tests/
uv run python -m pytest tests/ -x --tb=short

Install Git hooks (Ruff runs via pre-commit) to reduce CI lint failures. From the repository root (the parent of astrocyte-py/):

uv sync --extra dev --directory astrocyte-py
uv run --project astrocyte-py pre-commit install

Hooks run automatically on git commit; run on all files anytime with:

uv run --project astrocyte-py pre-commit run --all-files

CodeQL is not run in pre-commit (too slow for every commit). Enable Code scanning under the repo’s Settings → Code security so GitHub runs CodeQL on pushes/PRs using the default or advanced setup.

Documentation

astrocyteai.github.io/astrocyte

License

Apache 2.0 — see LICENSE.
