Astrocyte

Open-source memory framework for AI agents. Retain, recall, and synthesize — with pluggable backends, built-in governance, and 18 framework integrations.


What is Astrocyte?

Astrocyte gives AI agents persistent memory — store what matters, retrieve what's relevant, synthesize answers from accumulated knowledge. It sits between your agents and their storage, providing:

  • Three operations: retain(), recall(), reflect() — one API for every agent and every backend
  • Pluggable backends: Tier 1 storage (pgvector, Pinecone, Qdrant, Neo4j) or Tier 2 engines (Mystique, Mem0, Zep, Letta)
  • Built-in governance: PII scanning, rate limits, token budgets, circuit breakers, access control, observability
  • 18 framework integrations: LangGraph, CrewAI, OpenAI, Claude Agent SDK, Google ADK, AutoGen, and more
  • MCP server: Any MCP-capable agent (Claude Code, Cursor, Windsurf) gets memory with zero code

Quick start

pip install astrocyte

# The examples below use await, so run them inside an async function (e.g. via asyncio.run)
from astrocyte import Astrocyte

brain = Astrocyte.from_config("astrocyte.yaml")

# Store a memory
await brain.retain("Calvin prefers dark mode", bank_id="user-123")

# Recall relevant memories
hits = await brain.recall("What are Calvin's preferences?", bank_id="user-123")

# Synthesize an answer from memory
result = await brain.reflect("Summarize what we know about Calvin", bank_id="user-123")
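The exact shape of astrocyte.yaml depends on your chosen backend; the sketch below is illustrative only (key names are assumptions, not the authoritative schema — see the docs for the real reference):

```yaml
# Hypothetical config sketch -- consult the Astrocyte docs for the actual schema.
backend:
  provider: pgvector            # or: pinecone, qdrant, neo4j, mystique, mem0, zep, letta
  dsn: postgresql://localhost:5432/memories
governance:
  pii:
    mode: redact                # redact | reject | warn
  rate_limit:
    requests_per_minute: 120
```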

Agent framework integrations

Astrocyte works with every major agent framework through thin middleware: one integration per framework, and each integration works with every memory backend.

Framework Module
LangGraph / LangChain astrocyte.integrations.langgraph
CrewAI astrocyte.integrations.crewai
OpenAI Agents SDK astrocyte.integrations.openai_agents
Claude Agent SDK astrocyte.integrations.claude_agent_sdk
Google ADK astrocyte.integrations.google_adk
Pydantic AI astrocyte.integrations.pydantic_ai
AutoGen / AG2 astrocyte.integrations.autogen
Smolagents (HuggingFace) astrocyte.integrations.smolagents
LlamaIndex astrocyte.integrations.llamaindex
Semantic Kernel astrocyte.integrations.semantic_kernel
DSPy astrocyte.integrations.dspy
CAMEL-AI astrocyte.integrations.camel_ai
BeeAI (IBM) astrocyte.integrations.beeai
Strands Agents (AWS) astrocyte.integrations.strands
LiveKit Agents astrocyte.integrations.livekit
Haystack (deepset) astrocyte.integrations.haystack
Microsoft Agent Framework astrocyte.integrations.microsoft_agent
MCP (Claude Code, Cursor) astrocyte.mcp
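Most integrations follow the same pattern: recall before the agent acts, retain after. A framework-agnostic sketch of that pattern, using only the core retain/recall API shown above (the integration modules package this per framework; the hit attribute name here is an assumption):

```python
# Framework-agnostic memory middleware sketch built on the documented core API.
# The astrocyte.integrations.* modules wrap this pattern for each framework.
async def with_memory(brain, bank_id, user_message, run_agent):
    # Pull relevant context before the agent runs.
    hits = await brain.recall(user_message, bank_id=bank_id)
    context = "\n".join(h.text for h in hits)  # hit attribute name assumed

    reply = await run_agent(user_message, context=context)

    # Persist the exchange so future turns can recall it.
    await brain.retain(f"user: {user_message}\nagent: {reply}", bank_id=bank_id)
    return reply
```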

MCP server

Any MCP-capable agent gets memory with zero integration code:

{
  "mcpServers": {
    "memory": {
      "command": "astrocyte-mcp",
      "args": ["--config", "astrocyte.yaml"]
    }
  }
}

Built-in governance

Neuroscience-inspired policies that protect every operation — regardless of backend:

  • PII barriers — regex, NER, or LLM-based scanning with redact/reject/warn actions
  • Rate limits & quotas — token bucket rate limiting and daily quotas per bank
  • Circuit breakers — automatic degraded mode when backends go down
  • Access control — per-bank read/write/forget/admin permissions
  • Observability — OpenTelemetry spans and Prometheus metrics on every operation
  • Data governance — classification levels, compliance profiles (GDPR, HIPAA, PDPA)
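As a concrete illustration of the rate-limiting policy, here is a minimal generic token bucket — a sketch of the technique, not Astrocyte's internal implementation:

```python
import time

class TokenBucket:
    """Generic token-bucket limiter: holds up to `capacity` tokens,
    refilled continuously at `rate` tokens per second."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

A per-bank quota would layer a daily counter on top of this; the bucket alone only smooths burst traffic.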

Multi-bank orchestration

Query across personal, team, and org banks with cascade, parallel, or first-match strategies:

hits = await brain.recall(
    "What are Calvin's preferences and team policies?",
    banks=["personal", "team", "org"],
    strategy="cascade",
)
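Conceptually, cascade tries banks in priority order and stops once enough hits accumulate, while parallel fans out to all banks and merges. A rough sketch of the cascade logic (illustrative only; the real strategy and its thresholds live inside Astrocyte):

```python
async def cascade_recall(recall_fn, query, banks, min_hits=3):
    """Query banks in priority order; stop as soon as enough hits accumulate."""
    hits = []
    for bank in banks:
        hits.extend(await recall_fn(query, bank_id=bank))
        if len(hits) >= min_hits:
            break  # later banks are never queried
    return hits
```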

Memory portability

Export and import memories between providers — no vendor lock-in:

await brain.export_bank("user-123", "./backup.ama.jsonl")
await brain.import_bank("user-123", "./backup.ama.jsonl")
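Because the export is JSONL (one JSON object per line), backups can be inspected or filtered with standard tools before re-import. A small sketch — the record fields are an assumption about the format, not its specification:

```python
import json

def filter_export(src, dst, predicate):
    """Rewrite a .ama.jsonl export, keeping only records the predicate accepts.
    Assumes only that the file holds one JSON object per non-empty line."""
    kept = 0
    with open(src) as fin, open(dst, "w") as fout:
        for line in fin:
            if not line.strip():
                continue
            record = json.loads(line)
            if predicate(record):
                fout.write(json.dumps(record) + "\n")
                kept += 1
    return kept
```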

Evaluation

Benchmark memory quality with built-in suites and DeepEval LLM-as-judge:

from astrocyte.eval import MemoryEvaluator

evaluator = MemoryEvaluator(brain)
results = await evaluator.run_suite("basic", bank_id="eval-bank")
print(f"Recall precision: {results.metrics.recall_precision:.2%}")

Benchmarks

Astrocyte includes adapters for two academic memory benchmarks plus built-in eval suites.

  • LoCoMo (ECAI 2025) — long-term conversational memory: single-hop, multi-hop, temporal, and open-domain QA. Dataset: snap-research/locomo
  • LongMemEval — long-context memory extraction, reasoning, and temporal ordering. Dataset: xiaowu0162/LongMemEval
  • Built-in suites — basic (quick validation) and accuracy (retrieval quality with ground truth). Included with the package.

Quick start

# Smoke test — no API key needed, in-memory providers
make bench-smoke

# With real LLM providers (requires OPENAI_API_KEY)
export OPENAI_API_KEY=sk-...

# Datasets are fetched automatically on first run
make bench-locomo-quick       # LoCoMo, 50 questions (~2-3 min)
make bench-locomo             # LoCoMo, full dataset (~30-60 min)
make bench-longmemeval        # LongMemEval
make bench-builtin            # Built-in suites only
make bench                    # All benchmarks
make bench-gate               # Check latest results against release gates

LLM adapter comparison

Compare the built-in OpenAI provider against the LiteLLM adapter. Both runs use the same models, so the adapter is the only variable:

pip install astrocyte-llm-litellm   # or: uv pip install -e ../adapters-llm-py/astrocyte-llm-litellm
make bench-compare                  # Runs 50 LoCoMo questions through each

Results are written to benchmark-results/openai/latest.json and benchmark-results/litellm/latest.json.

Release gates

Use make bench-gate after a benchmark run to enforce the Hindsight-informed release thresholds in benchmarks/gates-hindsight-informed.json. These gates check minimum quality plus p95 retain/recall latency before making external parity claims.
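A gate check amounts to comparing the latest results JSON against the thresholds file. The schema below is an assumption for illustration — the real gates live in benchmarks/gates-hindsight-informed.json:

```python
import json

def check_gates(results, gates):
    """Return the names of metrics that violate their gate.
    Assumed schema: gates maps metric name -> {"min": x} or {"max": x}."""
    failures = []
    for metric, bound in gates.items():
        value = results[metric]
        if "min" in bound and value < bound["min"]:
            failures.append(metric)  # quality floor not met
        if "max" in bound and value > bound["max"]:
            failures.append(metric)  # latency ceiling exceeded
    return failures
```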

Dataset management

Datasets are cloned to datasets/ (gitignored) on first benchmark run. To manage manually:

make fetch-datasets       # Fetch all datasets
make clean-datasets       # Remove downloaded datasets
make clean-results        # Remove benchmark results

See docs/_design/evaluation.md for the full evaluation specification.

Development

From astrocyte-py/ with the dev extra:

uv sync --extra dev
uv run ruff check astrocyte/ tests/
uv run python -m pytest tests/ -x --tb=short

Git hooks (Ruff via pre-commit) — reduces CI lint failures. From the repository root (parent of astrocyte-py/):

uv sync --extra dev --directory astrocyte-py
uv run --project astrocyte-py pre-commit install

Hooks run automatically on git commit; run on all files anytime with:

uv run --project astrocyte-py pre-commit run --all-files

CodeQL is not run in pre-commit (too slow for every commit). Enable Code scanning under the repo’s Settings → Code security so GitHub runs CodeQL on pushes/PRs using the default or advanced setup.

Documentation

astrocyteai.github.io/astrocyte

License

Apache 2.0 — see LICENSE.
