
Bio-inspired episodic memory system for AI agents

Project description

Elo Memory

Python 3.10+ · MIT License

The memory brain for AI agents. Three lines of code give any LLM persistent memory that knows what to remember, what to forget, what changed, and what's missing.

from elo_memory import EloBrain

brain = EloBrain("user_123")
response = brain.think("I switched from Django to FastAPI", your_llm)
# The brain automatically: recalled context, stored the fact, updated the KB
# to "backend: FastAPI", superseded the old Django memory, extracted a causal
# link (when a reason is given), detected knowledge gaps, and suggested
# follow-up questions

Why Elo Memory?

Every other memory system stores text and searches it. Elo Memory understands it.

"Switched from Django to FastAPI because it was too slow"
System       What it does
ChromaDB     Stores the text. Retrieves it when a query is similar. Returns the old Django AND the new FastAPI results, with no way to tell which is current.
Mem0         Calls GPT to extract facts ($0.005/store). Stores "backend: FastAPI". Slow (500ms), needs an API key, and sends your data to OpenAI.
Elo Memory   Stores the episode. Updates the KB to "backend: FastAPI". Supersedes the Django memory. Extracts the causal link "too slow → switched to FastAPI". Detects knowledge gaps. All locally, in 30ms, for $0.

Quick Start

pip install elo-memory

For AI agents (recommended)

from elo_memory import EloBrain

brain = EloBrain("sarah")

# Full agent loop: recall → prompt → LLM → store
response = brain.think(
    "I'm Sarah, senior engineer at Shopify. We use PostgreSQL.",
    llm_fn=lambda prompt: your_llm(prompt),
)

# What does the brain know?
state = brain.what_i_know()
# {
#   knowledge: {name: "Sarah", role: "senior engineer", company: "Shopify", database: "PostgreSQL"},
#   knowledge_gaps: [{topic: "infrastructure", missing: ["hosting", "ci/cd", "monitoring"]}],
#   suggestions: ["Consider asking about: hosting, ci/cd, monitoring"],
#   facts: [...], entities: {...}, causal_links: 0, decisions_tracked: 0
# }

What happens automatically on every turn

  1. Recalls relevant memories + KB facts (30ms)
  2. Stores the message (skips filler: "thanks", "ok", "what time?")
  3. Updates KB with structured facts extracted from text
  4. Detects conflicts — "switched from X to Y" supersedes old X memories
  5. Extracts entities — names, emails, dates, amounts
  6. Tracks causality — "because X" links cause to effect
  7. Detects knowledge gaps — knows what it DOESN'T know
  8. Suggests follow-ups — unresolved decisions, missing context
  9. Generates derived facts — "Switched to FastAPI" → also indexes "Currently using FastAPI"
  10. Filters noise — near-duplicates (>0.92 cosine) silently skipped
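The noise filter in step 10 can be sketched as below, assuming a pluggable `embed` function (any sentence-embedding model); the `DedupStore` class and its threshold handling are illustrative, not Elo Memory's internals:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class DedupStore:
    """Skips a new memory whose embedding is a near-duplicate of a stored one."""

    def __init__(self, embed, threshold=0.92):
        self.embed = embed          # text -> vector (e.g. a sentence-transformer)
        self.threshold = threshold  # cosine similarity above this = duplicate
        self.items = []             # list of (text, vector)

    def store(self, text):
        vec = self.embed(text)
        for _, existing in self.items:
            if cosine(vec, existing) > self.threshold:
                return False        # near-duplicate: silently skipped
        self.items.append((text, vec))
        return True
```

With real embeddings, "thanks!" and "thanks a lot!" land well above 0.92 and only the first is kept.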

Direct memory access

from elo_memory import UserMemory

memory = UserMemory("sarah", persistence_path="./memories")

# Store
result = memory.store("My email is sarah@shopify.com")
# {stored: True, entities: {emails: ["sarah@shopify.com"], names: ["Sarah"]}}

# Recall
results = memory.recall("contact info?", k=7)
# [("My email is sarah@shopify.com", 0.47)]

# Profile
memory.get_profile()
# {user_id: "sarah", total_memories: 42, entities: {emails: [...], names: [...]}}

# Current facts only (superseded removed)
memory.get_facts()

What No Competitor Has

1. Knowledge Gap Detection

brain.think("I'm building a payment system with Stripe", llm)
state = brain.what_i_know()
state["knowledge_gaps"]
# [{topic: "payment", missing: ["currency", "volume", "compliance", "region"],
#   suggestion: "Consider asking about: currency, volume, compliance, region"}]

The brain knows what it doesn't know and tells the agent to ask.
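One way detection like this can work is a hand-written map from topics to expected fields, diffed against the KB; `EXPECTED_FIELDS` and `find_gaps` are hypothetical names for illustration, and the library's heuristics are presumably richer:

```python
# Hypothetical topic -> expected-fields map (not the library's actual data)
EXPECTED_FIELDS = {
    "payment": ["currency", "volume", "compliance", "region"],
}

def find_gaps(kb, topics):
    """Return knowledge gaps: expected fields of each topic absent from the KB."""
    gaps = []
    for topic in topics:
        missing = [f for f in EXPECTED_FIELDS.get(topic, []) if f not in kb]
        if missing:
            gaps.append({
                "topic": topic,
                "missing": missing,
                "suggestion": "Consider asking about: " + ", ".join(missing),
            })
    return gaps
```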

2. Causal Reasoning

brain.think("Switched to FastAPI because Django was too slow for websockets", llm)
brain.prepare("Why did we change the backend?")
# System prompt includes:
# ## Reasons (causal links)
# - Django was too slow for websockets → Switched to FastAPI
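A "because X" link can be pulled out with a single regex, as in this sketch (illustrative only; the actual parser likely covers more connectives such as "since" or "so"):

```python
import re

def extract_causal_link(text):
    """Split 'EFFECT because CAUSE' into a cause/effect pair, or return None."""
    m = re.search(r"(.+?)\s+because\s+(.+)", text, re.IGNORECASE)
    if not m:
        return None
    effect, cause = m.group(1).strip(), m.group(2).strip()
    return {"cause": cause, "effect": effect}
```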

3. Conflict Resolution (no LLM needed)

brain.think("I drive a BMW", llm)
brain.think("Just picked up my new Tesla yesterday", llm)
# BMW memory automatically superseded. "What car?" → Tesla only.

Works for explicit ("switched from X to Y") and implicit ("got a new X") contradictions.
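Both patterns can be approximated with regexes alone, which is why no LLM is needed; the helper below is a hypothetical illustration, not the library's API:

```python
import re

def detect_switch(text):
    """Return the superseded/new values for a contradiction, or None."""
    # Explicit: "switched from X to Y" names both the old and the new value
    m = re.search(r"switched from (\w+) to (\w+)", text, re.IGNORECASE)
    if m:
        return {"old": m.group(1), "new": m.group(2)}
    # Implicit: "new X" hints the previous value of that slot is stale,
    # even though the old value is not named in the message itself
    m = re.search(r"\bnew (\w+)", text, re.IGNORECASE)
    if m:
        return {"old": None, "new": m.group(1)}
    return None
```

In the implicit case the store would still need to look up which existing memory fills that slot before superseding it.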

4. Structured Knowledge Base

Every message automatically updates a structured KB. Queries hit the KB first (a near-instant dictionary lookup), with episodic search as the fallback.

brain._kb.get_all()
# {name: "Sarah", role: "senior engineer", company: "Shopify",
#  backend: "FastAPI", database: "PostgreSQL", team_size: "12"}
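The KB-first lookup can be sketched as follows; `answer` and its key-matching rule are assumptions for illustration, not the library's routing logic:

```python
def answer(query, kb, episodic_recall):
    """Answer from the structured KB when a key matches; else fall back."""
    q = query.lower()
    for key, value in kb.items():
        if key.replace("_", " ") in q:
            return value               # structured hit: effectively free
    return episodic_recall(query)      # fallback: vector search over episodes
```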

vs Alternatives

                                  Elo Memory   Mem0          ChromaDB   Zep
Recall accuracy (24-question test)  100%       N/A ($)       83%
Store latency                       30ms       500-2000ms    30ms       ~100ms
Cost per 1000 ops                   $0         $0.50-20      $0         $99/mo
Works offline                       Yes        No            Yes        No
Needs API key                       No         Yes           No         Yes
Structured KB                       Yes        Yes (LLM)     No         No
Conflict detection                  Yes        Yes (LLM)     No         No
Knowledge gaps                      Yes        No            No         No
Causal reasoning                    Yes        No            No         No
Decision tracking                   Yes        No            No         No

Installation

# Standard install (includes sentence-transformers)
pip install elo-memory

# From source
git clone https://github.com/server-elo/elo-memory.git
cd elo-memory && pip install -e ".[dev]"

Running Tests

pytest tests/ -v

Architecture

User message
  │
  ├─→ Knowledge Base (instant structured facts)
  │     "backend: FastAPI", "team_size: 12"
  │
  ├─→ Episodic Store (narrative context)
  │     ChromaDB + temporal expansion
  │
  ├─→ Intelligence Layer
  │     Knowledge gaps, causal chains, decision tracking
  │
  ├─→ Conflict Detection
  │     Supersede old facts, implicit contradictions
  │
  └─→ Entity Extraction
        Names, emails, dates, amounts (regex, no LLM)
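The regex-only entity extraction at the bottom of the diagram can be sketched like this (the patterns are illustrative, not the library's actual ones):

```python
import re

# Illustrative patterns; real ones would cover more date and amount formats
PATTERNS = {
    "emails": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "dates": r"\b\d{4}-\d{2}-\d{2}\b",
    "amounts": r"\$\d+(?:,\d{3})*(?:\.\d{2})?",
}

def extract_entities(text):
    """Run every pattern over the text; keep only entity types with hits."""
    found = {name: re.findall(pat, text) for name, pat in PATTERNS.items()}
    return {name: hits for name, hits in found.items() if hits}
```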

API Levels

Level         Entry Point          For
Newbie        EloBrain             3 lines, automatic everything
Intermediate  UserMemory           Per-user isolation, sessions, profiles
Advanced      EpisodicMemoryStore  Full config control, raw components
Expert        Individual engines   Surprise, segmentation, forgetting, consolidation

License

MIT — see LICENSE.

Acknowledgments

  • EM-LLM (ICLR 2025) — Research foundation
  • Itti & Baldi (2009) — Bayesian Surprise
  • Squire & Alvarez (1995) — Systems Consolidation

GitHub: https://github.com/server-elo/elo-memory
PyPI: https://pypi.org/project/elo-memory/

Download files

Download the file for your platform.

Source Distribution

elo_memory-0.2.1.tar.gz (86.2 kB)

Built Distribution

elo_memory-0.2.1-py3-none-any.whl (66.6 kB)

File details

Details for the file elo_memory-0.2.1.tar.gz.

File metadata

  • File: elo_memory-0.2.1.tar.gz
  • Size: 86.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for elo_memory-0.2.1.tar.gz

Algorithm    Hash digest
SHA256       c307b9dcb0efcb03b5235f3f69aa2f28f69195078eb498249f673a78a64dd16e
MD5          4fe330d14ae7398c9265cc5a1a87ecf3
BLAKE2b-256  421e8f70ad9015b2edf00d522b99bdc46239b72e42a9548139bc3a7e7253841f


File details

Details for the file elo_memory-0.2.1-py3-none-any.whl.

File metadata

  • File: elo_memory-0.2.1-py3-none-any.whl
  • Size: 66.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for elo_memory-0.2.1-py3-none-any.whl

Algorithm    Hash digest
SHA256       721fe6de67bf2f910fd0ed3ab250d536ad0bce17150d00fdd7db55ac195fa632
MD5          522438e6b23b5652bc2fada46393c2b3
BLAKE2b-256  789b6adead364bf24f797b54a39e63d4288ce5e6375dc91b03b376175aaf51a1

