Event-sourcing-based belief-formation system for long-term AI agent memory

Bubble

A memory system that lets your chatbot truly know you, and maybe itself.

Core Idea

  1. Intensity gating. Not everything is equally worth remembering: the more trivial an input, the more accumulation is needed before it forms a memory entry.
  2. Input decomposition. Input is decomposed into atomic segments that preserve their original meaning; only highly related segments cluster into episodes, which enables high-precision retrieval.
  3. Replayability. Human memory functions like an event-sourcing system: given the same experience, a nearly identical personality would emerge again.
  4. Lightweight, with minimal LLM calls.
    paper
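The intensity-gating step (point 1) can be sketched as follows; the threshold, pool size, and function names here are illustrative, not Bubble's actual API:

```python
# Illustrative sketch of intensity gating: vivid segments are archived
# immediately, weak ones accumulate in a pool until enough gather.
THETA = 0.7       # intensity threshold (hypothetical value)
POOL_MIN = 3      # pool size needed before clustering (hypothetical)

archive, pool = [], []

def gate(segment: str, intensity: float) -> str:
    """Route one decomposed segment by its intensity score."""
    if intensity >= THETA:
        archive.append(segment)          # vivid signal: remember directly
        return "archived"
    pool.append(segment)                 # weak signal: wait for accumulation
    if len(pool) >= POOL_MIN:
        episode = list(pool)             # enough gathered: cluster + score
        pool.clear()
        archive.append(" | ".join(episode))
        return "clustered"
    return "pooled"
```

A vivid segment becomes an episode on its own, while trivial ones only surface once several of them accumulate, which is what keeps noise out of the archive.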

How it works

[ raw input ]
                               │
                         ┌─────▼─────┐
                         │ decompose │
                         └─────┬─────┘
                               │
              ┌────────────────┴────────────────┐
          ι ≥ θ                             ι < θ
              │                                 │
        vivid signal                       weak signal
              │                                 │
         ┌────▼────┐                     ┌──────▼──────┐
         │ archive │                     │    pool     │
         └────┬────┘                     │  · · · · ·  │
              │                          │ ·  · ·  · · │
              │                          │  · · · ·  · │
              │                          └──────┬──────┘
              │                                 │
              │                          enough gathered?
              │                                 │
              │                    no ──────────┘
              │                                 │ yes
              │                          ┌──────▼──────┐
              │                          │   cluster   │
              │                          │   + score   │
              │                          └──────┬──────┘
              │                                 │
              └──────────────┬──────────────────┘
                             │
                       ┌─────▼─────┐
                       │  episode  │  immutable
                       └─────┬─────┘
                             │
                     same topic chain?(NLI)
                      yes │       │ no
                          │       │
               ┌──────────▼─┐   ┌▼────────────┐
   joins chain │ ... ──► e  │   │     e       │  new chain
               └──────────┬─┘   └─────┬───────┘
                          │           │
                    ┌─────▼───────────▼─────┐
                    │       snapshot        │
                    │  centroid  │  summary │ 
                    │  (eager)   │  (lazy)  │
                    └───────────┬───────────┘
                                │
                           [ retrieve ]
                                │
               ┌────────────────┴─────────────────┐
           default                            verbose
               │                                  │
         snapshot summary             with episode chain + labels
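The snapshot node above computes the chain centroid eagerly and the summary lazily; a minimal sketch of that split, assuming a running-mean centroid (names are illustrative, not Bubble's internals):

```python
# Eager centroid update: maintain the running mean of episode embeddings
# so retrieval can match a whole chain without re-reading every episode.
class Chain:
    def __init__(self, dim: int):
        self.centroid = [0.0] * dim
        self.count = 0
        self._summary = None            # lazy: built only on first request

    def add_episode(self, embedding: list[float]) -> None:
        """Fold a new episode embedding into the running mean."""
        self.count += 1
        for i, x in enumerate(embedding):
            self.centroid[i] += (x - self.centroid[i]) / self.count

    def summary(self) -> str:
        """Deferred work (e.g. an LLM summary call) happens here, once."""
        if self._summary is None:
            self._summary = f"chain of {self.count} episodes"
        return self._summary
```

Keeping the centroid eager makes the default retrieval path cheap, while the expensive summary is only paid for when a chain is actually read.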

Setup

1. Run FalkorDB

docker run -e REDIS_ARGS="--appendonly yes --appendfsync everysec" -v <PATH>:/var/lib/falkordb/data -p 3000:3000 -p 6379:6379 -d --name falkordb falkordb/falkordb

2. Embedding model (Matryoshka)

note: the command below is the CPU version

docker run --name tei-embedding -d -p 8997:80 -v <PATH>:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-latest --model-id nomic-ai/nomic-embed-text-v1.5

Or use a cloud embedding API instead.
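The container started above speaks the OpenAI-compatible embeddings format; a quick sanity-check sketch of the request payload and response parsing (payload shape per the OpenAI embeddings API; the model id matches the container above, everything else is illustrative):

```python
import json
import urllib.request

# OpenAI-compatible request body for the TEI container started above.
payload = {
    "model": "nomic-ai/nomic-embed-text-v1.5",
    "input": ["I switched to decaf last month"],
}

def parse_embedding(response_body: str) -> list[float]:
    """Pull the first embedding vector out of an OpenAI-style response."""
    return json.loads(response_body)["data"][0]["embedding"]

# To actually call it (requires the container to be running):
# req = urllib.request.Request(
#     "http://localhost:8997/v1/embeddings",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# vec = parse_embedding(urllib.request.urlopen(req).read().decode())
```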

3. NLI model (optional, but recommended; saves some LLM calls)

note: the command below is the CPU version

docker run --name tei-nli -d -p 8999:80 -v <PATH>:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-latest --model-id cross-encoder/nli-deberta-v3-small
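The NLI model classifies an episode pair as entailment, neutral, or contradiction; a hedged sketch of turning such scores into the same-topic-chain decision from the diagram (the list-of-label-scores response shape is an assumption about the TEI /predict route; verify against your deployment):

```python
# Illustrative: map an NLI prediction to the "same topic chain?" decision.
# The exact response shape of the /predict endpoint is an assumption here.
def same_topic(nli_scores: list[dict]) -> bool:
    """Join the existing chain only when entailment is the top label."""
    best = max(nli_scores, key=lambda s: s["score"])
    return best["label"] == "entailment"
```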

Required configuration in your .env file

# Provider to use: anthropic | openai | gemini  (default: anthropic)
BUBBLE_LLM_PROVIDER=anthropic

# Anthropic (install: pip install 'bubble-memory[anthropic]')
ANTHROPIC_API_KEY=

# OpenAI-compatible — works with OpenAI, DeepSeek, Groq, Ollama, etc.
# (install: pip install 'bubble-memory[openai]')
# OPENAI_API_KEY=
# OPENAI_BASE_URL=https://api.openai.com/v1

# Google Gemini (install: pip install 'bubble-memory[gemini]')
# GEMINI_API_KEY=

FALKORDB_HOST=localhost
FALKORDB_PORT=6379

BUBBLE_EMBED_DIM=768
BUBBLE_EMBED_ENDPOINT=http://localhost:8997/v1/embeddings

# If you have NLI set up
BUBBLE_ENABLE_NLI=true
BUBBLE_NLI_ENDPOINT=http://localhost:8999/predict

How to use (extremely easy and clean)

Installation

pip install bubble-memory
or
uv add bubble-memory

Ingest

import bubble
bubble.process(user_id, content, prior)

prior: the context for the content, e.g. preceding messages

Retrieve

import bubble
memory_user = await bubble.retrieve(user_id, query)

Replayability

Memory episodes are archived in <project root>/data/archive as JSONL.
You can reconstruct your memory graph with a single command, with no LLM calls.

python -m bubble.main replay <user_id>
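Because the archive is plain JSONL, replay is just a fold over the event log; a minimal sketch of the idea (the id/chain field names are hypothetical, not Bubble's actual schema):

```python
# Illustrative event-sourcing replay: rebuild state by folding archived
# episodes (JSONL, one event per line) in order, with no LLM calls.
import json

def replay(lines: list[str]) -> dict:
    """Fold an episode log back into a memory-graph-like dict."""
    graph = {"episodes": [], "chains": {}}
    for line in lines:
        event = json.loads(line)
        graph["episodes"].append(event["id"])
        graph["chains"].setdefault(event["chain"], []).append(event["id"])
    return graph

log = [
    '{"id": "e1", "chain": "coffee"}',
    '{"id": "e2", "chain": "coffee"}',
    '{"id": "e3", "chain": "work"}',
]
```

Because episodes are immutable and ordered, replaying the same log deterministically yields the same graph, which is what makes the reconstruction LLM-free.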

Tuning/Customization

See .env.example for ALL tunable arguments.

Limitations

Bubble is currently an experimental project for personal use.
The current promotion formula and its tunable variables may not be optimal.
Swap out bubble.llm.prompts.DECOMPOSE_SYSTEM if it doesn't fit your use case.

Leave a star if you like this work. Contributions are welcome.
