Agentic brain DB - the cognitive layer for AI agents
An embedded memory database for AI agents. Facts are written with confidence scores; they can expire, be contradicted, and decay with age. Retrieval fuses vector similarity, BM25, graph structure, and recency in one call. Everything goes through a typed DSL. Runs in-process, persists to SQLite.
Status: v0.6.0, alpha.
Install
pip install graphstore
Core ships with model2vec as the default embedder. Swap for Jina v5, bge-*, EmbeddingGemma, or any ONNX / GGUF model via graphstore install-embedder. PDFs, images, audio, GPU, and the web UI are opt-in extras.
pip install 'graphstore[ingest]' # PDF / DOCX / HTML
pip install 'graphstore[vision]' # local VLM for images + scanned PDFs
pip install 'graphstore[audio]' # faster-whisper speech-to-text
pip install 'graphstore[playground]' # FastAPI web UI
pip install 'graphstore[gpu]' # onnxruntime-gpu, Linux x86_64, CUDA 12
pip install 'graphstore[pro]' # one-shot agentic memory bundle (see Pro mode below)
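To swap the default embedder, use the installer command mentioned above. The model identifier below is illustrative, not a confirmed name; see Installation for the supported list:
graphstore install-embedder jina-v5-small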
Full extras matrix: Installation.
Quickstart
from graphstore import GraphStore

g = GraphStore(path="./brain")

g.execute('CREATE NODE "mem:paris" kind = "memory" '
          'DOCUMENT "Paris is the capital of France, famous for the Eiffel Tower."')
g.execute('CREATE NODE "mem:rome" kind = "memory" '
          'DOCUMENT "Rome is the capital of Italy, home to the Colosseum."')
g.execute('CREATE EDGE "mem:paris" -> "mem:rome" kind = "both_european_capitals"')

g.execute('REMEMBER "European history" LIMIT 5')       # hybrid fusion
g.execute('RECALL FROM "mem:paris" DEPTH 2 LIMIT 10')  # graph walk
g.execute('LEXICAL SEARCH "Eiffel Tower" LIMIT 5')     # BM25
g.execute('SIMILAR TO "capital city" LIMIT 5')         # vector only
DOCUMENT "text" populates the vector index, FTS5 index, and blob storage in one shot. Without it, a node is structured data only.
Natural-language ingest (Bonsai)
For agent-conversation memory, writing DSL by hand is the wrong abstraction. graphstore ships BonsaiIngestor, an LLM-driven NL→DSL converter built on a 4B Ternary-Bonsai GGUF (1.1 GB, runs on CPU at ~20 tok/s, ~150 tok/s on a CUDA 12 GPU). It reads natural-language turns and emits the DSL statements that mirror them.
from graphstore import GraphStore
from graphstore.bonsai_ingestor import BonsaiIngestor, _DEFAULT_LITE_PROMPT_PATH

g = GraphStore(path="./brain")
ing = BonsaiIngestor(
    model_path="./models/Ternary-Bonsai-4B-TQ1_0.gguf",
    gs=g,
    skill_path=str(_DEFAULT_LITE_PROMPT_PATH),  # or omit for the full prompt
    n_gpu_layers=-1,                            # 0 for CPU; -1 to offload all
)

ing.ingest("Kailash joined OpenAI.", msg_id="m1")        # @UPSERT/@UPSERT/@EDGE
ing.ingest("I prefer tea to coffee.", msg_id="m2")       # @BELIEF
ing.ingest("Maria moved to Berlin.", msg_id="m3")
ing.ingest("Actually I drink coffee now.", msg_id="m4")  # @RETRACT + @BELIEF

# Retrieval is the same NL surface
ing.ingest("Where does Maria work?", msg_id="q1", dry_run=True)  # -> @ANSWER
Prompt variants:
- bonsai_dsl_prompt_lite.txt (~600 system tokens, 16 verbs, ingest + retrieval): production sweet spot.
- bonsai_dsl_prompt.txt (~1700 system tokens, ~50 verbs, all admin DSL): full control surface.
Persistent KV cache (kv_cache_path=...) cuts cold start from ~10 s to ~1 s across process restarts.
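A sketch of wiring that up, assuming kv_cache_path is a BonsaiIngestor constructor kwarg like the others above; the cache file location is arbitrary:

ing = BonsaiIngestor(
    model_path="./models/Ternary-Bonsai-4B-TQ1_0.gguf",
    gs=g,
    kv_cache_path="./brain/bonsai.kv",  # persisted prompt KV cache, reused across restarts
)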
Architecture
Three engines behind one DSL.
- Graph: columnar numpy arrays + scipy CSR edge matrices. Reserved columns __event_at__, __confidence__, __retracted__, __source__ are first-class.
- Vector: usearch HNSW, cosine. Auto-embedding on DOCUMENT or EMBED content schemas.
- Document: SQLite + FTS5 for BM25 and blobs. Single-owner advisory lock on the path.
The DSL is Lark LALR(1). Every write, read, INGEST, and SYS * goes through it.
Deep dive: Architecture · Edge matrix.
REMEMBER
REMEMBER fuses four signals at retrieval time. SIMILAR, LEXICAL, and RECALL each expose a single leg.
| Signal | Default weight | Source |
|---|---|---|
| vec_signal | 0.52 | max sentence cosine over usearch ANN |
| bm25_signal | 0.25 | SQLite FTS5 over doc_fts |
| recency | 0.15 | exp(-age / half_life) from __event_at__ |
| graph_signal | 0.08 | sum of entity degrees |
Weights are configurable via graphstore.json, GRAPHSTORE_DSL_* env vars, or constructor kwargs.
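Illustratively, the fused score is a weighted sum of the four legs. This is a sketch of the arithmetic only, assuming each leg is normalized to [0, 1]; the real pipeline also applies the co-occurrence bonus and recall boost surfaced as _co_bonus / _recall_boost below:

import math

def fused_score(vec, bm25, age_s, graph, half_life_s=7 * 24 * 3600):
    # Default weights from the table above; the 7-day half-life is an
    # illustrative assumption, not the shipped default.
    recency = math.exp(-age_s / half_life_s)  # exp(-age / half_life) from __event_at__
    return 0.52 * vec + 0.25 * bm25 + 0.15 * recency + 0.08 * graph

fused_score(vec=0.9, bm25=0.1, age_s=86_400, graph=0.3)  # ≈ 0.65 for a day-old fact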
Every result returns per-signal scores on every node and a meta["signals"] block with the full pipeline state (fusion weights, per-stage candidate counts, reranker status):
r = g.execute('REMEMBER "Caroline counseling" LIMIT 1 WHERE kind = "message"')
n = r.data[0]
print(n["_remember_score"], n["_vector_sim"], n["_bm25_score"],
      n["_recency_score"], n["_graph_score"], n["_co_bonus"],
      n["_recall_boost"], n["_rank_stage"])
r.meta["signals"]  # {fusion, recency, stages, reranker, nucleus, ...}
Dry-run the pipeline without mutating recall counts:
g.execute('SYS EXPLAIN REMEMBER "Caroline counseling" LIMIT 3')
# kind="plan", candidates with per-signal scores, full meta["signals"]
Deep dive: REMEMBER pipeline.
ANSWER (retrieval + reader LLM)
For a full retrieve + synthesize loop, wire a reader callable and use ANSWER:
def my_reader(prompt: str, max_tokens: int = 1000) -> str:
    ...  # call any LLM (openai, litellm, local, ...)

g = GraphStore(path="./brain", reader=my_reader)
r = g.execute('ANSWER "What is the capital of France?" LIMIT 3')
r.data["answer"]       # "Paris"
r.data["cited_slots"]  # ["mem:paris", ...]
r.meta["signals"]      # same telemetry as REMEMBER
graphstore ships no LLM dependency. The reader is a plain callable; bring your own. Named readers (GraphStore(readers={"fast": a, "careful": b})) enable A/B via ANSWER "q" USING "careful".
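Putting that together (reader bodies elided; any callable with this shape works):

def fast(prompt: str, max_tokens: int = 1000) -> str:
    ...  # e.g. a small local model

def careful(prompt: str, max_tokens: int = 1000) -> str:
    ...  # e.g. a larger hosted model

g = GraphStore(path="./brain", readers={"fast": fast, "careful": careful})
g.execute('ANSWER "What is the capital of France?" USING "fast"')
g.execute('ANSWER "What is the capital of France?" USING "careful"')  # A/B the same query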
Typed query builder
Every DSL verb has a typed function. Same grammar, IDE autocomplete, injection-safe.
from graphstore import q, F, Time

q.create_node("mem:paris", kind="memory",
              document="Paris is the capital of France.").execute(g)

recent = F.gte("__event_at__", Time.now_minus(7, "d"))
q.nodes(where=F.eq("kind", "memory") & recent & ~F.eq("__retracted__", True)).execute(g)

q.batch(
    q.var("x", q.create_node("n1", kind="memory", document="a")),
    q.var("y", q.create_node("n2", kind="memory", document="b")),
    q.create_edge("$x", "$y", kind="next"),
).execute(g)
Full reference: Query builder.
Benchmarks
LongMemEval-S, 500 records, Jina v5 Small 1024d, Kaggle T4 GPU, 2026-04-19. Public kernel: kaggle.com/code/superkaiii/graphstore-jina-v5-small.
| Overall | knowledge-update | single-session-assistant | single-session-user | multi-session | temporal | preference |
|---|---|---|---|---|---|---|
| 97.0% | 100.0% | 100.0% | 98.6% | 98.5% | 94.7% | 83.3% |
Query p50 46 ms / p95 76 ms. Retrieval-only, no LLM judge.
LoCoMo, conv-26, all 199 questions, jina-v5-small 1024d, MiniMax M2.7 reader. The best adapter (Bonsai NL→DSL ingest) scores overall token-F1 0.476, vs 0.464 for deterministic NER ingest and 0.392 for remote-LLM ingest, all on the same retrieval stack. Scoring matched byte-for-byte against snap-research/locomo task_eval/evaluation.py (parity test in tests/test_locomo_scoring_parity.py).
Full methodology: Benchmarks.
GPU offload (opt-in, off by default)
graphstore never grabs a GPU implicitly. Every *_gpu_layers default is 0 (CPU). To opt in, install [gpu] (and a CUDA-built llama-cpp-python wheel for Bonsai/embedder/reranker) and call gpu.setup():
from graphstore import gpu
status = gpu.setup()
print(status.ready, status.provider, status.device_name, status.error)
gpu.setup() is idempotent and does the dirty work: it discovers any nvidia-*-cu12 pip wheels under site-packages, ctypes-preloads their .so files in dependency order so LD_LIBRARY_PATH doesn't have to be set externally, then probes onnxruntime + llama-cpp-python CUDA support. On success it sets GRAPHSTORE_GPU=1 so the existing compute_profile gate flips automatically. Failure is structured (status.error) and falls back to CPU silently.
For per-component control, pass the explicit kwargs (CPU stays the default):
GraphStore(path="./brain", gpu_layers=-1, reranker_gpu_layers=-1)
BonsaiIngestor(model_path=..., n_gpu_layers=-1)
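A sketch combining the two opt-in paths: probe once with gpu.setup(), then offload only if the probe succeeded:

from graphstore import GraphStore, gpu

status = gpu.setup()                # never raises on a missing GPU; check status instead
layers = -1 if status.ready else 0  # -1 = offload all layers, 0 = stay on CPU
g = GraphStore(path="./brain", gpu_layers=layers, reranker_gpu_layers=layers)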
Pro mode
pip install 'graphstore[pro]' bundles ingest + vision + audio + embedders-extra + gpu plus huggingface-hub / tokenizers / onnxruntime. Pair it with a one-time calibration to get spec-driven validation and a calibrated Bonsai ingestor without writing the device-detection / sizing / fallback glue yourself.
pip install 'graphstore[pro]'
graphstore pro setup # download every component, probe each on this host
graphstore pro status # inspect host + spec + resolved knobs
from graphstore import GraphStore
gs = GraphStore(path="./brain", profile="pro")
# Resolver caught every shortfall up-front (extras missing, calibration
# stale, RAM/VRAM short). If we got here, the spec runs.
print(gs.pro_resolved.n_ctx, gs.pro_resolved.bonsai_n_gpu_layers)
ing = gs.create_bonsai() # n_ctx / n_batch / n_gpu_layers
ing.ingest("Maria joined OpenAI.", msg_id="m1") # all wired from calibration
Defaults match the measured-best LoCoMo configuration as of this release: jina-v5-small embedder, jina-v3 reranker, bonsai-tq1_0-lite ingest, tinybert NER. Customize via a ProSpec(...) instance. Strict by default: missing extras / missing calibration / unfit host raise ProExtraNotInstalled / ProCalibrationMissing / ProUnsupportedHostError. Pass pro_strict=False to log + continue. Linux x86_64 + NVIDIA CUDA 12 only in v1.
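For example, to log shortfalls and continue instead of raising:

gs = GraphStore(path="./brain", profile="pro", pro_strict=False)
# missing extras / stale calibration / unfit host are logged; execution continues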
Full guide: Pro mode.
Scope
- Embedded, one writer per path. For multi-tenant use, wrap it in your own service (see the sketch after this list).
- No SQL, no Cypher, no distributed cluster. Graph ops exist because agent memory is a graph.
- Fusion weights are hand-tuned. Reranking is opt-in, off by default.
- Bonsai NL→DSL ingest is opt-in via BonsaiIngestor(...). Core install never auto-loads an LLM.
- GPU offload is opt-in via gpu.setup() or explicit n_gpu_layers=.... No silent device acquisition.
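A minimal sketch of the service-wrapper pattern from the first bullet, using FastAPI (already a dependency of the playground extra); the endpoint shape and payload model are ours, not part of graphstore:

from fastapi import FastAPI
from pydantic import BaseModel

from graphstore import GraphStore

app = FastAPI()
g = GraphStore(path="./brain")  # this process is the single writer for the path

class Statement(BaseModel):
    dsl: str

@app.post("/execute")
def execute(stmt: Statement):
    # Serialize all tenant access through the one embedded store.
    r = g.execute(stmt.dsl)
    return {"data": r.data, "meta": r.meta}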
Development
git clone https://github.com/orkait/graphstore.git
cd graphstore
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev,ingest,vision,embedders-extra,playground]"
pytest
Docs site under website/ (Docusaurus). Run locally:
cd website && bun install && bun run start
License
AGPL-3.0. See LICENSE.