OpenDB

The AI-native file database and memory store. Built for LLM agents to read, search, and remember.

3 lines to give your AI agent an AI-native database and long-term memory.
Read any file. Search any workspace. Remember everything.

93.6% on LongMemEval — #3 on the leaderboard, beating MemMachine, Vectorize, Emergence AI, Supermemory, and Zep.
Zero embedding APIs. Zero vector databases. Just SQLite FTS5 and good engineering.


pip install open-db[cli]
opendb index ./my_workspace
opendb serve-mcp

That's it. Your agent now has 12 MCP tools — read any file format, search across documents and code, store/recall persistent memories, and switch between multiple workspaces on the fly. Works with every major agent framework out of the box.

LongMemEval Benchmark — 93.6%

OpenDB achieves 93.6% E2E accuracy on LongMemEval (ICLR 2025), the standard benchmark for AI agent long-term memory. 500 questions, 6 categories, LLM-as-judge evaluation.

| System | LongMemEval E2E | Gen Model | Retrieval Infrastructure |
| --- | --- | --- | --- |
| OMEGA | 95.4% | GPT-4.1 | Embedding model + vector DB |
| Mastra | 94.9% | GPT-5-mini | LLM + embedding model |
| OpenDB | 93.6% | qwen3.6-plus | SQLite only, zero API |
| MemMachine | 93.0% | — | LLM + vector DB |
| Vectorize Hindsight | 91.4% | — | Embedding model |
| Emergence AI | 86.0% | — | LLM + graph DB + vector DB |
| Supermemory | 81.6% | GPT-4o | Embedding model |
| Zep/Graphiti | 71.2% | — | Graph DB + LLM |

OpenDB uses qwen3.6-plus — a significantly cheaper model than GPT-4.1 or GPT-5-mini. On the same system, Mastra showed a 10-point gap between GPT-4o (84%) and GPT-5-mini (95%), suggesting OpenDB with a frontier model would score even higher.

Per-Category Results

| Category | OpenDB | OMEGA | Supermemory | Zep |
| --- | --- | --- | --- | --- |
| single-session-assistant | 100% | 96.4% | 80.4% | — |
| knowledge-update | 97.4% | 96% | 88.5% | 83.3% |
| single-session-user | 97.1% | 97.1% | 92.9% | — |
| temporal-reasoning | 95.5% | 94% | 76.7% | 62.4% |
| multi-session | 89.5% | 83% | 71.4% | 57.9% |
| abstention | 86.7% | — | — | — |
| single-session-preference | 73.3% | 70.0% | 56.7% | — |

OpenDB beats every competitor on temporal-reasoning (95.5% vs OMEGA's 94%), knowledge-update (97.4% vs 96%), and multi-session (89.5% vs 83%) — without embeddings, without vector databases, without graph databases.

Retrieval — 100% Recall

| | OpenDB (FTS5) | MemPalace (ChromaDB) |
| --- | --- | --- |
| R@5 | 100% (470/470) | 96.6% |
| Embedding model | None | all-MiniLM-L6-v2 |
| API calls | 0 | 0 |
| Median recall latency | 1.1 ms | — |

How?

No embeddings. No vector search. No graph databases. Three things:

  1. SQLite FTS5 — BM25 keyword search with time-decay re-ranking (see the sketch after this list). 1ms recall at 10K memories.
  2. Smart conflict detection — Automatically supersedes outdated facts while preserving episodic event history.
  3. Temporal-aware prompting — Memories sorted chronologically with real session dates, giving the LLM the context it needs for temporal reasoning.
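A minimal sketch of that ranking pattern, assuming plain sqlite3 and the 0.5^(age/30) decay curve described later on this page; table and function names are illustrative, not OpenDB's internals:

import sqlite3, time

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE memories USING fts5(content, created_at UNINDEXED)")

now = time.time()
con.executemany(
    "INSERT INTO memories (content, created_at) VALUES (?, ?)",
    [
        ("User prefers dark mode", now - 45 * 86400),  # 45 days old
        ("User switched to light mode", now - 86400),  # 1 day old
    ],
)

def recall(query: str, half_life_days: float = 30.0) -> list[str]:
    # FTS5's bm25() is smaller-is-better, so negate it to get a relevance score.
    rows = con.execute(
        "SELECT content, created_at, -bm25(memories) AS relevance "
        "FROM memories WHERE memories MATCH ? LIMIT 50",
        (query,),
    ).fetchall()

    # Re-rank: relevance x 0.5^(age_days / half_life); newer memories win ties.
    def score(row):
        age_days = (now - row[1]) / 86400
        return row[2] * 0.5 ** (age_days / half_life_days)

    return [content for content, _, _ in sorted(rows, key=score, reverse=True)]

print(recall("mode"))  # the fresher "light mode" memory ranks first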

Full methodology and per-question results: benchmark/REPORT.md

Works with Every Agent Framework

OpenDB speaks MCP — the universal standard supported by all major frameworks. Pick yours:

Claude Code / Cursor / Windsurf

Add to your MCP config (.mcp.json, mcp_servers in settings, etc.):

{
  "mcpServers": {
    "opendb": {
      "command": "opendb",
      "args": ["serve-mcp", "--workspace", "/path/to/workspace"]
    }
  }
}

Claude Agent SDK (Anthropic)

from claude_agent_sdk import query, ClaudeAgentOptions
from claude_agent_sdk.mcp import MCPServerStdio

async with MCPServerStdio("opendb", ["serve-mcp", "--workspace", "./docs"]) as opendb:
    options = ClaudeAgentOptions(
        model="claude-sonnet-4-6",
        mcp_servers={"opendb": opendb},
        allowed_tools=["mcp__opendb__*"],
    )
    async for msg in query(prompt="Summarize the Q4 report", options=options):
        print(msg.content)

OpenAI Agents SDK

from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async with MCPServerStdio(name="opendb", params={
    "command": "opendb", "args": ["serve-mcp", "--workspace", "./docs"]
}) as opendb:
    agent = Agent(name="Analyst", model="gpt-4.1", mcp_servers=[opendb])
    result = await Runner.run(agent, "Find all revenue mentions in the PDF reports")
    print(result.final_output)

LangChain / LangGraph

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

async with MultiServerMCPClient({
    "opendb": {"command": "opendb", "args": ["serve-mcp", "--workspace", "./docs"], "transport": "stdio"}
}) as client:
    agent = create_react_agent("anthropic:claude-sonnet-4-6", await client.get_tools())
    result = await agent.ainvoke({"messages": [("user", "What changed in the latest spec?")]})

CrewAI

from crewai import Agent, Task, Crew
from crewai.tools import MCPServerStdio

opendb = MCPServerStdio(command="opendb", args=["serve-mcp", "--workspace", "./docs"])

analyst = Agent(role="Document Analyst", goal="Analyze workspace files", mcps=[opendb])
task = Task(description="Summarize all PDF reports in the workspace", agent=analyst)
Crew(agents=[analyst], tasks=[task]).kickoff()

AutoGen (Microsoft)

from autogen_ext.tools.mcp import mcp_server_tools, StdioServerParams
from autogen_agentchat.agents import AssistantAgent

tools = await mcp_server_tools(StdioServerParams(command="opendb", args=["serve-mcp", "--workspace", "./docs"]))
agent = AssistantAgent(name="analyst", model_client=client, tools=tools)  # client: your autogen model client
await agent.run(task="Search for deployment-related memories")

Google ADK

from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool import McpToolset
from google.adk.tools.mcp_tool.mcp_session_manager import StdioConnectionParams

agent = LlmAgent(
    model="gemini-2.5-flash",
    name="analyst",
    tools=[McpToolset(connection_params=StdioConnectionParams(command="opendb", args=["serve-mcp", "--workspace", "./docs"]))],
)

Mastra (TypeScript)

import { MCPClient } from "@mastra/mcp";
import { Agent } from "@mastra/core/agent";

const mcp = new MCPClient({
  servers: { opendb: { command: "opendb", args: ["serve-mcp", "--workspace", "./docs"] } },
});

const agent = new Agent({
  name: "Analyst",
  model: "openai/gpt-4.1",
  tools: await mcp.listTools(),
});

Python (direct, no framework)

from opendb import OpenDB

db = OpenDB.open("./my_workspace")
await db.init()
await db.index()

text    = await db.read("report.pdf", pages="1-3")
results = await db.search("quarterly revenue")
await db.memory_store("User prefers concise answers")
memories = await db.memory_recall("user preferences")

await db.close()

Build Your Own Agent (No Framework Needed)

You don't need a framework. A while loop, an LLM, and OpenDB — that's a complete agent:

import json, asyncio
from anthropic import Anthropic
from opendb import OpenDB

client = Anthropic()
db = OpenDB.open("./workspace")

TOOLS = [
    {"name": "read",   "description": "Read a file",           "input_schema": {"type": "object", "properties": {"filename": {"type": "string"}}, "required": ["filename"]}},
    {"name": "search", "description": "Search across all files","input_schema": {"type": "object", "properties": {"query": {"type": "string"}},    "required": ["query"]}},
    {"name": "memory", "description": "Store a memory",         "input_schema": {"type": "object", "properties": {"content": {"type": "string"}},  "required": ["content"]}},
    {"name": "recall", "description": "Recall memories",        "input_schema": {"type": "object", "properties": {"query": {"type": "string"}},    "required": ["query"]}},
]

async def run(task: str):
    await db.init()
    await db.index()
    messages = [{"role": "user", "content": task}]

    while True:
        resp = client.messages.create(
            model="claude-sonnet-4-6", max_tokens=4096,
            system="You have tools to read files, search, and remember things.",
            tools=TOOLS, messages=messages,
        )

        # Extract text and tool calls
        for block in resp.content:
            if block.type == "text":
                print(block.text)

        if resp.stop_reason == "end_turn":
            break

        # Execute tool calls and feed results back
        tool_results = []
        for block in resp.content:
            if block.type == "tool_use":
                match block.name:
                    case "read":   result = await db.read(block.input["filename"])
                    case "search": result = await db.search(block.input["query"])
                    case "memory": result = await db.memory_store(block.input["content"])
                    case "recall": result = await db.memory_recall(block.input["query"])
                tool_results.append({"type": "tool_result", "tool_use_id": block.id,
                                     "content": json.dumps(result) if isinstance(result, dict) else str(result)})

        messages.append({"role": "assistant", "content": resp.content})
        messages.append({"role": "user", "content": tool_results})

    await db.close()

asyncio.run(run("Summarize the Q4 report and remember the key metrics"))

That's it. ~40 lines, zero abstractions, full agent capabilities. Swap Anthropic() for any LLM client — the pattern is the same.
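For example, swapping in the OpenAI client only changes the tool-call plumbing. Here is a sketch of one loop iteration, reusing TOOLS and db from the example above (gpt-4.1 is a placeholder model name; this shows the shape of the change, not a drop-in replacement):

import json
from openai import OpenAI

client = OpenAI()

# Chat Completions expects a different tool schema than Anthropic's.
OPENAI_TOOLS = [
    {"type": "function", "function": {
        "name": t["name"], "description": t["description"], "parameters": t["input_schema"],
    }}
    for t in TOOLS
]

async def step(messages: list) -> bool:
    """One LLM turn; returns True when the model stops calling tools."""
    resp = client.chat.completions.create(model="gpt-4.1", tools=OPENAI_TOOLS, messages=messages)
    msg = resp.choices[0].message
    if not msg.tool_calls:
        print(msg.content)
        return True
    messages.append(msg)  # keep the assistant turn, including its tool calls
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        match call.function.name:
            case "read":   result = await db.read(args["filename"])
            case "search": result = await db.search(args["query"])
            case "memory": result = await db.memory_store(args["content"])
            case "recall": result = await db.memory_recall(args["query"])
        messages.append({"role": "tool", "tool_call_id": call.id, "content": str(result)})
    return False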

Why OpenDB?

Without OpenDB, agents write inline parsing code for every document:

# Agent writes this every time — 500+ tokens, often fails
run_command("""python -c "
import fitz; doc = fitz.open('report.pdf')  # PyMuPDF imports as fitz
for page in doc: print(page.get_text())
" """)

With OpenDB:

read_file("report.pdf")  # 50 tokens, always works

Benchmarked across 4 LLMs on 24 document tasks:

| Metric | Without OpenDB | With OpenDB |
| --- | --- | --- |
| Tokens used | 100% | 27-45% (55-73% saved) |
| Task speed | 100% | 36-58% faster |
| Answer quality | 2.4-3.2 / 5 | 3.4-3.9 / 5 |
| Success rate | 79% | 100% |

FTS vs RAG vector retrieval (25-325 documents):

| Scale | FTS Tokens Saved | FTS Quality | RAG Quality |
| --- | --- | --- | --- |
| 25 docs | 47% | 3.9/5 | 4.2/5 |
| 125 docs | 44% | 4.7/5 | 4.0/5 |
| 325 docs | 45% | 4.6/5 | 3.5/5 |

FTS quality improves with scale while RAG degrades from distractor noise. See benchmark/REPORT.md for methodology.

MCP Tools

12 tools, auto-discovered by any MCP-compatible agent:

opendb_info — Workspace overview

opendb_info()
-> Workspace: 47 files (ready: 45, processing: 1, failed: 1)
  By type:  Python (.py) 20 | PDF 12 | Excel (.xlsx) 5 | ...
  Recently updated:  config.yaml (2 min ago) | main.py (1 hr ago)

opendb_read — Read any file

Code with line numbers, documents as plain text, spreadsheets as structured JSON.

opendb_read(filename="main.py")                            # Code with line numbers
opendb_read(filename="report.pdf", pages="1-3")            # PDF pages
opendb_read(filename="report.pdf", grep="revenue+growth")  # Search within file
opendb_read(filename="budget.xlsx", format="json")          # Structured spreadsheet
opendb_read(filename="app.py", offset=50, limit=31)         # Lines 50-80

opendb_search — Search across code and documents

Regex grep for code, full-text search for documents. Auto-detects mode.

opendb_search(query="def main", path="/workspace", glob="*.py")   # Grep code
opendb_search(query="quarterly revenue")                           # FTS documents
opendb_search(query="TODO", path="/src", case_insensitive=True)    # Case insensitive

opendb_glob — Find files

opendb_glob(pattern="**/*.py", path="/workspace")
opendb_glob(pattern="src/**/*.{ts,tsx}", path="/workspace")

opendb_memory_store — Store a memory

opendb_memory_store(content="User prefers dark mode", memory_type="semantic")
opendb_memory_store(content="Deployed v2.1, rollback required", memory_type="episodic", tags=["deploy"])
opendb_memory_store(content="Always run tests before merging", memory_type="procedural")
opendb_memory_store(content="User is a senior engineer at Acme", pinned=true)

Three memory types: semantic (facts/knowledge), episodic (events/outcomes), procedural (workflows/rules).

Set pinned=true for critical facts — they get a 10x ranking boost and can be retrieved instantly with pinned_only=true. Conceptually, the boost is a multiplier on the final score, as the sketch below shows.
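A sketch, assuming the relevance-times-decay scoring from the earlier example (illustrative, not OpenDB's actual formula):

def final_score(relevance: float, age_days: float, pinned: bool,
                half_life_days: float = 30.0) -> float:
    # Pinned memories get a 10x multiplier on top of relevance x recency decay.
    score = relevance * 0.5 ** (age_days / half_life_days)
    return score * 10.0 if pinned else score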

opendb_memory_recall — Search memories

Results ranked by relevance x recency. Pinned memories always surface first.

opendb_memory_recall(query="user preferences")
opendb_memory_recall(query="deploy", memory_type="episodic")
opendb_memory_recall(pinned_only=true)   # Instant — no search needed, ideal for agent startup

opendb_memory_forget — Delete memories

opendb_memory_forget(memory_id="abc-123-def")
opendb_memory_forget(query="outdated preferences")

Workspace management — switch between projects on the fly

An agent working across multiple projects can list, add, and switch workspaces at runtime — no server restart, sub-millisecond switching after first open. The backend keeps each workspace's SQLite connection warm, so switching back and forth is just a pointer flip.
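A sketch of that pattern (illustrative, not OpenDB's actual code): a registry keeps one open connection per workspace, so switching only reassigns the active key.

import sqlite3

class WorkspaceRegistry:
    """Keeps each workspace's SQLite connection warm; switching is a pointer flip."""

    def __init__(self) -> None:
        self._connections: dict[str, sqlite3.Connection] = {}
        self.active: str | None = None

    def use(self, workspace_id: str, db_path: str) -> sqlite3.Connection:
        # Only the first switch to a workspace pays the open cost;
        # switching back later just reassigns `active`.
        if workspace_id not in self._connections:
            self._connections[workspace_id] = sqlite3.connect(db_path)
        self.active = workspace_id
        return self._connections[workspace_id]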

opendb_list_workspaces()
-> Active: [a3f2b1c8] openDB  (D:/work/openDB)
   Known workspaces (3):
   * [a3f2b1c8] openDB        D:/work/openDB       (last used 2026-04-10 14:22)
     [7d9e0422] my-notes      C:/Users/me/notes    (last used 2026-04-09 10:11)
     [e18a9f03] client-docs   D:/clients/acme      (last used 2026-04-08 17:45)

opendb_use_workspace(id_or_root="7d9e0422")         # Switch by id
opendb_use_workspace(id_or_root="D:/clients/acme")  # ...or by path
opendb_add_workspace(root="./new_project", switch=True)
opendb_current_workspace()
opendb_remove_workspace(id_or_root="e18a9f03")

Workspaces are persisted in ~/.opendb/workspaces.json (override with FILEDB_STATE_DIR). Every opendb_read / opendb_search / opendb_glob / opendb_memory_* call targets the currently-active workspace.
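A hypothetical snippet to locate and inspect that registry from Python (the file's exact schema is not documented here):

import json, os
from pathlib import Path

# FILEDB_STATE_DIR overrides the default ~/.opendb location, as documented above.
state_dir = Path(os.environ.get("FILEDB_STATE_DIR", str(Path.home() / ".opendb")))
registry = json.loads((state_dir / "workspaces.json").read_text())
print(registry)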

Agent Memory

OpenDB doubles as a long-term memory store for AI agents — persistent across sessions, ranked by relevance and recency, with pinned priorities.

Why not Markdown files?

| | Markdown files | OpenDB Memory |
| --- | --- | --- |
| Search | Full-file scan, substring match | FTS5 BM25 index, O(log n) |
| Ranking | None — all matches are equal | Relevance x recency decay |
| Capacity | Claude Code: 200-line hard limit | No hard limit, indexed |
| CJK | Broken (no word segmentation) | jieba tokenization, native CJK |
| Staleness | Old = new, manual cleanup | 0.5^(age/30) auto-decay |
| Structure | Free text + frontmatter | tags[], metadata{}, memory_type, pinned |
| Agent cost | Tokens spent on file management | 3 API calls: store/recall/forget |

Why not vector databases?

FTS quality improves with scale while vector/RAG degrades. Vector similarity retrieves topically similar noise; FTS retrieves exactly what the agent asked for.

| | OpenDB (FTS) | Vector (cosine) |
| --- | --- | --- |
| Recall accuracy | 90% | 100% |
| Recall latency | 0.57ms | 223.76ms |
| Speed | 393x faster | baseline |
| Embedding tokens | 0 | 454 |
| API calls | 0 | 21 |

The 10% accuracy gap comes from synonyms ("food allergy" vs "allergic to shellfish"). For everything else — keyword recall, temporal queries, knowledge updates, multi-session reasoning — FTS wins while costing nothing.

Memory stress tests — 23/23 (100%)

| Suite | Result | Description |
| --- | --- | --- |
| Knowledge Update | 5/5 | Conflict detection auto-supersedes stale facts |
| Abstention | 5/5 | FTS correctly returns empty for unrelated queries |
| Temporal Reasoning | 4/4 | Recency-biased ranking surfaces latest events |
| CJK Support | 5/5 | Chinese, Japanese, mixed CJK-English |
| Memory Scale (10K) | 4/4 | 0.5ms recall at 10,000 memories |

Document search scalability

| Documents | Needle Accuracy | Search p50 | Search p95 |
| --- | --- | --- | --- |
| 500 | 100% | 0.44ms | 1.00ms |
| 1,000 | 100% | 0.62ms | 1.99ms |
| 5,000 | 100% | 0.75ms | 7.19ms |

Search time scales sublinearly (10x docs -> 1.7x latency).

Supported Formats

| Format | Extensions | Features |
| --- | --- | --- |
| PDF | .pdf | Pages, tables, OCR for scanned docs |
| Word | .docx | Page breaks, tables, headings |
| PowerPoint | .pptx | Slides, speaker notes, tables |
| Excel | .xlsx | Multiple sheets, structured JSON output |
| CSV | .csv | Auto-encoding detection, structured JSON |
| Code | .py .js .ts .go .rs .java ... | Line-numbered output |
| Text | .txt .md .html .json .xml | Paragraph chunking |
| Images | .png .jpg .tiff .bmp | OCR (English + Chinese) |

Key Features

  • 3-line setup — pip install, index, serve-mcp — works with every agent framework
  • 12 MCP tools — read, search, glob, info for files; memory_store, memory_recall, memory_forget for memory; list_workspaces, use_workspace, add_workspace, remove_workspace, current_workspace for multi-project workspace switching
  • Runtime workspace switching — agents can list/add/switch workspaces at runtime with no server restart; already-opened workspaces switch in under a millisecond
  • 93.6% LongMemEval — #3 on the leaderboard with a cheap model and zero retrieval infrastructure
  • 100% R@5 retrieval — Perfect memory recall, 1.1ms median latency, zero embedding API calls
  • Dual-mode — Embedded (SQLite, zero-config) or Server (PostgreSQL, shared access); same API
  • Real-time sync — Directories are watched via OS-native events after indexing
  • Full-text search — FTS5 / tsvector with jieba CJK tokenization
  • Structured output — Spreadsheets as {sheets: [{columns, rows}]} for direct analysis (example after this list)
  • Fuzzy filename resolution — Find files by exact name, partial match, path, or UUID
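For instance, a small budget workbook would come back in that shape (values illustrative; the name field is an assumption):

{
  "sheets": [
    {
      "name": "Q4",
      "columns": ["category", "amount"],
      "rows": [["marketing", 120000], ["engineering", 450000]]
    }
  ]
}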

REST API

OpenDB also exposes a full HTTP API. Run with opendb serve (embedded) or docker-compose up (PostgreSQL).

| Endpoint | Method | Description |
| --- | --- | --- |
| /info | GET | Workspace statistics |
| /read/{filename} | GET | Read file (?pages=, ?lines=, ?grep=, ?format=json) |
| /search | POST | Full-text search or regex grep |
| /glob | GET | Find files by glob pattern |
| /index | POST | Index a directory and start watching |
| /files | POST/GET | Upload or list files |
| /memory | POST/GET | Store or list memories |
| /memory/recall | POST | Search memories with ranking |
| /memory/forget | POST | Delete memories |
| /workspaces | GET/POST | List or register workspaces |
| /workspaces/active | GET/PUT | Get or switch active workspace |
| /workspaces/{id} | DELETE | Unregister a workspace |
| /health | GET | Health check |
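A quick smoke test against a local opendb serve instance, assuming the requests library; the /search body shape here is inferred from the endpoint description, not a documented contract:

import requests

base = "http://localhost:8000"

print(requests.get(f"{base}/health").json())                                  # health check
print(requests.get(f"{base}/read/report.pdf", params={"pages": "1-3"}).text)  # read pages 1-3
print(requests.post(f"{base}/search", json={"query": "quarterly revenue"}).json())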

Configuration

Environment variables (FILEDB_ prefix):

| Variable | Default | Description |
| --- | --- | --- |
| FILEDB_BACKEND | postgres | postgres or sqlite |
| FILEDB_DATABASE_URL | postgresql://... | PostgreSQL connection |
| FILEDB_OCR_ENABLED | true | Enable Tesseract OCR |
| FILEDB_OCR_LANGUAGES | eng+chi_sim+chi_tra | OCR languages |
| FILEDB_MAX_FILE_SIZE | 104857600 | Max file size (100 MB) |
| FILEDB_INDEX_EXCLUDE_PATTERNS | [] | Exclude patterns for indexing |
| FILEDB_STATE_DIR | ~/.opendb | Location of the global workspace registry (workspaces.json) |
| OPENDB_URL | http://localhost:8000 | MCP server -> REST API URL |
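For example, an embedded, OCR-free setup might look like this in a shell profile or .env file (values illustrative; the JSON-list syntax for exclude patterns is an assumption based on the [] default):

# values illustrative
FILEDB_BACKEND=sqlite
FILEDB_OCR_ENABLED=false
# 200 MB = 200 * 1024 * 1024
FILEDB_MAX_FILE_SIZE=209715200
FILEDB_INDEX_EXCLUDE_PATTERNS=["node_modules/**", ".git/**"]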

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

pip install -e ".[dev]"
pytest

License

MIT
