# llm-memex

Personal conversation knowledge base. Import, search, and analyze conversations from ChatGPT, Claude, Gemini, and Claude Code. MCP-first for LLM agent access.
## Install

```shell
pip install py-llm-memex
```

For development:

```shell
git clone https://github.com/queelius/llm-memex
cd llm-memex
pip install -e ".[dev]"
```
## Quick Start

Import conversations:

```shell
llm-memex import conversations.json        # auto-detects format
llm-memex import ~/.claude/projects/       # directory of Claude Code sessions
llm-memex import export.json --format openai   # force format
```
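Format auto-detection can be pictured as a heuristic over the file's shape. The sketch below is illustrative, not llm-memex's actual detector; the key names checked (`mapping`, `chat_messages`) are assumptions about each vendor's export layout:

```python
import json
from pathlib import Path

def guess_format_from_data(data) -> str:
    """Guess the export format from parsed JSON. The key names here are
    assumptions about vendor export shapes, not llm-memex's real logic."""
    first = data[0] if isinstance(data, list) and data else data
    if isinstance(first, dict):
        if "mapping" in first:        # ChatGPT exports keep a message tree here
            return "openai"
        if "chat_messages" in first:  # Claude.ai exports use this key
            return "anthropic"
    return "unknown"

def guess_format(path: str) -> str:
    p = Path(path)
    if p.suffix == ".jsonl":          # Claude Code sessions are JSONL files
        return "claude-code"
    return guess_format_from_data(json.loads(p.read_text()))
```

The real importer also honors `--format` as an explicit override, as shown above.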
Browse and search:

```shell
llm-memex show                    # list conversations
llm-memex show <id>               # view a conversation
llm-memex show --search "topic"   # full-text search
```
Query the database: use the MCP server (for LLM agents) or `sqlite3` directly:

```shell
sqlite3 ~/.memex/conversations/conversations.db "SELECT count(*) FROM conversations"
```
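The same database is plain SQLite, so it can also be queried programmatically. A minimal sketch with Python's `sqlite3` module, using an in-memory toy database in place of the real one; the `source` and `title` columns are assumptions about the schema (check the `llm-memex://schema` resource or `.schema` in `sqlite3` for the real DDL):

```python
import sqlite3

# Toy stand-in for ~/.memex/conversations/conversations.db; column names
# beyond the table name are assumptions, not the actual schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE conversations (id TEXT PRIMARY KEY, source TEXT, title TEXT)")
con.executemany(
    "INSERT INTO conversations VALUES (?, ?, ?)",
    [("c1", "openai", "Trip planning"),
     ("c2", "anthropic", "Rust borrow checker"),
     ("c3", "openai", "Meal prep")],
)

# Aggregate per source -- the same kind of read the CLI one-liner performs.
for source, n in con.execute(
    "SELECT source, count(*) FROM conversations GROUP BY source ORDER BY source"
):
    print(source, n)
```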
Export:

```shell
llm-memex export output.md --format markdown
llm-memex export output.json --format json
llm-memex export ./archive --format arkiv   # universal archive format
llm-memex export ./site --format html       # self-contained HTML SPA
```
MCP server (for Claude Desktop, agent SDKs, etc.):

```shell
llm-memex mcp
```
Scripts:

```shell
llm-memex run --list                                      # available scripts
llm-memex run redact --words "secret" --level word --apply
llm-memex run redact --pattern-file api_keys.txt --apply
llm-memex run enrich_trivial --apply
```
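Word-level redaction like `run redact --words ... --level word` can be pictured as a whole-word regex pass over message text. A hedged sketch of the general technique, not the script's actual implementation:

```python
import re

def redact_words(text: str, words: list[str], mask: str = "█") -> str:
    """Replace whole-word, case-insensitive matches with a mask of equal
    length -- roughly what a word-level redaction pass might do.
    Illustrative only; not llm-memex's code."""
    pattern = re.compile(
        r"\b(?:" + "|".join(re.escape(w) for w in words) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(lambda m: mask * len(m.group()), text)

print(redact_words("The secret token is Secret.", ["secret"]))
# → The ██████ token is ██████.
```

Preserving the match length keeps message layout readable after redaction, which is one plausible reason to mask rather than delete.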
## Supported Formats
| Format | Import | Export | Notes |
|---|---|---|---|
| OpenAI (ChatGPT) | Yes | - | JSON export |
| Anthropic (Claude) | Yes | - | JSON export |
| Gemini | Yes | - | JSON export |
| Claude Code | Yes | - | JSONL, conversation-only mode |
| Claude Code (full) | Yes | - | Full fidelity: tool_use, thinking, subagents |
| Markdown | - | Yes | |
| JSON | - | Yes | |
| HTML (SPA) | - | Yes | Self-contained, light/dark, librarian chat |
| Arkiv | - | Yes | Universal record format (JSONL + schema.yaml) |
## HTML Export
The HTML exporter builds a self-contained single-page app that loads the SQLite database client-side via sql.js (Wasm). Features:
- Light/dark mode: follows OS preference with a manual toggle
- Full browser UI: conversation list, search, filter by source/tag, timeline sparkline
- Librarian chat: ask questions about your archive. An LLM queries the database via Anthropic tool use, using the metafunctor-edgeproxy by default (no API key required). You can also configure a direct Anthropic endpoint.
- Per-conversation resume chat: continue an existing conversation
- Marginalia: annotate messages and conversations with free-form notes, inline in the browser
## Notes (Marginalia)
Annotate messages and conversations with free-form text notes. Notes are stored in a dedicated notes table with FTS5 search, and appear across all surfaces (CLI, MCP, HTML SPA, exporters).
```shell
llm-memex run note add --conv <id> "this was a turning point" --apply
llm-memex run note add --conv <id> --msg <id> "key insight here" --apply
llm-memex run note list --conv <id>
llm-memex run note search "turning point"
llm-memex run note delete <note_id> --apply
```
Notes are included in exports by default. Use `--no-notes` to strip them:

```shell
llm-memex export ./public --format html             # includes notes
llm-memex export ./public --format html --no-notes  # strips notes
```
In the HTML SPA, click the pencil icon on any message or conversation header to add a note inline. Notes persist in the browser's sql.js copy and are included when you download the DB.
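The FTS5-backed note search can be pictured with a toy table. The schema below is an assumption for illustration (the real `notes` table's columns may differ; see the schema resource), but the `MATCH` query shows the mechanism:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Toy stand-in for the notes table; real column names may differ.
con.execute("CREATE VIRTUAL TABLE notes USING fts5(conv_id, body)")
con.executemany("INSERT INTO notes VALUES (?, ?)", [
    ("c1", "this was a turning point"),
    ("c1", "key insight here"),
    ("c2", "follow up next week"),
])

# FTS5 MATCH performs full-text search, as `llm-memex run note search` does.
for conv_id, body in con.execute(
    "SELECT conv_id, body FROM notes WHERE notes MATCH ?", ('"turning point"',)
):
    print(conv_id, body)
```

This requires an SQLite build with the FTS5 extension enabled, which modern CPython distributions typically include.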
## Multi-Database Config
llm-memex supports multiple named databases via `~/.memex/config.yaml`:

```yaml
primary: conversations
databases:
  conversations:
    path: ~/.memex/conversations
  claude_code_full:
    path: ~/.memex/claude_code_full
  sandbox:
    path: ~/.memex/sandbox
```
All CLI commands and MCP tools accept --db <name> (CLI) or db=<name> (MCP) to target a specific database. The primary database is used when no name is specified.
## MCP Tools
When running as an MCP server, llm-memex exposes 6 tools:

- `execute_sql`: primary read interface; all queries via SQL (read-only by default).
- `get_conversation`: tree-aware retrieval + export (metadata, messages, markdown/JSON).
- `get_conversations`: bulk retrieval with filters (tag, source, model, search, ids) and optional full messages.
- `update_conversations`: modify properties, tags, and enrichments (bulk).
- `append_message`: add messages to conversation trees.
- `add_note`: annotate a message or conversation with a free-form text note.
Resources: `llm-memex://schema` (DDL + query patterns), `llm-memex://databases` (multi-db discovery + stats).
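"Read-only by default" for a tool like `execute_sql` can be enforced at the SQLite level. One common technique is an authorizer callback that rejects every action except reads; the sketch below demonstrates that general mechanism, not llm-memex's actual implementation:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE conversations (id TEXT)")

# Actions permitted for read-only access; everything else is refused.
READ_OK = {sqlite3.SQLITE_SELECT, sqlite3.SQLITE_READ, sqlite3.SQLITE_FUNCTION}

def deny_writes(action, *args):
    # SQLite consults this for each operation while compiling a statement;
    # returning SQLITE_DENY aborts INSERT, UPDATE, DROP, and so on.
    return sqlite3.SQLITE_OK if action in READ_OK else sqlite3.SQLITE_DENY

con.set_authorizer(deny_writes)

print(con.execute("SELECT count(*) FROM conversations").fetchone())  # (0,)
try:
    con.execute("INSERT INTO conversations VALUES ('x')")
except sqlite3.DatabaseError as e:
    print("blocked:", e)
```

Other approaches (opening the file with the `mode=ro` URI flag, or whitelisting statements) trade off similarly between strictness and flexibility.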
## Development

```shell
pytest tests/llm_memex/ -v              # run tests
pytest tests/llm_memex/ --cov=llm_memex # with coverage
```
## License
MIT
## File details

### llm_memex-0.12.0.tar.gz

- Size: 73.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `ae8d8e0d564b7749951f5ddb6d8da009c67b40c66c5c4a4ea481bdb875cd7042` |
| MD5 | `1b99ba42e811d545680c8f5fc2e8afdf` |
| BLAKE2b-256 | `466c10585901fdb7d6603061fefbd1aa365c9e149a772c08bca49b4e4b7b35c9` |
### llm_memex-0.12.0-py3-none-any.whl

- Size: 82.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `4482920379c141335c560607597158594aa20b1a08a0b37f90ab5ebc98abd4c6` |
| MD5 | `3fb2d67308b449bcf6cda98fa7df44fe` |
| BLAKE2b-256 | `76d7eaaea7b210a9f66189451319a8315a48ddeda29f5d5c50422de54f5186b0` |