
OpenKB: Open LLM Knowledge Base, powered by PageIndex


Scale to long documents  •  Reasoning-based retrieval  •  Native multi-modality  •  No Vector DB


📑 What is OpenKB

OpenKB (Open Knowledge Base) is an open-source command-line system that compiles raw documents into a structured, interlinked, wiki-style knowledge base using LLMs, powered by PageIndex for vectorless long-document retrieval.

The idea is based on a concept described by Andrej Karpathy: LLMs generate summaries, concept pages, and cross-references, all maintained automatically. Knowledge compounds over time instead of being re-derived on every query.

Why not just traditional RAG?

Traditional RAG rediscovers knowledge from scratch on every query. Nothing accumulates. OpenKB compiles knowledge once into a persistent wiki, then keeps it current. Cross-references already exist. Contradictions are flagged. Synthesis reflects everything consumed.

Features

  • Broad format support — PDF, Word, Markdown, PowerPoint, HTML, Excel, CSV, text, and more via markitdown
  • Scale to long documents — Long and complex documents are handled via PageIndex tree indexing, enabling accurate, vectorless long-context retrieval
  • Native multi-modality — Retrieves and understands figures, tables, and images, not just text
  • Compiled Wiki — LLM manages and compiles your documents into summaries, concept pages, and cross-links, all kept in sync
  • Query — Ask questions (one-off) against your wiki. The LLM navigates your compiled knowledge to answer
  • Interactive Chat — Multi-turn conversations with persisted sessions you can resume across runs
  • Lint — Health checks find contradictions, gaps, orphans, and stale content
  • Watch mode — Drop files into raw/, wiki updates automatically
  • Obsidian compatible — Wiki is plain .md files with [[wikilinks]]. Open in Obsidian for graph view and browsing

🚀 Getting Started

Install

pip install openkb

Quick start

# 1. Create a directory for your knowledge base
mkdir my-kb && cd my-kb

# 2. Initialize the knowledge base
openkb init

# 3. Add documents
openkb add paper.pdf
openkb add ~/papers/                   # Add a whole directory
openkb add article.html

# 4. Ask a question
openkb query "What are the main findings?"

# 5. Or start an interactive chat session
openkb chat

Set up your LLM

OpenKB ships with multi-provider LLM support (e.g., OpenAI, Claude, Gemini) via LiteLLM (pinned to a known-good version).

Set your model during openkb init, or in .openkb/config.yaml, using provider/model LiteLLM format (like anthropic/claude-sonnet-4-6). OpenAI models can omit the prefix (like gpt-5.4).

Create a .env file with your LLM API key:

LLM_API_KEY=your_llm_api_key
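For reference, a .env file is just KEY=value lines. The sketch below is purely illustrative (OpenKB's actual loader may differ); it shows the file format the key above is expected to follow:

```python
# Minimal, illustrative .env parser (KEY=value lines, comments and blanks
# ignored). This is NOT OpenKB's real loader; it only documents the format.

def parse_env(text: str) -> dict[str, str]:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

print(parse_env("LLM_API_KEY=your_key\n# a comment\n"))
# -> {'LLM_API_KEY': 'your_key'}
```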

🧩 How It Works

Architecture

raw/                              You drop files here
 │
 ├─ Short docs ──→ markitdown ──→ LLM reads full text
 │                                     │
 ├─ Long PDFs ──→ PageIndex ────→ LLM reads document trees
 │                                     │
 │                                     ▼
 │                         Wiki Compilation (using LLM)
 │                                     │
 ▼                                     ▼
wiki/
 ├── index.md            Knowledge base overview
 ├── log.md              Operations timeline
 ├── AGENTS.md           Wiki schema (LLM instructions)
 ├── sources/            Full-text conversions
 ├── summaries/          Per-document summaries
 ├── concepts/           Cross-document synthesis ← the good stuff
 ├── explorations/       Saved query results
 └── reports/            Lint reports

Short vs. long document handling

| | Short documents | Long documents (PDF ≥ 20 pages) |
|---|---|---|
| Convert | markitdown → Markdown | PageIndex → tree index + summaries |
| Images | Extracted inline (pymupdf) | Extracted by PageIndex |
| LLM reads | Full text | Document trees |
| Result | summary + concepts | summary + concepts |

Short docs are read in full by the LLM. Long PDFs are indexed by PageIndex into a hierarchical tree with summaries. The LLM reads the tree instead of the full text, enabling better retrieval from long documents.
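The routing decision above can be sketched as follows. This is a hypothetical illustration (function and constant names are not OpenKB's actual API): PDFs at or above the configured page threshold go through PageIndex; everything else is converted in full via markitdown.

```python
# Illustrative sketch of OpenKB's short-vs-long routing (names are
# hypothetical). PDFs with >= threshold pages get a PageIndex tree;
# all other inputs are converted to Markdown in full.

PAGEINDEX_THRESHOLD = 20  # pages; maps to pageindex_threshold in config

def choose_pipeline(filename: str, page_count: int) -> str:
    """Return which ingestion pipeline a document would take."""
    if filename.lower().endswith(".pdf") and page_count >= PAGEINDEX_THRESHOLD:
        return "pageindex"   # tree index + node summaries
    return "markitdown"      # full-text Markdown conversion

print(choose_pipeline("survey.pdf", 120))  # -> pageindex
print(choose_pipeline("note.md", 3))       # -> markitdown
```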

The wiki compiles knowledge

When you add a document, the LLM:

  1. Generates a summary page
  2. Reads existing concept pages
  3. Creates or updates concepts with cross-document synthesis
  4. Updates the index and log

A single source might touch 10-15 wiki pages. Knowledge accumulates: each document enriches the existing wiki rather than sitting in isolation.
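To make the fan-out concrete, here is a purely illustrative sketch (not OpenKB's real internals) of why one source touches many pages: one summary page, one page per related concept, plus the index and log bookkeeping.

```python
# Hypothetical illustration of compile fan-out: adding one document writes
# a summary page, updates each related concept page, and touches index.md
# and log.md. Names and paths mirror the wiki layout described above.

def pages_touched(doc: str, related_concepts: list[str]) -> list[str]:
    pages = [f"summaries/{doc}.md"]                          # 1. summary
    pages += [f"concepts/{c}.md" for c in related_concepts]  # 2-3. synthesis
    pages += ["index.md", "log.md"]                          # 4. bookkeeping
    return pages

print(len(pages_touched("paper", ["rag", "retrieval", "indexing"])))  # -> 6
```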

📦 Usage

Commands

| Command | Description |
|---|---|
| openkb init | Initialize a new knowledge base (interactive) |
| openkb add <file_or_dir> | Add documents and compile to wiki |
| openkb query "question" | Ask a question against the knowledge base |
| openkb query "question" --save | Ask and save the answer to wiki/explorations/ |
| openkb chat | Start an interactive multi-turn chat (use --resume, --list, --delete to manage sessions) |
| openkb watch | Watch raw/ and auto-compile new files |
| openkb lint | Run structural + knowledge health checks |
| openkb list | List indexed documents and concepts |
| openkb status | Show knowledge base stats |

Interactive chat

openkb chat opens an interactive chat session over your wiki knowledge base. Unlike the one-shot openkb query, each turn carries the conversation history, so you can dig into a topic without re-typing context.

openkb chat                       # start a new session
openkb chat --resume              # resume the most recent session
openkb chat --resume 20260411     # resume by id (unique prefix works)
openkb chat --list                # list all sessions
openkb chat --delete <id>         # delete a session

/help lists all slash commands: e.g., /save exports the transcript, /clear starts a fresh session.
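Resuming by "unique prefix" works the way you would expect: a prefix resolves only if it matches exactly one session id. The sketch below is an illustration of that rule, not OpenKB's actual code:

```python
# Illustrative prefix-resolution logic for `openkb chat --resume <prefix>`
# (hypothetical, not OpenKB's real implementation): a prefix is accepted
# only when it matches exactly one stored session id.

def resolve_session(prefix: str, session_ids: list[str]) -> str:
    matches = [s for s in session_ids if s.startswith(prefix)]
    if len(matches) != 1:
        raise ValueError(f"prefix {prefix!r} matches {len(matches)} sessions")
    return matches[0]

ids = ["20260411-1030", "20260412-0915"]
print(resolve_session("20260411", ids))  # -> 20260411-1030
```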

Configuration

Settings are initialized by openkb init, and stored in .openkb/config.yaml:

model: gpt-5.4                   # LLM model (any LiteLLM-supported provider)
language: en                     # Wiki output language
pageindex_threshold: 20          # PDF pages threshold for PageIndex

Model names use provider/model LiteLLM format (OpenAI models can omit the prefix):

| Provider | Model example |
|---|---|
| OpenAI | gpt-5.4 |
| Anthropic | anthropic/claude-sonnet-4-6 |
| Gemini | gemini/gemini-3.1-pro-preview |

PageIndex integration

Long documents are challenging for LLMs due to context limits, context rot, and summarization loss. PageIndex solves this with vectorless, reasoning-based retrieval — building a hierarchical tree index that lets LLMs reason over the index for context-aware retrieval.

PageIndex runs locally by default using the open-source version, with no external dependencies required.

Optional: Cloud Support

For large or complex PDFs, PageIndex Cloud can be used to access additional capabilities, including:

  • OCR support for scanned PDFs (via hosted VLM models)
  • Faster structure generation
  • Scalable indexing for large documents

Set PAGEINDEX_API_KEY in your .env to enable cloud features:

PAGEINDEX_API_KEY=your_pageindex_api_key

AGENTS.md

The wiki/AGENTS.md file defines wiki structure and conventions. It's the LLM's instruction manual for maintaining the wiki. Customize it to change how your wiki is organized.

At runtime, the LLM reads AGENTS.md from disk, so your edits take effect immediately.

Using with Obsidian

OpenKB's wiki is a directory of Markdown files with [[wikilinks]]. Obsidian renders it natively.

  1. Open wiki/ as an Obsidian vault
  2. Browse summaries, concepts, and explorations
  3. Use graph view to see knowledge connections
  4. Use Obsidian Web Clipper to add web articles to raw/

🧭 Learn More

Compared to Karpathy's Approach

| | Karpathy's workflow | OpenKB |
|---|---|---|
| Short documents | LLM reads directly | markitdown → LLM reads |
| Long documents | Context limits, context rot | PageIndex tree index |
| Supported formats | Web clipper → .md | PDF, Word, PPT, Excel, HTML, text, CSV, .md |
| Wiki compilation | LLM agent | LLM agent (same) |
| Q&A | Query over wiki | Wiki + PageIndex retrieval |

Tech Stack

  • PageIndex — Vectorless, reasoning-based document indexing and retrieval
  • markitdown — Universal file-to-markdown conversion
  • OpenAI Agents SDK — Agent framework (supports non-OpenAI models via LiteLLM)
  • LiteLLM — Multi-provider LLM gateway
  • Click — CLI framework
  • watchdog — Filesystem monitoring

Roadmap

  • Extend long document handling to non-PDF formats
  • Scale to large document collections with nested folder support
  • Hierarchical concept (topic) indexing for massive knowledge bases
  • Database-backed storage engine
  • Web UI for browsing and managing wikis

Contributing

Contributions are welcome! Please submit a pull request, or open an issue for bugs or feature requests. For larger changes, consider opening an issue first to discuss the approach.

License

Apache 2.0. See LICENSE.

Support Us

If you find OpenKB useful, give us a star 🌟 — and check out PageIndex too!

