Local knowledge base for documents and code. Search, ask questions, or chat — standalone or as an AI agent backend via MCP. Fully offline with Ollama.

lilbee

Beta — feedback and bug reports welcome. Open an issue.


Interactively or programmatically chat with a database of documents using strictly your own hardware, completely offline. Augment any AI agent via MCP or shell — take a free model or even a frontier model and make it better. Reads a wide range of data formats (see supported formats). Integrate document search into your favorite GUI using the built-in REST API — no need for a separate web app when you already have a preferred GUI (see Obsidian plugin).



Why lilbee

  • Your hardware, your data — chat with your documents completely offline. No cloud, no telemetry, no API keys required
  • Make any model better — augment any AI agent via MCP or shell with hybrid RAG search. Take a free model or even a frontier model and make it leagues better at your data
  • Talks to everything — PDFs, Office docs, spreadsheets, images (OCR), ebooks, and 150+ code languages via tree-sitter
  • Bring your own GUI — built-in REST API means you can integrate document search into whatever tool you already use. No extra app needed (see Obsidian plugin)
  • Per-project databases — lilbee init creates a .lilbee/ directory (like .git/) so each project gets its own isolated index

Add files (lilbee add), then search or ask questions. Once indexed, search works without Ollama — agents use their own LLM to reason over the retrieved chunks.
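The same workflow can be driven from a script. A minimal sketch, assuming only the subcommand names mentioned in this README (`init`, `add`, `search`, `ask`); exact flags and output format may differ — run `lilbee --help` for the real interface:

```python
import subprocess

def run_cli(cmd, *args):
    """Run a CLI command and return its stdout as text."""
    return subprocess.run(
        [cmd, *args], capture_output=True, text=True, check=True
    ).stdout

# Hypothetical end-to-end run (uncomment with lilbee installed):
# run_cli("lilbee", "init")                  # per-project .lilbee/ index
# run_cli("lilbee", "add", "docs/")          # index a directory
# print(run_cli("lilbee", "search", "energy management"))  # works without Ollama once indexed
# print(run_cli("lilbee", "ask", "Who wrote the manual?")) # needs a chat model
```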

Demos


AI agent — lilbee search vs web search (detailed analysis)

opencode + minimax-m2.5-free, single prompt, no follow-ups. The Godot 4.4 XML class reference (917 files) is indexed in lilbee. The baseline uses Exa AI code search instead.

⚠️ Caution: minimax-m2.5-free is a cloud model — retrieved chunks are sent to an external API. Use a local model if your documents are private.

|  | API hallucinations | Lines |
| --- | --- | --- |
| With lilbee (code · config) | 0 | 261 |
| Without lilbee (code · config) | 4 (~22% error rate) | 213 |
With lilbee — all Godot API calls match the class reference

With lilbee MCP

Without lilbee — 4 hallucinated APIs (details)

Without lilbee

If you spot issues with these benchmarks, please open an issue.

Vision OCR

Scanned PDF → searchable knowledge base

A scanned 1998 Star Wars: X-Wing Collector's Edition manual indexed with vision OCR (LightOnOCR-2), then queried in lilbee's interactive chat (qwen3-coder:30b, fully local). Three questions about dev team credits, energy management, and starfighter speeds — all answered from the OCR'd content.

Vision OCR demo

See benchmarks, test documents, and sample output for model comparisons.

One-shot question from OCR'd content

The scanned Star Wars: X-Wing Collector's Edition guide, queried with a single lilbee ask command — no interactive chat needed.

Top speed question

Standalone

Interactive local offline chat

> [!NOTE]
> Entirely local on a 2021 M1 Pro with 32 GB RAM.

Model switching via tab completion, then a Q&A grounded in an indexed PDF.

Interactive local offline chat

Code index and search

Code search

Add a codebase and search with natural language. Tree-sitter provides AST-aware chunking.

JSON output

JSON output

Structured JSON output for agents and scripts.

Hardware requirements

When used standalone, lilbee runs entirely on your machine — chat with your documents privately, no cloud required.

| Resource | Minimum | Recommended |
| --- | --- | --- |
| RAM | 8 GB | 16–32 GB |
| GPU / Accelerator | None (CPU fallback) | Apple Metal (M-series) or NVIDIA GPU (6+ GB VRAM) |
| Disk | 2 GB (models + data) | 10+ GB if using multiple models |
| CPU | Any modern x86_64 / ARM64 | |

Ollama handles inference and uses Metal on macOS or CUDA on Linux/Windows. Without a GPU, models fall back to CPU — usable for embedding but slow for chat.

Install

Prerequisites

  • Python 3.11+
  • Ollama — the embedding model (nomic-embed-text) is auto-pulled on first sync. If no chat model is installed, lilbee prompts you to pick and download one.
  • Optional (for scanned PDF/image OCR): Tesseract (brew install tesseract / apt install tesseract-ocr) or an Ollama vision model (recommended for better quality — see vision OCR)

First-time download: If you're new to Ollama, expect the first run to take a while — models are large files that need to be downloaded once. For example, qwen3:8b is ~5 GB and the embedding model nomic-embed-text is ~274 MB. After the initial download, models are cached locally and load in seconds. You can check what you have installed with ollama list.

Install

```shell
pip install lilbee        # or: uv tool install lilbee
```

Development (run from source)

```shell
git clone https://github.com/tobocop2/lilbee && cd lilbee
uv sync
uv run lilbee
```

Quick start

See the usage guide.

Agent integration

lilbee can serve as a local retrieval backend for AI coding agents via MCP or JSON CLI. See docs/agent-integration.md for setup and usage.

HTTP Server

lilbee includes a REST API server so you can integrate document search into any GUI or tool:

```shell
lilbee serve                          # start on a random port (written to <data_dir>/server.port)
lilbee serve --port 8080              # or pick a fixed port
```

Endpoints include /api/search, /api/ask, /api/chat (with streaming SSE variants), /api/sync, /api/add, and /api/models. When the server is running, interactive API docs are available at /schema/redoc. See the API reference for the full OpenAPI schema.
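For example, a GUI plugin can hit /api/search with a small JSON POST. A hedged sketch using only the standard library — the request body fields (`query`, `limit`) are assumptions; check the live docs at /schema/redoc on a running server for the actual schema:

```python
import json
import urllib.request

def build_search_request(base_url, query, limit=5):
    """Build a POST request for lilbee's /api/search endpoint.

    The body fields here are assumptions; consult /schema/redoc on a
    running server for the real schema.
    """
    body = json.dumps({"query": query, "limit": limit}).encode()
    return urllib.request.Request(
        f"{base_url}/api/search",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_search_request("http://127.0.0.1:8080", "energy management")
# With the server running:
# with urllib.request.urlopen(req) as resp:
#     results = json.load(resp)
```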

Interactive chat

Running lilbee or lilbee chat enters an interactive REPL with conversation history, streaming responses, and slash commands:

| Command | Description |
| --- | --- |
| `/status` | Show indexed documents and config |
| `/add [path]` | Add a file or directory (tab-completes paths) |
| `/model [name]` | Switch chat model — no args opens a curated picker; with a name, switches directly or prompts to download if not installed (tab-completes installed models) |
| `/vision [name\|off]` | Switch vision OCR model — no args opens a curated picker; with a name, prompts to download if not installed; off disables (tab-completes catalog models) |
| `/settings` | Show all current configuration values |
| `/set <key> <value>` | Change a setting (e.g. `/set temperature 0.7`) |
| `/version` | Show lilbee version |
| `/reset` | Delete all documents and data (asks for confirmation) |
| `/help` | Show available commands |
| `/quit` | Exit chat |

Slash commands and paths tab-complete. A spinner shows while waiting for the first token from the LLM. Background sync progress appears in the toolbar without interrupting the conversation.

Supported formats

Text extraction powered by Kreuzberg, code chunking by tree-sitter. Structured formats (XML, JSON, CSV) get embedding-friendly preprocessing. This list is not exhaustive — Kreuzberg supports additional formats beyond what's listed here.

| Format | Extensions | Requires |
| --- | --- | --- |
| PDF | .pdf | |
| Scanned PDF | .pdf (no extractable text) | Tesseract (auto, plain text) or Ollama vision model (recommended — preserves tables, headings, and layout as markdown) |
| Office | .docx, .xlsx, .pptx | |
| eBook | .epub | |
| Images (OCR) | .png, .jpg, .jpeg, .tiff, .bmp, .webp | Tesseract |
| Data | .csv, .tsv | |
| Structured | .xml, .json, .jsonl, .yaml, .yml | |
| Text | .md, .txt, .html, .rst | |
| Code | .py, .js, .ts, .go, .rs, .java and 150+ more via tree-sitter (AST-aware chunking) | |

See the usage guide for OCR setup and model benchmarks.

License

MIT

Download files

Download the file for your platform.

Source Distribution

lilbee-0.5.3.tar.gz (14.5 MB)


Built Distribution


lilbee-0.5.3-py3-none-any.whl (68.3 kB)


File details

Details for the file lilbee-0.5.3.tar.gz.

File metadata

  • Download URL: lilbee-0.5.3.tar.gz
  • Upload date:
  • Size: 14.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for lilbee-0.5.3.tar.gz
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | cd09135305d883f440c651a33724d3a6008034a88e849b17fb2c4f9fe17ae743 |
| MD5 | 73f0f10d5699ff1414ad6ae65305222d |
| BLAKE2b-256 | 7dcd4e9491613ad8c1ba96daefb67692fc6086b38483495917f7d81aacc6e3d3 |


Provenance

The following attestation bundles were made for lilbee-0.5.3.tar.gz:

Publisher: publish.yml on tobocop2/lilbee

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file lilbee-0.5.3-py3-none-any.whl.

File metadata

  • Download URL: lilbee-0.5.3-py3-none-any.whl
  • Upload date:
  • Size: 68.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for lilbee-0.5.3-py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 917ae8d88f41dd7bbefe239650c2342dbc17e48a0bb38a4df623dcf4e395e7d8 |
| MD5 | 05a5b525e1b4b9025a91427084870075 |
| BLAKE2b-256 | 3196b3c48e785e321ce9bf5fcce35750346b96c40469166aa3c90218aa00b726 |


Provenance

The following attestation bundles were made for lilbee-0.5.3-py3-none-any.whl:

Publisher: publish.yml on tobocop2/lilbee

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
