
Haiku RAG

Agentic RAG built on LanceDB, Pydantic AI, and Docling.

Features

  • Hybrid search — Vector + full-text with Reciprocal Rank Fusion
  • Reranking — MxBAI, Cohere, Zero Entropy, or vLLM
  • Question answering — QA agents with citations (page numbers, section headings)
  • Research agents — Multi-agent workflows via pydantic-graph: plan, search, evaluate, synthesize
  • Document structure — Stores full DoclingDocument, enabling structure-aware context expansion
  • Visual grounding — View chunks highlighted on original page images
  • Time travel — Query the database at any historical point with --before
  • Multiple providers — Embeddings: Ollama, OpenAI, VoyageAI, LM Studio, vLLM. QA/Research: any model supported by Pydantic AI
  • Local-first — Embedded LanceDB, no servers required. Also supports S3, GCS, Azure, and LanceDB Cloud
  • MCP server — Expose as tools for AI assistants (Claude Desktop, etc.)
  • File monitoring — Watch directories and auto-index on changes
  • Inspector — TUI for browsing documents, chunks, and search results
  • CLI & Python API — Full functionality from command line or code
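Hybrid search merges the vector and full-text result lists with Reciprocal Rank Fusion. As a rough illustration of the idea (not haiku.rag's internals), RRF scores each document by summing 1/(k + rank) over every ranked list it appears in, so documents ranked well by both retrievers rise to the top:

```python
def rrf_merge(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    """Merge ranked lists of document IDs with Reciprocal Rank Fusion.

    Each document scores sum(1 / (k + rank)) over the lists it appears in;
    k=60 is the constant suggested in the original RRF paper.
    """
    scores: dict[str, float] = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=lambda d: scores[d], reverse=True)


# "doc_a" is ranked first by both retrievers, so it wins the fused ranking.
vector_hits = ["doc_a", "doc_b", "doc_c"]
fulltext_hits = ["doc_a", "doc_d", "doc_b"]
print(rrf_merge([vector_hits, fulltext_hits]))
```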

Installation

Requires Python 3.12 or newer.

Full Package (Recommended)

uv pip install haiku.rag

Includes all features: document processing, all embedding providers, and rerankers.

Slim Package (Minimal Dependencies)

uv pip install haiku.rag-slim

Install only the extras you need. See the Installation documentation for available options.

Quick Start

# Index a PDF
haiku-rag add-src paper.pdf

# Search
haiku-rag search "attention mechanism"

# Ask questions with citations
haiku-rag ask "What datasets were used for evaluation?" --cite

# Deep QA — decomposes complex questions into sub-queries
haiku-rag ask "How does the proposed method compare to the baseline on MMLU?" --deep

# Research mode — iterative planning and search
haiku-rag research "What are the limitations of the approach?" --verbose

# Interactive research — human-in-the-loop with decision points
haiku-rag research "Compare the approaches discussed" --interactive

# Watch a directory for changes
haiku-rag serve --monitor

See Configuration for customization options.

Python API

import asyncio

from haiku.rag.client import HaikuRAG


async def main():
    async with HaikuRAG("research.lancedb", create=True) as rag:
        # Index documents
        await rag.create_document_from_source("paper.pdf")
        await rag.create_document_from_source("https://arxiv.org/pdf/1706.03762")

        # Search — returns chunks with provenance
        results = await rag.search("self-attention")
        for result in results:
            print(f"{result.score:.2f} | p.{result.page_numbers} | {result.content[:100]}")

        # QA with citations
        answer, citations = await rag.ask("What is the complexity of self-attention?")
        print(answer)
        for cite in citations:
            print(f"  [{cite.chunk_id}] p.{cite.page_numbers}: {cite.content[:80]}")


asyncio.run(main())

For research agents and streaming with AG-UI, see the Agents docs.
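To give a feel for rendering answers with their sources, here is a self-contained sketch. The `Citation` dataclass below is a stand-in carrying only the fields used in the snippet above (`chunk_id`, `page_numbers`, `content`), not haiku.rag's actual type, and `format_citations` is a hypothetical helper, not part of the library:

```python
from dataclasses import dataclass


@dataclass
class Citation:
    # Stand-in with the fields shown in the API example; not haiku.rag's own type.
    chunk_id: str
    page_numbers: list[int]
    content: str


def format_citations(answer: str, citations: list[Citation]) -> str:
    """Render an answer followed by numbered source footnotes."""
    lines = [answer, ""]
    for i, cite in enumerate(citations, start=1):
        pages = ", ".join(str(p) for p in cite.page_numbers)
        lines.append(f"[{i}] p.{pages}: {cite.content[:80]}")
    return "\n".join(lines)


report = format_citations(
    "Self-attention is quadratic in sequence length.",
    [Citation("chunk-42", [3], "...the complexity of self-attention is O(n^2)...")],
)
print(report)
```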

MCP Server

Use with AI assistants like Claude Desktop:

haiku-rag serve --mcp --stdio

Add to your Claude Desktop configuration:

{
  "mcpServers": {
    "haiku-rag": {
      "command": "haiku-rag",
      "args": ["serve", "--mcp", "--stdio"]
    }
  }
}

Provides tools for document management, search, QA, and research directly in your AI assistant.

Examples

See the examples directory for working examples:

  • Interactive Research Assistant - Full-stack research assistant with Pydantic AI and AG-UI featuring human-in-the-loop approval and real-time state synchronization
  • Docker Setup - Complete Docker deployment with file monitoring and MCP server
  • A2A Server - Self-contained A2A protocol server package with conversational agent interface

Documentation

Full documentation at: https://ggozad.github.io/haiku.rag/

mcp-name: io.github.ggozad/haiku-rag
