TreeSearch: Structure-Aware Document Retrieval


TreeSearch is a structure-aware document retrieval library. No vector embeddings. No chunk splitting. Just SQLite FTS5 keyword matching over document tree structures. It supports Markdown, plain text, code files (Python via AST + regex; Java, Go, JS, C++, and more), HTML, XML, JSON, CSV, PDF, and DOCX.

Millisecond-latency search over tens of thousands of documents and large codebases, with structure preservation.

Installation

pip install -U pytreesearch

Quick Start

from treesearch import TreeSearch

# Just pass directories โ€” auto-discovers all supported files
ts = TreeSearch("project_root/", "docs/")
results = ts.search("How does auth work?")
for doc in results["documents"]:
    for node in doc["nodes"]:
        print(f"[{node['score']:.2f}] {node['title']}")
        print(f"  {node['text'][:200]}")

Directories are walked recursively with smart defaults:

  • Auto-discovers .py, .md, .json, .jsonl, .java, .go, .ts, .pdf, .docx, etc.
  • Skips .git, node_modules, __pycache__, .venv, dist, build, etc.
  • Respects .gitignore when pathspec is installed (pip install pathspec)
  • Safety cap of 10,000 files per directory (configurable via max_files)
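The discovery behavior above can be sketched with just the stdlib. This is an illustrative walk, not TreeSearch's actual implementation: .gitignore handling is omitted, and the SKIP_DIRS/SUPPORTED names are invented here (they mirror the defaults listed above).

```python
import os

# Invented names; the sets mirror the documented defaults.
SKIP_DIRS = {".git", "node_modules", "__pycache__", ".venv", "dist", "build"}
SUPPORTED = {".py", ".md", ".json", ".jsonl", ".java", ".go", ".ts", ".pdf", ".docx"}

def discover(root, max_files=10_000):
    """Recursively collect supported files, pruning skipped dirs in place."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Mutating dirnames in place stops os.walk from descending into them.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in sorted(filenames):
            if os.path.splitext(name)[1] in SUPPORTED and len(found) < max_files:
                found.append(os.path.join(dirpath, name))
    return found
```

Pruning via `dirnames[:] = ...` is what makes skipping cheap: node_modules is never even entered, rather than walked and filtered afterwards.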

You can also mix directories, files, and glob patterns freely:

# All three input types work together
ts = TreeSearch("src/", "docs/*.md", "README.md")
results = ts.search("authentication")

In-Memory Mode

For quick searches, scripts, or ephemeral use cases, set db_path=None to skip writing any .db file to disk:

# In-memory mode โ€” no index.db file, all indexes kept in memory
ts = TreeSearch("docs/", db_path=None)
results = ts.search("voice calls")

Performance is excellent even with thousands of documents (a query over 5,000 docs completes in under 10 ms). The trade-off is that indexes are lost when the process exits. For persistent, incremental indexing, keep the default db_path or set it to a file path.

Why TreeSearch?

Traditional RAG systems split documents into fixed-size chunks and retrieve by vector similarity. This destroys document structure, loses heading hierarchy, and misses reasoning-dependent queries.

TreeSearch takes a fundamentally different approach โ€” parse documents into tree structures based on their natural heading hierarchy, then search with FTS5 keyword matching (zero-cost, no API key needed).
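The "parse headings into a tree" step can be illustrated with a minimal standalone sketch. This is not TreeSearch's actual parser, and the node fields are invented for illustration: each `#`-prefixed line opens a node, deeper headings nest under the nearest shallower one, and body lines attach to the current node.

```python
def parse_tree(markdown):
    """Toy heading-hierarchy parser: returns a nested dict tree."""
    root = {"title": "<root>", "level": 0, "text": [], "children": []}
    stack = [root]  # stack[-1] is the node currently being filled
    for line in markdown.splitlines():
        if line.startswith("#"):
            level = len(line) - len(line.lstrip("#"))
            node = {"title": line.lstrip("# "), "level": level, "text": [], "children": []}
            # Pop back up until we find this heading's parent.
            while stack[-1]["level"] >= level:
                stack.pop()
            stack[-1]["children"].append(node)
            stack.append(node)
        else:
            stack[-1]["text"].append(line)
    return root

doc = "# Auth\nIntro.\n## Tokens\nUse JWTs.\n## Sessions\nCookies.\n"
tree = parse_tree(doc)
print([c["title"] for c in tree["children"][0]["children"]])  # ['Tokens', 'Sessions']
```

Because each section keeps its title and its place in the hierarchy, a search hit can be returned as a whole section with its heading path, which is exactly what chunk splitting throws away.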

|               | Traditional RAG             | TreeSearch                              |
|---------------|-----------------------------|-----------------------------------------|
| Preprocessing | Chunk splitting + embedding | Parse headings → build tree             |
| Retrieval     | Vector similarity search    | FTS5 keyword matching (no LLM needed)   |
| Multi-doc     | Needs vector DB for routing | FTS5 cross-doc scoring                  |
| Structure     | Lost after chunking         | Fully preserved as tree hierarchy       |
| Dependencies  | Vector DB + embedding model | SQLite only (no embedding, no vector DB) |

Key Advantages

  • No vector embeddings โ€” No embedding model to train, deploy, or pay for
  • No chunk splitting โ€” Documents retain their natural heading structure
  • No vector DB โ€” No Pinecone, Milvus, or Chroma to manage
  • Tree-aware retrieval โ€” Heading hierarchy guides search, not arbitrary chunk boundaries
  • SQLite FTS5 engine โ€” Persistent inverted index with WAL mode, incremental updates, CJK support, and SQL aggregation

Features

  • Smart directory discovery โ€” ts.index("src/") recursively discovers all supported files; skips .git/node_modules/__pycache__; respects .gitignore
  • FTS5 search โ€” Zero LLM calls, millisecond-level FTS5 keyword matching, no API key needed
  • SQLite FTS5 engine โ€” Persistent inverted index, WAL mode, incremental updates, MD structure-aware columns (title/summary/body/code/front_matter), column weighting, CJK tokenization
  • Tree-structured indexing โ€” Markdown, plain text, code files (Python AST + regex, Java/Go/JS/C++/PHP), HTML, XML, JSON, CSV, PDF, and DOCX are parsed into hierarchical trees
  • Ripgrep-accelerated GrepFilter โ€” Auto-uses system rg for fast line-level matching with transparent native Python fallback; hit-count-based scoring ranks multi-match nodes higher
  • Parser registry โ€” Extensible ParserRegistry with built-in parsers auto-registered; custom parsers via ParserRegistry.register()
  • Python AST parsing โ€” ast module extracts classes/functions with full signatures (parameters, return types); regex fallback for syntax errors
  • PDF/DOCX/HTML parsers โ€” Optional parsers via PyMuPDF, python-docx, beautifulsoup4 (install with pip install pytreesearch[all])
  • GrepFilter โ€” Exact literal/regex matching for precise symbol and keyword search across tree nodes
  • Source-type routing โ€” Automatic pre-filter selection based on file type (e.g., code files use GrepFilter + FTS5)
  • Chinese + English โ€” Built-in jieba tokenization for Chinese and regex tokenization for English
  • Batch indexing โ€” build_index() supports glob patterns, files, and directories for concurrent multi-file processing
  • Async-first โ€” All core functions are async with sync wrappers available
  • Config-driven defaults โ€” search() and build_index() read defaults from get_config(), overridable per-call
  • CLI included โ€” treesearch "query" path/ for instant search; treesearch index and treesearch search for advanced workflows
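The parser-registry pattern from the feature list can be sketched standalone. This is an illustrative toy, not TreeSearch's real ParserRegistry API: extensions map to parser callables, and dispatch picks the parser by file extension.

```python
class ParserRegistry:
    """Toy registry mapping file extensions to parser callables."""
    _parsers = {}

    @classmethod
    def register(cls, extension, parser):
        cls._parsers[extension] = parser

    @classmethod
    def dispatch(cls, filename):
        ext = "." + filename.rsplit(".", 1)[-1]
        return cls._parsers[ext]

# Built-in parsers would be auto-registered; custom ones plug in the same way.
ParserRegistry.register(".csv", lambda text: [row.split(",") for row in text.splitlines()])
parser = ParserRegistry.dispatch("data.csv")
print(parser("a,b\n1,2"))  # [['a', 'b'], ['1', '2']]
```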

FTS5 Standalone

from treesearch import FTS5Index, Document, load_index

data = load_index("indexes/my_doc.json")
doc = Document(doc_id="doc1", doc_name=data["doc_name"], structure=data["structure"])

fts = FTS5Index(db_path="indexes/fts.db")  # persistent, or omit for in-memory
fts.index_documents([doc])

# Simple keyword search
results = fts.search("authentication config", top_k=5)
for r in results:
    print(f"[{r['fts_score']:.4f}] {r['title']}")

# Advanced FTS5 query syntax
results = fts.search("auth", fts_expression='title:auth AND body:config', top_k=5)

# Per-document aggregation
agg = fts.search_with_aggregation("authentication", group_by_doc=True)
for doc_agg in agg:
    print(f"{doc_agg['doc_name']}: {doc_agg['hit_count']} hits, best={doc_agg['best_score']:.4f}")

CLI

# Default mode: one command does everything (lazy index + search)
treesearch "How does auth work?" src/ docs/
treesearch "configure Redis" project/

# With options
treesearch "auth" src/ --max-nodes 10 --db ./my_index.db

# Advanced: build index separately (for large codebases)
treesearch index --paths src/ docs/ --add-description
treesearch index --paths "docs/*.md" "src/**/*.py" --add-description

# Advanced: search a pre-built index
treesearch search --index_dir ./indexes/ --query "How does auth work?"

How It Works

Input Documents (MD/TXT/Code/JSON/CSV/HTML/XML/PDF/DOCX)
        โ”‚
        โ–ผ
   โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
   โ”‚  Indexer  โ”‚  ParserRegistry dispatch โ†’ parse structure โ†’ build tree โ†’ generate summaries
   โ””โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”˜    (build_index supports glob for batch processing)
        โ”‚  JSON index files
        โ–ผ
   โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
   โ”‚  search   โ”‚  FTS5/Grep pre-filter โ†’ cross-doc scoring โ†’ ranked results
   โ””โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”˜
        โ”‚  dict result
        โ–ผ
  Ranked nodes with scores and text

FTS5 Pre-Scoring: FTS5Index uses SQLite FTS5 inverted index with MD structure-aware columns (title/summary/body/code/front_matter) and column weighting for fast scoring. Instant results, no LLM needed.
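The column-weighting idea can be demonstrated with nothing but the stdlib sqlite3 module. The schema below is illustrative, not TreeSearch's actual one: FTS5's bm25() auxiliary function accepts one weight per column, so a hit in a title column can outrank a hit in a body column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE nodes USING fts5(title, body)")
conn.executemany(
    "INSERT INTO nodes (title, body) VALUES (?, ?)",
    [
        ("Authentication", "Configure OAuth tokens and session cookies."),
        ("Logging", "Authentication failures are written to the audit log."),
        ("Deployment", "Ship the service as a container image."),
        ("Metrics", "Export latency counters to the dashboard."),
        ("Storage", "Rotate database backups nightly."),
    ],
)
# bm25() takes one weight per column (title=5.0, body=1.0) and returns
# lower-is-better scores, so ORDER BY score puts the best match first.
rows = conn.execute(
    "SELECT title, bm25(nodes, 5.0, 1.0) AS score "
    "FROM nodes WHERE nodes MATCH 'authentication' ORDER BY score"
).fetchall()
print([title for title, _ in rows])  # ['Authentication', 'Logging']
```

The title match wins despite both rows containing the query term, purely because of the 5:1 column weighting.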

Source-Type Routing: For code files, GrepFilter + FTS5 are combined automatically for precise symbol matching. The pre-filter is selected based on file type via PREFILTER_ROUTING.
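The ripgrep-with-fallback idea looks roughly like this. It is an illustrative sketch: the function name is invented and this is not TreeSearch's actual GrepFilter code, but the shape (prefer the system `rg` binary, fall back transparently to pure Python) is the documented behavior.

```python
import re
import shutil
import subprocess

def grep_lines(pattern, text):
    """Return 1-based line numbers in `text` matching `pattern`."""
    if shutil.which("rg"):
        # ripgrep searches stdin when input is piped; --line-number
        # prefixes each hit with "N:".
        proc = subprocess.run(
            ["rg", "--line-number", "--no-filename", pattern],
            input=text, capture_output=True, text=True,
        )
        return [int(line.split(":", 1)[0]) for line in proc.stdout.splitlines()]
    # Transparent fallback: same semantics with the stdlib re module.
    rx = re.compile(pattern)
    return [i for i, line in enumerate(text.splitlines(), start=1) if rx.search(line)]

print(grep_lines(r"def ", "import os\n\ndef main():\n    pass\n"))  # [3]
```

Callers never need to know which path ran, which is what makes the acceleration "transparent"; hit counts per node can then feed the ranking step.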

Use Cases

Use Case 1: Technical Documentation QA (Best Scenario)

Problem: Your company has 100+ technical docs (API docs, design docs, RFCs), and traditional search can't find the right answers.

from treesearch import build_index, search

# 1. Build index โ€” just pass directories (run once)
docs = await build_index(
    paths=["docs/", "specs/"],
    output_dir="./indexes"
)

# 2. Search โ€” millisecond response
result = await search(
    query="How to configure Redis cluster?",
    documents=docs,
)

# 3. Results โ€” complete sections, not fragments
for doc in result["documents"]:
    print(f"Doc: {doc['doc_name']}")
    for node in doc["nodes"]:
        print(f"  Section: {node['title']}")
        print(f"  Content: {node['text'][:200]}...")

Why better than traditional RAG?

  • Finds complete sections, not fragments
  • Includes section titles as context anchors
  • Supports hierarchical navigation (parent/child sections)

Use Case 2: Codebase Search

Problem: Want to search for "login-related classes and methods" in a large codebase, but grep only finds lines without structure.

# Index entire directories โ€” auto-discovers .py, .java, .go, etc.
docs = await build_index(
    paths=["src/", "lib/"],
    output_dir="./code_indexes"
)

# Search โ€” auto-detects code files, uses AST parsing + GrepFilter (ripgrep-accelerated)
result = await search(
    query="user login authentication",
    documents=docs,
)

# Results example:
# Doc: auth_service.py
#   class UserAuthenticator
#     def login(username, password)
#     def verify_token(token)

Why better than grep/IDE search?

  • Richer than grep: Matches across identifiers, signatures, and docstrings, so a "login" query also surfaces authentication code indexed under the same nodes
  • Structure-aware: Finds complete classes/methods with their docstrings, not isolated lines
  • Precise location: Points directly to the file and code line numbers

Use Case 3: Long Document QA (Papers/Books)

Problem: You have a 50-page paper and want to ask: "What experimental methods are mentioned in Chapter 3?"

docs = await build_index(paths=["paper.pdf"])

result = await search(
    query="experimental methodology",
    documents=docs,
)

# Automatically finds "3.2 Experimental Design" section content

Why better than Ctrl+F?

  • Section-level matching: Finds the passages that discuss "experimental methods", not just exact-string hits
  • Section location: Tells you which chapter and section contains the answer
  • Scales to multiple docs: Search 10 papers simultaneously

Real Case Comparison

Case: Find "How to request GPU machines" in company docs

Traditional way (Ctrl+F):

Search "GPU" โ†’ Found 47 matches โ†’ Manual review โ†’ 10 minutes

TreeSearch way:

result = await search("How to request GPU machines", docs)
# Directly returns "Resource Guide > GPU Request Process" section
# Time: < 100ms

Efficiency gain: 100x

Comparison with Other Solutions

| Solution        | Pros                                 | Cons                                           | Best For                        |
|-----------------|--------------------------------------|------------------------------------------------|---------------------------------|
| Ctrl+F          | Simple                               | No semantic understanding, fragmented results  | Known keywords                  |
| Traditional RAG | Good semantic understanding          | Chunking destroys context, slow response       | Plain-text QA                   |
| Vector DB       | Similarity search                    | Requires embedding preprocessing, high cost    | Large-scale semantic retrieval  |
| TreeSearch      | Preserves structure, fast, zero cost | Requires structured documents                  | Tech docs / codebases           |

Benchmark

Document Retrieval (QASPER)

Evaluated on the QASPER dataset (50 queries over 18 academic papers):

| Metric         | Embedding (zhipu-embedding-3) | TreeSearch FTS5 |
|----------------|-------------------------------|-----------------|
| MRR            | 0.4235                        | 0.3863          |
| Precision@1    | 0.2553                        | 0.1915          |
| Recall@5       | 0.4259                        | 0.5514          |
| NDCG@3         | 0.3053                        | 0.2836          |
| F1@3           | 0.2196                        | 0.2207          |
| Index Time     | 22.8s                         | 0.1s            |
| Avg Query Time | 199.7ms                       | 0.9ms           |

Key Findings:

  • Embedding MRR +9.6% โ€” Better semantic understanding for natural language queries
  • TreeSearch Recall@5 +29% โ€” Structure preservation helps recall more relevant content
  • TreeSearch 217x faster queries โ€” Sub-millisecond vs hundreds of milliseconds
  • TreeSearch 228x faster indexing โ€” No embedding API calls needed

Code Retrieval (CodeSearchNet)

Evaluated on the CodeSearchNet dataset (50 queries over a 500-function Python corpus):

| Metric         | Embedding (zhipu-embedding-3) | TreeSearch FTS5 |
|----------------|-------------------------------|-----------------|
| MRR            | 0.8483                        | 0.8433          |
| Precision@1    | 0.7800                        | 0.8000          |
| Recall@5       | 0.9400                        | 0.9000          |
| Hit@1          | 0.7800                        | 0.8000          |
| Index Time     | 33.8s                         | 3.5s            |
| Avg Query Time | 179.0ms                       | 2.4ms           |

Key Findings:

  • TreeSearch MRR nearly matches Embedding (0.84 vs 0.85) โ€” BM25 excels on code with high lexical overlap
  • TreeSearch Precision@1 wins (0.80 vs 0.78) โ€” Exact keyword matching is strong for code search
  • TreeSearch 74x faster queries โ€” Milliseconds vs hundreds of milliseconds
  • TreeSearch 10x faster indexing โ€” No embedding API calls needed

Summary

TreeSearch is not meant to replace embedding-based retrieval, but to provide a zero-cost, ultra-fast alternative. For code search where queries and code share vocabulary, TreeSearch performs on par with embeddings. For natural language queries over documents, embeddings have a modest edge in precision while TreeSearch excels in recall.

Run the benchmarks yourself:

# Document retrieval (QASPER)
python examples/benchmark/qasper_benchmark.py --max-samples 50 --max-papers 20 --with-embedding

# Code retrieval (CodeSearchNet)
python examples/benchmark/codesearchnet_benchmark.py --max-samples 50 --max-corpus 500 --with-embedding


Community

  • GitHub Issues โ€” Submit an issue
  • WeChat Group โ€” Add WeChat ID xuming624, note "nlp", to join the tech group

Citation

If you use TreeSearch in your research, please cite:

@software{xu2026treesearch,
  author = {Xu, Ming},
  title = {TreeSearch: Structure-Aware Document Retrieval Without Embeddings},
  year = {2026},
  publisher = {GitHub},
  url = {https://github.com/shibing624/TreeSearch}
}

License

Apache License 2.0

Contributing

Contributions are welcome! Please submit a Pull Request.

Acknowledgements

  • SQLite FTS5 โ€” The full-text search engine powering TreeSearch
  • VectifyAI/PageIndex โ€” Inspiration for structure-aware indexing and retrieval
