# quarry-mcp

Extract searchable knowledge from any document. Expose it to LLMs via MCP.
Quarry ingests PDFs, images, text files, and raw text into a local vector database, then serves semantic search over that content through the Model Context Protocol. Point Claude Code or Claude Desktop at your documents and ask questions.
## Why Quarry?
If your documents are already machine-readable text (TXT, Markdown, DOCX), mcp-local-rag is a solid zero-config option — one npx command and you're searching.
Quarry exists for documents that aren't text yet:
- Scanned PDFs — Board packs, legal filings, archival records. No embedded text, just page images. Quarry classifies each page, routes image pages through AWS Textract OCR, and extracts text from the rest.
- Mixed-format PDFs — Some pages are text, some are scans. Quarry handles both in a single pipeline.
- Images — Photos of whiteboards, receipts, handwritten notes. PNG, JPG, TIFF (multi-page), BMP, WebP.
- Text files — TXT, Markdown, LaTeX, DOCX. No OCR needed, straight to chunking.
- Raw text — Paste content directly via `ingest_text`. Use this from Claude Desktop for uploaded files.
Quarry also preserves full page text alongside chunks, so LLMs can reference surrounding context when a search hit lands mid-page.
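As a sketch of what that pairing buys you (the field names below are illustrative, not Quarry's actual schema), each stored chunk carries both its fragment and the page it came from:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    # Illustrative record: field names are assumptions, not Quarry's real schema.
    doc_id: str
    page: int
    text: str       # the chunk fragment that was embedded
    page_text: str  # full raw text of the page, kept for LLM context

page = "Revenue grew 12% in 2024. Costs were flat. Margin improved."
chunk = Chunk(doc_id="report.pdf", page=3,
              text="Revenue grew 12% in 2024.",
              page_text=page)

# A search hit returns the fragment; the surrounding page is one lookup away.
assert chunk.text in chunk.page_text
```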
## Features
- PDF ingestion with automatic text/image classification per page
- Image ingestion — PNG, JPG, TIFF (multi-page), BMP, WebP via Textract OCR
- Text file ingestion — TXT, Markdown, LaTeX, DOCX
- Raw text ingestion — ingest content directly without a file on disk
- OCR via AWS Textract for scanned and image-based documents
- Text extraction via PyMuPDF for text-based PDF pages
- Sentence-aware chunking with configurable overlap
- Local vector embeddings using snowflake-arctic-embed-m-v1.5 (768-dim)
- LanceDB for fast, local vector storage (no external database)
- Directory registration and incremental sync — register directories, detect new/changed/deleted files via mtime+size, re-index in parallel
- MCP server with 13 tools: `search_documents`, `ingest`, `ingest_text`, `get_documents`, `get_page`, `delete_document`, `delete_collection`, `list_collections`, `register_directory`, `deregister_directory`, `sync_all_registrations`, `list_registrations`, `status`
- CLI for ingestion, search, document management, directory registration, and sync
- Full page text preserved alongside chunks for LLM reference
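The sentence-aware chunking above can be sketched roughly as follows — a simplified illustration, not Quarry's implementation: split on sentence boundaries, pack sentences up to a character budget, and carry a trailing overlap into the next chunk.

```python
import re

def chunk_sentences(text: str, max_chars: int = 1800,
                    overlap_chars: int = 200) -> list[str]:
    """Greedy sentence packing with character overlap (illustrative sketch)."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks: list[str] = []
    current = ""
    for sent in sentences:
        if current and len(current) + 1 + len(sent) > max_chars:
            chunks.append(current)
            # Carry the tail of the finished chunk into the next one.
            current = current[-overlap_chars:].strip() + " " + sent
        else:
            current = (current + " " + sent).strip()
    if current:
        chunks.append(current)
    return chunks

doc = " ".join(f"Sentence number {i} ends here." for i in range(40))
chunks = chunk_sentences(doc, max_chars=120, overlap_chars=30)
# Consecutive chunks share overlapping text at the boundary.
assert chunks[1].startswith(chunks[0][-30:].strip())
```

A real implementation would also handle single sentences longer than the budget; the defaults mirror `CHUNK_MAX_CHARS` and `CHUNK_OVERLAP_CHARS` from the configuration table.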
## Quick Start

```bash
pip install quarry-mcp

# Set up data directory, download embedding model, configure MCP clients
quarry install

# Check everything is working
quarry doctor

# Configure AWS credentials (required for OCR)
export AWS_ACCESS_KEY_ID=your-key
export AWS_SECRET_ACCESS_KEY=your-secret
export AWS_DEFAULT_REGION=us-east-1

# Ingest a PDF
quarry ingest /path/to/document.pdf

# Search
quarry search "revenue growth in 2024"

# List indexed documents
quarry list
```
## Installation

```bash
pip install quarry-mcp
quarry install
```

`quarry install` creates the data directory (`~/.quarry/data/lancedb/`), downloads the embedding model (~500 MB), and configures MCP for Claude Code and Claude Desktop.

Run `quarry doctor` to verify your environment:

```
✓ Python version: 3.13.1
✓ Data directory: /Users/you/.quarry/data/lancedb
✓ AWS credentials: AKIA****YMUH (via shared-credentials-file)
✓ Embedding model: snowflake-arctic-embed-m-v1.5 cached
✓ Core imports: 5 modules OK
```
## AWS Setup

Quarry uses AWS Textract for OCR. Your IAM user needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "textract:DetectDocumentText",
        "textract:StartDocumentTextDetection",
        "textract:GetDocumentTextDetection"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your-bucket/*"
    }
  ]
}
```

Set your S3 bucket:

```bash
export S3_BUCKET=your-bucket-name
```
## Usage

### MCP Server

`quarry install` configures both Claude Code and Claude Desktop automatically. To configure manually:

Claude Code:

```bash
claude mcp add quarry -- uvx --from quarry-mcp quarry mcp
```

Claude Desktop (`~/Library/Application Support/Claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "quarry": {
      "command": "/path/to/uvx",
      "args": ["--from", "quarry-mcp", "quarry", "mcp"]
    }
  }
}
```

Use the absolute path to `uvx` for Desktop (e.g. `/opt/homebrew/bin/uvx`), since Desktop runs with a limited `PATH`. `quarry install` resolves this automatically.
### MCP Tools

| Tool | Description |
|---|---|
| `search_documents` | Semantic search across all indexed documents |
| `ingest` | OCR and index a file (PDF, image, TXT, MD, TEX, DOCX) |
| `ingest_text` | Index raw text content directly (for uploads or pasted text) |
| `get_documents` | List all indexed documents with metadata |
| `get_page` | Retrieve full text for a specific page |
| `delete_document` | Remove a document and all its chunks |
| `delete_collection` | Remove all documents in a collection |
| `list_collections` | List all collections with document and chunk counts |
| `register_directory` | Register a directory for incremental sync |
| `deregister_directory` | Remove a directory registration |
| `sync_all_registrations` | Sync all registered directories (ingest new/changed, remove deleted) |
| `list_registrations` | List all registered directories |
| `status` | Database stats: document/chunk counts, registrations, storage size, model info |
Claude Desktop note: Uploaded files live in a container that Quarry cannot access. For uploaded files, use `ingest_text` with the extracted content. For files on your Mac, provide the local path to `ingest`.
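For orientation, an MCP client invokes these tools via a standard `tools/call` request. The argument names below are hypothetical — check the tool's advertised input schema; only the tool name comes from the table above:

```json
{
  "method": "tools/call",
  "params": {
    "name": "ingest_text",
    "arguments": {
      "text": "Revenue grew 12% in 2024...",
      "filename": "board-pack-q4.txt"
    }
  }
}
```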
### CLI

```bash
# Ingest documents
quarry ingest report.pdf
quarry ingest whiteboard.jpg
quarry ingest notes.md
quarry ingest report.pdf --overwrite

# Search
quarry search "board governance structure"
quarry search "quarterly revenue" -n 5

# Manage documents
quarry list
quarry delete report.pdf
quarry collections
quarry delete-collection math

# Register directories for incremental sync
quarry register /path/to/courses/ml-101 --collection ml-101
quarry register /path/to/courses/stats-200
quarry registrations
quarry sync
quarry sync --workers 8
quarry deregister ml-101

# Environment
quarry doctor
quarry install
```
## Multiple Indices

Run separate MCP server instances with different data directories:

```json
{
  "mcpServers": {
    "legal-docs": {
      "command": "/path/to/uvx",
      "args": ["--from", "quarry-mcp", "quarry", "mcp"],
      "env": { "LANCEDB_PATH": "/data/legal/lancedb" }
    },
    "financial-reports": {
      "command": "/path/to/uvx",
      "args": ["--from", "quarry-mcp", "quarry", "mcp"],
      "env": { "LANCEDB_PATH": "/data/financial/lancedb" }
    }
  }
}
```
## Configuration

All settings are configurable via environment variables:

| Variable | Default | Description |
|---|---|---|
| `AWS_ACCESS_KEY_ID` | | AWS access key |
| `AWS_SECRET_ACCESS_KEY` | | AWS secret key |
| `AWS_DEFAULT_REGION` | `us-east-1` | AWS region |
| `S3_BUCKET` | `ocr-7f3a1b2e4c5d4e8f9a1b3c5d7e9f2a4b` | S3 bucket for Textract uploads |
| `LANCEDB_PATH` | `~/.quarry/data/lancedb` | Path to LanceDB storage |
| `EMBEDDING_MODEL` | `Snowflake/snowflake-arctic-embed-m-v1.5` | HuggingFace embedding model |
| `CHUNK_MAX_CHARS` | `1800` | Target max characters per chunk (~450 tokens) |
| `CHUNK_OVERLAP_CHARS` | `200` | Character overlap between consecutive chunks |
| `TEXTRACT_POLL_INITIAL` | `5.0` | Initial seconds between Textract status checks |
| `TEXTRACT_POLL_MAX` | `30.0` | Maximum polling interval (exponential backoff, 1.5x) |
| `TEXTRACT_MAX_WAIT` | `900` | Maximum seconds to wait for a Textract job |
| `REGISTRY_PATH` | `~/.quarry/data/registry.db` | Path to the directory-registration SQLite database |
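To illustrate how the three Textract polling knobs interact (a sketch of the stated 1.5x backoff, not Quarry's actual code), the wait schedule between status checks looks like this:

```python
def poll_schedule(initial: float = 5.0, maximum: float = 30.0,
                  max_wait: float = 900.0, factor: float = 1.5) -> list[float]:
    """Sleep intervals for job polling: grow by `factor`, cap at `maximum`,
    stop once cumulative waiting would exceed `max_wait`. Illustrative sketch."""
    delays: list[float] = []
    delay, elapsed = initial, 0.0
    while elapsed + delay <= max_wait:
        delays.append(delay)
        elapsed += delay
        delay = min(delay * factor, maximum)
    return delays

schedule = poll_schedule()
# Intervals grow 5.0 → 7.5 → 11.25 → 16.875 → 25.3125, then cap at 30.0
# until the 900-second budget is spent.
assert schedule[:4] == [5.0, 7.5, 11.25, 16.875]
```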
## Architecture

```
Input
 │
 ├─ PDF ─────────┬─ Text pages ──→ PyMuPDF extraction
 │               └─ Image pages ─→ S3 → Textract async OCR → S3 cleanup
 │
 ├─ Images ──────→ Textract sync OCR (BMP/WebP converted to PNG)
 │                 TIFF multi-page → S3 → Textract async OCR
 │
 ├─ Text files ──→ Direct text extraction (TXT, MD, TEX, DOCX)
 │
 └─ Raw text ────→ ingest_text (from uploads, clipboard, etc.)
                 │
     Sentence-aware chunking (with overlap)
                 │
     snowflake-arctic-embed-m-v1.5
                 │
     LanceDB (local vector store)
                 │
        ┌────────┴────────┐
        │                 │
   MCP Server            CLI
(stdio transport)   (typer + rich)
```

### Incremental Sync

```
Directory Registry (SQLite, WAL mode)
 │
 ├─ register ──→ track directory + collection mapping
 ├─ sync ──────→ walk directory, compare mtime+size
 │               ├─ new/changed → ThreadPoolExecutor → ingest pipeline
 │               ├─ unchanged   → skip
 │               └─ deleted     → remove from LanceDB + registry
 └─ deregister → remove tracking + optionally clean data
```
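The mtime+size comparison driving sync can be sketched like this — a simplified illustration, not Quarry's registry code:

```python
from pathlib import Path

def scan(directory: str) -> dict[str, tuple[float, int]]:
    """Map each file path to (mtime, size) -- the change signature sync compares."""
    signatures: dict[str, tuple[float, int]] = {}
    for path in Path(directory).rglob("*"):
        if path.is_file():
            st = path.stat()
            signatures[str(path)] = (st.st_mtime, st.st_size)
    return signatures

def diff(old: dict[str, tuple[float, int]],
         new: dict[str, tuple[float, int]]) -> tuple[set[str], set[str], set[str]]:
    """Classify files as added, changed, or deleted since the last sync."""
    added = set(new.keys() - old.keys())
    deleted = set(old.keys() - new.keys())
    changed = {p for p in new.keys() & old.keys() if new[p] != old[p]}
    return added, changed, deleted
```

A sync pass would then feed `added` and `changed` into the ingest pipeline (in parallel, per the `--workers` flag) and drop `deleted` paths from the store.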
Each chunk stores both its text fragment and the full page raw text, so LLMs can reference surrounding context when a search result is relevant.
## Development

```bash
# Run all quality gates
uv run ruff check .
uv run ruff format --check .
uv run mypy src/quarry tests
uv run pytest
```

The project enforces strict mypy and a comprehensive ruff rule set, and requires all tests to pass before every commit.
## License