Fully-offline semantic search over your local files — powered by Ollama
Install · Scenarios · Why? · How it works · Benchmarks · Docs site
Find anything on your machine.
Semantic search for code, PDFs, notes, and docs. Fully offline. No cloud. No telemetry. No subscription. Ask in plain English (or any of 100+ languages) and get the right file + line range in under a second.
$ skygrep "where does the auth token get refreshed?"
═══ auth/middleware.py:78-94 score 0.91 · python
async def renew_session(req: Request):
# swap the access cookie when the refresh JWT is still valid
if req.cookies.get("rt") and access_expired(req):
return await refresh_token(claims, key)
[0.5s · path=cosine-cheap · σ-gap=0.082 ≥ τ=0.005 (adaptive) → high-confidence early-exit · ✓ quality=BEST]
Install in 30 s → · How it works → · Benchmarks →
30 / 30 public-OSS recall · ~1 s warm queries · 100 % local · 14 releases shipped
Three ways people use it
🧠 Code by concept
Find code by what it does, not what it's called. The semantic
substrate (bge-m3) bridges your phrasing to the actual identifier
even when the function name uses different words.
$ skygrep "where does session refresh logic live?"
→ auth/middleware.py:78 · renew_session()
No rg hit for "session refresh"; cosine bridges to renew_session() in 0.5 s.
📄 Cross-content
One query across code, PDFs, notes, and docs. Markdown, PDF, Word, plain text — all indexed via the same content-agnostic substrate. Your query searches all of them at once, ranked by semantic relevance.
$ skygrep "the design doc on rate limiter rewrite"
→ docs/rate-limiter-redesign.md · designs/q3-rewrite.pdf
Markdown link graph + PDF text-layer extraction in one cascade.
🌐 Multilingual · private
bge-m3 understands 100+ languages out of the box. Index,
retrieval, ranking, optional answer synthesis — all run locally
via Ollama. Zero network calls.
$ skygrep "我昨天写的 cascade 调度代码"
→ src/storage.py:847 · cascade_search()
Mixed Chinese / English query. Zero network. Audit-friendly.
Why skylakegrep?
Benchmarked head-to-head against four named alternatives, not generic tool categories.
How it works
Local Ollama + SQLite. Zero network calls. Zero subscription.
The same architecture handles every content type — code · PDFs ·
notes · markdown · any file you register an extractor for. The
LLM router classifies intent + scope + primary token on every
query; the cosine cascade uses bge-m3 (multilingual, 1024-d,
symmetric XLM-RoBERTa) with σ-adaptive early-exit. Two proactive
enhancers kick in when the cascade can't answer: filename_extend
extends the search to common home directories; recovery_progress_hint
surfaces live re-embed progress when the index is being rebuilt.
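The flow above can be sketched in a few lines. This is an illustrative reading of the description, not skylakegrep's actual API: the function names, signatures, and the shape of the router/cascade/enhancer interfaces are all assumptions.

```python
# Hypothetical sketch of the query flow: classify intent, try the cosine
# cascade, then fall back to bounded proactive enhancers. All names here
# are illustrative, not skylakegrep's real internals.

def answer(query, classify, cascade_search, enhancers):
    """Return ranked hits for a query, or [] when nothing matches."""
    intent = classify(query)                # LLM router: intent + scope
    hits = cascade_search(query, intent)    # sigma-adaptive cosine cascade
    if hits:
        return hits
    # Cascade could not answer: bounded extra work instead of shrugging
    # (the "Proactive over Passive" principle).
    for enhance in enhancers:               # e.g. filename_extend
        hits = enhance(query, intent)
        if hits:
            return hits
    return []
```

The point of the shape is that enhancers only run on a cascade miss, so the happy path stays on the cheap cosine exit.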
Install
# 1. install (Python 3.9+)
pip install skylakegrep
# 2. pull the local models (~3 GB, one time)
ollama pull bge-m3
ollama pull qwen2.5:1.5b
ollama pull qwen2.5:3b
# 3. (one time) register skygrep with your LLM CLI of choice
skygrep setup # Claude Code · Codex · OpenCode · Gemini CLI · Cursor
# 4. ask anything, anywhere
skygrep "your question here"
That's it. The first query in a fresh project completes in under
a second via a ripgrep fallback while a background process
builds the semantic index. Every query after that uses the full
cascade with the local LLM kept warm in memory.
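The first-query behavior described above (answer fast from a lexical scan while the semantic index builds in the background) is a common pattern. A minimal sketch, with hypothetical function names standing in for the real ripgrep fallback and indexer:

```python
import threading

def first_query(query, lexical_scan, build_index):
    """Illustrative first-query pattern: return cheap lexical hits now,
    build the semantic index concurrently. Names are assumptions, not
    skylakegrep's actual internals."""
    done = threading.Event()

    def _build():
        build_index()       # slow: embed files, write the index
        done.set()          # later queries can wait on this and use the cascade

    threading.Thread(target=_build, daemon=True).start()
    hits = lexical_scan(query)   # ripgrep-style scan: sub-second answer
    return hits, done
```

Subsequent queries would check `done` and route through the full cascade instead.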
Performance
Public-OSS reproducible benchmark across three popular codebases (Django · React · Tokio; 30 hand-labelled questions, 10 each):
Honest reading:
- rg's 100 % is a recall-ceiling baseline: it returns 20 M+ tokens per query (term-OR scan with 2-line context windows). Yes, the answer is in the dump; no, the agent has to read all of it to find it.
- skygrep returns the right file ranked top-10 in 30 / 30 cases while emitting 60× – 770× less context for the agent's LLM round-trip downstream. That's the user-facing number.
- Reproduce: git clone Django + React + Tokio at any commit, run benchmarks/public_oss_bench.py. Numbers within ±5 %.
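The metric the table reports is top-10 recall: a question counts as a hit when its labelled file appears anywhere in the first ten results. A minimal sketch of that computation (the function name and call shape are ours, not the benchmark script's):

```python
def recall_at_k(results_per_query, gold_files, k=10):
    """Fraction of queries whose labelled file appears in the top-k results.

    results_per_query: list of ranked result lists, one per query.
    gold_files: the hand-labelled correct file for each query.
    """
    hits = sum(
        1
        for results, gold in zip(results_per_query, gold_files)
        if gold in results[:k]
    )
    return hits, len(gold_files)
```

On the public-OSS bench this would read 30 / 30 for skygrep's top-10 ranking.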
For the full bench protocol, per-task analysis, and worked
example (one query · 1,395 × token reduction), see
docs/parity-benchmarks.html.
What you can search
The retrieval substrate is content-agnostic by design. The
embedder, the cascade, and the reference graph all abstract over
"A references B" — not over any specific programming language or
file format. New content types plug in via a one-line
register_extractor() call.
from skylakegrep.src.reference_graph import register_extractor
def yaml_anchor_extractor(path):
"""Return list of (source, target) reference edges."""
...
register_extractor("yaml", [".yaml", ".yml"], yaml_anchor_extractor)
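To make the stub above concrete, here is one hypothetical way the extractor body could look: record YAML anchors (`&name`) as targets and emit an edge each time an alias (`*name`) refers back to one. The regexes and the `path:line` edge convention are illustrative assumptions, not skylakegrep's documented behavior.

```python
import re

ANCHOR = re.compile(r"&(\w+)")   # YAML anchor definition, e.g. "&base"
ALIAS = re.compile(r"\*(\w+)")   # YAML alias reference, e.g. "*base"

def yaml_anchor_extractor(path):
    """Return (source, target) edges: each alias site references the
    line where its anchor was defined. Edge format is illustrative."""
    with open(path, encoding="utf-8") as fh:
        text = fh.read()
    anchors = {}   # anchor name -> "path:line" of its definition
    edges = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for name in ANCHOR.findall(line):
            anchors[name] = f"{path}:{lineno}"
        for name in ALIAS.findall(line):
            if name in anchors:
                edges.append((f"{path}:{lineno}", anchors[name]))
    return edges
```

Once registered, the cascade and graph prior treat these edges exactly like code references, since everything downstream only sees "A references B".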
Command cheatsheet
The bare form — skygrep "<your question>" — covers ~95 % of
real-world use. No subcommand, no flags. The system auto-routes
(LLM router → find / rg / semantic cascade), auto-indexes on
first query, and auto-recovers when the embedder is upgraded.
Reading the per-query telemetry footer (0.2.2+)
Every search prints a one-line footer so you can see which retrieval path answered your query and why:
✓ 0.42s · quality=BEST
path : cosine-cheap (high-confidence early-exit)
router : llm → intent=mixed (0.83)
evidence : σ-gap=0.0820 ≥ τ=0.0050 (adaptive)
pool : 1 filename + 0 lexical · cascade
index : 20s ago · 36 files · L2 symbols + graph prior
Field guide:
- path= — cosine-cheap / cosine-escalated-rerank / rg-only / cascade-skipped. The retrieval strategy this specific query took.
- σ-gap=… → reason — Bayesian-evidence proxy that drove the cascade decision. High σ-gap = top-K candidates well separated → cosine trusted, exit cheap. Low σ-gap = candidates tied → escalate to rerank.
- recovery=… (only when the recovery worker is active) — live progress + ETA for the in-progress re-embed.
- quality= — BEST / DEGRADED-recovery — at-a-glance trust indicator.
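One plausible reading of the σ-gap rule in the footer, sketched as code. The exact statistic is internal; here we assume the simplest version, the score gap between the two best cosine candidates compared against the adaptive threshold τ:

```python
def early_exit(scores, tau):
    """Illustrative sigma-gap decision: exit on the cheap cosine path when
    the top two candidate scores are separated by at least tau; otherwise
    escalate to the reranker. An assumption about the internal statistic,
    not skylakegrep's actual formula."""
    top = sorted(scores, reverse=True)
    gap = top[0] - top[1]
    path = "cosine-cheap" if gap >= tau else "cosine-escalated-rerank"
    return path, gap
```

With the footer's example numbers (gap 0.082 against τ = 0.005), this rule takes the cheap exit.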
Configuration
Set via environment variables. Defaults work — tune only when you need to. Grouped into three panels: Ollama setup, Indexing & rerank, Behavior toggles.
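The "defaults work, override only when set" pattern reads like this in code. The variable names below are hypothetical placeholders, not skylakegrep's documented configuration keys:

```python
import os

def env_int(name, default):
    """Read an integer knob from the environment, falling back to a
    working default when the variable is unset."""
    raw = os.environ.get(name)
    return int(raw) if raw is not None else default

# Illustrative knobs only; the real variable names live in the config docs.
OLLAMA_URL = os.environ.get("SKYGREP_OLLAMA_URL", "http://localhost:11434")
RERANK_TOP_K = env_int("SKYGREP_RERANK_TOP_K", 50)
```

This keeps a fresh install zero-config while leaving every panel tunable per-shell or per-CI-job.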
What's new
Recent releases (2026-05-05, newest first):
- 0.2.13 — Privacy-only sweep: removed user-personal references from public release notes / docs / README. No code change.
- 0.2.12 — filename_extend morphology fallback when the LLM is unreachable. Plus the conversational session state plan.
- 0.2.11 — Second built-in proactive enhancer: recovery_progress_hint. Plus ProactiveContext infrastructure for future enhancers.
- 0.2.10 — Critical fix: the per-dir find budget bug that silenced proactive on the user's actual scenarios. End-to-end verified before tagging.
- 0.2.9 ← 0.2.7 — Three iterations on the proactive framework's gate logic, recorded in the Principle 1 receipts table. Each was a Principle-1 lapse the user caught.
- 0.2.6 — LLM-driven scope classification replaces the keyword _METADATA_TOKENS list. Principle 1 ✓ shipped.
- 0.2.0 — bge-m3 substrate · content-agnostic reference graph registry · σ-adaptive cascade · 30 / 30 public-OSS recall (was 28 / 30).
Project principles
Architecture rules every contributor (human or AI agent) should
follow. Recorded in
docs/principles.html. Loaded into Claude
sessions automatically via CLAUDE.md.
- Understanding > Enumeration — substrate (LLM / embedder / registry) over hardcoded lists. Receipts table tracks 5 past lapses.
- Substrate before scaffolding — upgrade the underlying model before layering priors on top.
- Latency / quality / correctness — in that priority order.
- Public surfaces sync at every release — the 8-surface checklist in docs/releasing.html.
- Honest evaluation over hopeful claims — name the bench, show the numbers, don't combine across benches.
- Proactive over Passive — when the cascade can't answer, try bounded extra work in parallel rather than shrug.
Development
git clone https://github.com/danielchen26/skylakegrep.git
cd skylakegrep
python3 -m venv .venv
source .venv/bin/activate
pip install -e .[rerank]
# Verify
.venv/bin/python -m pytest -q tests/ # 201 / 201 should pass
The release protocol is documented in
docs/releasing.html. Every release must
sync 8 public-facing surfaces (PyPI, GitHub Release, README,
GitHub Pages, plan docs, principles, version bump, tag) in a
specific order.
License
PolyForm Noncommercial 1.0.0. Personal · academic · research · hobby use is fully permitted. Commercial use requires a separate license — contact chentianchi@gmail.com.
Acknowledgments
Built on the shoulders of:
- Ollama — local model serving
- bge-m3 — multilingual embedder (BAAI)
- qwen2.5 — local LLM family for routing + answer synthesis
- tree-sitter — symbol-aware chunking
- SQLite — durable index storage
- pypdf · python-docx — binary content extraction
- Pygments — syntax highlighting in the rendered terminal output