
Local document memory with instant semantic search. Drop any file. Ask anything. Get an answer in under a second.


vstash


Local hybrid retrieval engine that beats ColBERTv2 on BEIR SciFact with BGE-small.

Single SQLite file. Zero cloud dependencies. Sub-25ms at 10K chunks.

pip install vstash
vstash add paper.pdf notes.md https://example.com/article
vstash search "what's the main argument about X?"

Retrieval Quality

Evaluated on the BEIR benchmark — the standard for comparing retrieval systems:

| Dataset | vstash (NDCG@10) | ColBERTv2 | BM25 | Dense-only |
| --- | --- | --- | --- | --- |
| SciFact (5K docs) | 0.726 | 0.693 (+4.8%) | 0.665 (+9.2%) | 0.653 (+11.2%) |
| NFCorpus (3.6K docs) | 0.359 | 0.344 (+4.4%) | 0.325 (+10.5%) | 0.338 (+6.2%) |
| SciDocs (25K docs) | 0.194 | 0.154 (+26.2%) | 0.158 (+23.0%) | 0.163 (+19.2%) |
| FiQA (57K docs) | 0.392 | 0.356 (+10.0%) | 0.236 (+65.8%) | 0.402 (−2.5%) |
| ArguAna (8.7K docs) | 0.437 | 0.463 (−5.6%) | 0.315 (+38.7%) | 0.584 (−25.2%) |

Baseline columns show each system's NDCG@10, with vstash's relative difference against that baseline in parentheses.

Same embedding model (BGE-small 384d) across all comparisons. Adaptive RRF improves all 5 datasets vs fixed weights. Results reproducible via python -m experiments.beir_benchmark.


Why vstash?

| Layer | Technology | Why |
| --- | --- | --- |
| Embeddings | FastEmbed (ONNX Runtime) | ~700 chunks/s, fully local, no server |
| Vector store | sqlite-vec | Single .db file, cosine similarity, zero deps |
| Keyword search | FTS5 (SQLite) | Exact matches, built into SQLite |
| Hybrid ranking | Reciprocal Rank Fusion | Semantic + keyword fusion — beats both alone (sketched below) |
| Scoring | Frequency + temporal decay | Results improve with usage, adaptive maturity gate |
| Dedup | Intra-document MMR | Diverse sections from long docs, not redundant chunks |
| Inference | Local auto-detect / Cloud | Ollama, LM Studio, Cerebras, OpenAI — all optional |

Zero cloud required for search. Inference is optional.
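
As a rough illustration of the fusion layer, here is a minimal Reciprocal Rank Fusion sketch in Python. The constant k = 60, the weights, and the example rankings are placeholders for illustration, not vstash internals:

def rrf_fuse(vector_ranking, keyword_ranking, k=60, w_vec=1.0, w_kw=1.0):
    """Fuse two ranked lists of chunk ids with Reciprocal Rank Fusion.

    Each list contributes weight / (k + rank) per chunk, so chunks that
    rank well in either list (or both) rise to the top of the fused order.
    """
    scores = {}
    for weight, ranking in ((w_vec, vector_ranking), (w_kw, keyword_ranking)):
        for rank, chunk_id in enumerate(ranking, start=1):
            scores[chunk_id] = scores.get(chunk_id, 0.0) + weight / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "c3" sits near the top of both rankings and wins the fused ordering.
print(rrf_fuse(["c1", "c3", "c2"], ["c3", "c2", "c4"]))  # ['c3', 'c2', 'c1', 'c4']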

What's new in v0.17

  • Dynamic chunk_size — Memory(chunk_size=2048) or vstash add --chunk-size 2048. Per-document override without modifying config. Validation: overlap < chunk_size.
  • Adaptive RRF — IDF-based weight adjustment per query. Rare terms boost keyword search, common terms boost vector search. Long queries relax distance cutoff. Improves all 5 BEIR datasets (a rough sketch follows this list).
  • 615 tests across 28 test modules (+ 6 benchmark regression tests).
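
The adaptive part can be pictured as choosing the two fusion weights per query from term rarity, building on the rrf_fuse helper sketched above. The IDF thresholds, weights, and corpus statistics below are invented for the example and are not vstash's actual values:

import math

def adaptive_weights(query_terms, doc_freq, n_docs):
    """Tilt RRF weights by query-term rarity (IDF). Illustrative only."""
    idfs = [math.log((n_docs + 1) / (doc_freq.get(t, 0) + 1)) for t in query_terms]
    mean_idf = sum(idfs) / max(len(idfs), 1)
    if mean_idf > 4.0:            # mostly rare, specific terms
        return 0.8, 1.2           # (w_vec, w_kw): lean on exact FTS5 keyword matches
    if mean_idf < 2.0:            # mostly common terms
        return 1.2, 0.8           # lean on the dense vector signal
    return 1.0, 1.0

# Rare terms such as "pkce" push the fusion toward keyword search.
w_vec, w_kw = adaptive_weights(["pkce", "oauth"], {"pkce": 3, "oauth": 40}, n_docs=10_000)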

What's new in v0.16

  • Local-first LLM auto-detect — New default backend "local" probes for Ollama, LM Studio, or any OpenAI-compatible server. Zero config needed — just start a local server and vstash ask works (the probing idea is sketched after this list).
  • Search --explain — Diagnostic flag showing why each chunk ranked where it did: vector distance, FTS rank, RRF breakdown, frequency/decay scoring, and MMR penalty.
  • 612 tests across 27 test modules, all passing on Python 3.10–3.12.
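
The auto-detection can be thought of as probing the well-known local ports for an OpenAI-compatible server and using the first one that answers. The ports below come from this page (Ollama 11434, LM Studio 1234/8080); the /v1/models probe, the timeout, and the function name are assumptions for illustration:

import urllib.request

CANDIDATES = [
    ("ollama", "http://localhost:11434"),
    ("lm-studio", "http://localhost:1234"),
    ("openai-compatible", "http://localhost:8080"),
]

def detect_local_backend():
    """Return the first local OpenAI-compatible server that responds."""
    for name, base_url in CANDIDATES:
        try:
            with urllib.request.urlopen(f"{base_url}/v1/models", timeout=0.5):
                return name, base_url
        except OSError:
            continue
    return None, None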

What's new in v0.15

  • Unified DB resolution — CLI, MCP server, SDK, and reindex all share the same 6-level database resolution chain. Fixes bugs where different entry points could silently operate on different databases.
  • Federated context expansion — --all-profiles now expands adjacent chunks per-store before merging, matching single-profile answer quality.
  • 592 tests across 27 test modules, all passing on Python 3.10–3.12.

What's new in v0.14

  • Document reconstruction — get_document_chunks(path) retrieves all chunks for a document in order. Available in Python SDK and as MCP tool.

What's new in v0.13

  • Direct chunk retrieval — get_chunk(id) and get_chunks(ids) for O(1) access to specific chunks by database ID. Enables downstream apps (spaced repetition, pinned references) to retrieve knowledge atoms without re-running search.

What's new in v0.12

  • Cross-session journal — vstash journal save/recall/log/prune for lightweight agent memory across sessions. Append-only entries with semantic recall, project tags, and time-window filtering.
  • Transcript parsing — automatically extract structured journal entries from conversation logs.

What's new in v0.11

  • Multi-profile support — isolated databases per profile with vstash profile create/list/delete/active.
  • Federated search — query across all profiles simultaneously with cross-profile deduplication.
  • Profile resolution chain — --profile flag → VSTASH_PROFILE env → default (example below).
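
A typical multi-profile session, using only the subcommands and resolution mechanisms listed above (the exact placement of the --profile flag is an assumption):

vstash profile create research          # isolated database for this profile
vstash profile list
vstash --profile research add paper.pdf
VSTASH_PROFILE=research vstash search "ablation results"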

What's new in v0.10

  • Hybrid code splitting — 3-tier backend: tree-sitter AST → parso AST → regex fallback. Each backend gracefully degrades to the next (sketched after this list).
  • 25+ languages — tree-sitter support for C, C++, Ruby, PHP, Swift, Kotlin, Scala, Lua, R, C#, Bash, Zig, Elixir, Erlang, Haskell, OCaml, Dart, Vue, Svelte (plus all previously supported).
  • Optional install — pip install vstash[treesitter] for tree-sitter, or use parso (Python) + regex (6 languages) by default.
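
A minimal sketch of the graceful-degradation chain; the splitter functions are stubbed placeholders, not the vstash API:

def tree_sitter_split(source: str, language: str) -> list[str]:
    raise RuntimeError("tree-sitter unavailable")   # e.g. vstash[treesitter] not installed

def parso_split(source: str) -> list[str]:
    raise RuntimeError("parso failed to parse")

def regex_split(source: str, language: str) -> list[str]:
    return [source]                                 # crude last resort: whole file as one chunk

def split_code(source: str, language: str) -> list[str]:
    """Each tier falls through to the next when it cannot handle the input."""
    try:
        return tree_sitter_split(source, language)  # tier 1: full AST, 25+ languages
    except Exception:
        pass
    if language == "python":
        try:
            return parso_split(source)              # tier 2: Python-only AST
        except Exception:
            pass
    return regex_split(source, language)            # tier 3: regex heuristics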

What's new in v0.9

  • Auto-generated titles — vstash remember generates descriptive slugs when no --title is provided.
  • Forget remembered text — vstash forget "text://<title>" removes text ingested via remember.

What's new in v0.8

  • Multilingual embeddings — search in any language. Cross-lingual similarity improves ~40%.
  • vstash reindex — switch embedding models without re-ingesting.
  • Intra-document MMR dedup — replaces hard per-document dedup. Semantically diverse sections from the same long document now surface in results (a generic MMR sketch follows).
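
For reference, a generic Maximal Marginal Relevance selection loop looks roughly like this; the lambda value and the similarity/relevance inputs are placeholders, not vstash's tuned settings:

def mmr_select(candidates, relevance, similarity, k=5, lam=0.7):
    """Pick up to k chunks that are relevant to the query yet dissimilar to
    each other, so one long document contributes diverse sections."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def mmr_score(c):
            redundancy = max((similarity(c, s) for s in selected), default=0.0)
            return lam * relevance[c] - (1 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected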

Earlier versions

  • v0.7 — Adaptive scoring maturity gate (γ), zero-cost cold start (an illustrative scoring sketch follows this list).
  • v0.6 — Distance-based relevance signal (F1=0.952), document dedup, context expansion (±1 chunk).
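
The frequency + decay layer can be pictured as a multiplicative boost on top of the retrieval score. The sketch below assumes a simple exponential decay and log-scaled hit count; the actual formula, including the maturity gate γ, is in the Memory Scoring guide:

import math, time

def usage_boost(base_score, hit_count, last_hit_ts, half_life_days=30.0, now=None):
    """Illustrative frequency + temporal-decay re-ranking (not the real formula)."""
    now = now or time.time()
    age_days = (now - last_hit_ts) / 86_400
    decay = 0.5 ** (age_days / half_life_days)   # recent hits count more
    frequency = math.log1p(hit_count)            # diminishing returns on repeated hits
    return base_score * (1.0 + frequency * decay)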

Install

pip install vstash

Or from source:

git clone https://github.com/stffns/vstash
cd vstash
pip install -e .

Quick Start

Search (free, no API key needed)

Semantic search works 100% locally — no inference backend required:

vstash add report.pdf
vstash add ~/docs/notes.md
vstash add https://arxiv.org/abs/2310.06825
vstash search "what is the proposed method?"

Ask (requires an LLM backend)

To get natural language answers, start any local LLM server — vstash auto-detects it:

# Option A: Ollama (auto-detected on port 11434)
ollama pull qwen3.5:9b

# Option B: LM Studio (auto-detected on port 1234 or 8080)
# Just load a model in the GUI

# Option C: Cloud backends (set in vstash.toml)
# inference.backend = "cerebras" + inference.model = "llama3.1-8b" + CEREBRAS_API_KEY env
# inference.backend = "openai"   + OPENAI_API_KEY env

Then:

vstash ask "summarize the key findings"
vstash chat   # interactive Q&A session

Python SDK

Use vstash as a building block in your own agents and pipelines:

from vstash import Memory

mem = Memory(project="my_agent")
mem.add("docs/spec.pdf")
mem.remember("OAuth uses PKCE for public clients", title="Auth Decision")

# Semantic search — free, no LLM
chunks = mem.search("deployment strategy", top_k=5)
for c in chunks:
    print(c.text, c.score, c.chunk_id)

# Direct chunk access by ID (O(1) lookup)
chunk = mem.get_chunk(chunks[0].chunk_id)

# Full document reconstruction from chunks
all_chunks = mem.get_document_chunks("docs/spec.pdf")

# Search + LLM answer
answer = mem.ask("What are the system requirements?")

# Cross-session journal
mem.journal_save("Decided to use FastAPI for the gateway")
entries = mem.journal_recall("architecture decisions")

# Management
mem.list()                # → list[DocumentInfo]
mem.stats()               # → StoreStats
mem.remove("docs/old.pdf")

Commands

vstash add <file/dir/url>   Add documents to memory
vstash remember "<text>"    Ingest text directly (no file needed)
vstash ask "<question>"     Answer a question from your documents
vstash search "<query>"     Semantic search without LLM (free, local)
vstash chat                 Interactive Q&A session
vstash list                 Show all documents in memory
vstash stats                Memory statistics (docs, chunks, DB size)
vstash forget <file>        Remove a document from memory
vstash reindex              Re-embed all chunks with a new model
vstash watch <dir>          Auto-ingest on file changes
vstash export               Export chunks as JSONL for training data curation
vstash config               Show current configuration
vstash profile <cmd>        Manage named profiles (create, list, delete, active)
vstash journal <cmd>        Cross-session memory (save, recall, log, prune)
vstash-mcp                  Start MCP server (for Claude Desktop integration)

Filtering with metadata

vstash add notes.md --collection research --project ml-survey --tags "attention,transformers"
vstash list --project ml-survey
vstash ask "what architectures were compared?" --project ml-survey
vstash export --project ml-survey --format jsonl

Documents with YAML frontmatter are parsed automatically:

---
project: ml-survey
layer: literature-review
tags: [attention, transformers]
---

# My Research Notes
...

Configuration

vstash looks for vstash.toml in your current directory, then ~/.vstash/vstash.toml, then falls back to sensible defaults. Run vstash config to see your active settings.

See the Configuration Reference for all options.
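
For orientation, a minimal vstash.toml might look like the sketch below. Only inference.backend, inference.model, chunk_size, and overlap are named on this page; the section layout and default values shown here are assumptions, so consult the Configuration Reference for the real schema:

chunk_size = 1024
overlap = 128             # must satisfy overlap < chunk_size

[inference]
backend = "local"         # or "cerebras" / "openai" with the matching API key env var
model = "llama3.1-8b"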


Privacy

| Component | Data leaves machine? |
| --- | --- |
| Embeddings (FastEmbed) | Never — fully local ONNX |
| Vector store (sqlite-vec) | Never — local .db file (+ .snpv sidecar if snapvec enabled) |
| Semantic search | Never — local embeddings + SQLite |
| Inference (Cerebras/OpenAI) | Yes — query + retrieved chunks sent to API |
| Inference (Ollama) | Never — fully local |

Search is always private. For fully private answers, use a local LLM (default) or skip inference entirely with vstash search.


Supported File Types

PDF, DOCX, PPTX, XLSX, Markdown, TXT, HTML, CSV — and any URL.

Code files (25+ languages with tree-sitter): Python, JavaScript, TypeScript, Go, Rust, Java, C, C++, Ruby, PHP, Swift, Kotlin, Scala, Lua, R, C#, Bash, Zig, Elixir, Erlang, Haskell, OCaml, Dart, Vue, Svelte.


Experiments

| Experiment | Corpus | Key result | Command |
| --- | --- | --- | --- |
| BEIR Benchmark | 5 BEIR datasets, up to 57K docs | Beats BM25 5/5, ColBERTv2 4/5; NDCG@10 = 0.726 on SciFact | python -m experiments.beir_benchmark |
| ArXiv Retrieval | 1,000 ML papers, 3 models | P@5 = 0.703, MRR = 0.895 | python -m experiments.arxiv_retrieval_bench |
| Dataset Discovery | 954 HuggingFace datasets | 91.4% discovery rate | python -m experiments.dataset_discovery |
| Answer Relevance | SciFact, NFCorpus | +8.3% answer quality vs Chroma (LLM judge) | python -m experiments.answer_relevance |

The dataset discovery engine also has an interactive mode — describe what you need, get the right dataset:

python -m experiments.dataset_discovery --interactive
> time series forecasting for retail sales
1. walmart-sales-dataset (time-series-forecasting)  0.87

Run all experiments: python -m experiments.run_all


Documentation

| Guide | Description |
| --- | --- |
| Configuration | Full TOML reference — all sections and options |
| How It Works | Ingestion pipeline, search pipeline, chunking strategies, RRF |
| Memory Scoring | Frequency + decay re-ranking — formula, tuning, disabling |
| MCP Server | MCP integration — 16 tools for any MCP-compatible client |
| Agent Integration | Claude Code, Claude Desktop, and other LLM agents |
| LangChain | VstashRetriever for chains and agents (usage sketched below) |
| Embedding Models | Model comparison and backend selection |
| Experiments | Retrieval benchmarks — hypotheses, results, conclusions |
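
The LangChain guide covers VstashRetriever; a minimal usage sketch might look like this, with the import path and constructor arguments assumed rather than taken from the guide:

from vstash.langchain import VstashRetriever   # hypothetical import path

retriever = VstashRetriever(project="my_agent", top_k=5)   # constructor args assumed
docs = retriever.invoke("deployment strategy")             # standard LangChain retriever call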

Roadmap

  • Phase 1 ✅: Core — ingest, embed, hybrid search, answer
  • Phase 2 ✅: Usability — MCP server, collections, watch mode, metadata, export
  • Phase 3 ✅: Python SDK — from vstash import Memory
  • Phase 4 ✅: LangChain integration — VstashRetriever
  • Phase 5 ✅: Memory scoring — frequency + temporal decay re-ranking
  • Phase 6 ✅: Retrieval quality — distance-based relevance signal, document dedup, context expansion
  • Phase 7 ✅: Multilingual — cross-lingual embeddings, vstash reindex, MMR dedup
  • Phase 8 ✅: Hybrid code splitting — tree-sitter + parso + regex, 25+ languages
  • Phase 9 ✅: Multi-profile — isolated databases, federated search, profile management
  • Phase 10 ✅: Cross-session journal — save, recall, log, prune for agent memory
  • Phase 11 ✅: Direct chunk API — get_chunk/get_chunks for O(1) retrieval by ID

Easter Egg

In a 2018 Cornell paper "Local Homology of Word Embeddings", researchers used the variable v_stash (p. 11) to refer to the "vector of the word stash" — making this the first documented use of the exact term in the context of AI/embeddings.


License

MIT
