
vstash

Local document memory with instant semantic search.


Drop any file. Ask anything. Get an answer in under a second.

pip install vstash
vstash add paper.pdf notes.md https://example.com/article
vstash search "what's the main argument about X?"

Why vstash?

Most RAG tools are slow, cloud-dependent, or require a running server. vstash is none of those things.

| Layer | Technology | Why |
|---|---|---|
| Embeddings | FastEmbed (ONNX Runtime) | ~700 chunks/s, fully local, no server |
| Vector store | sqlite-vec | Single .db file, cosine similarity, zero deps |
| Keyword search | FTS5 (SQLite) | Exact matches, Porter stemming, built into SQLite |
| Hybrid ranking | Reciprocal Rank Fusion | Best of both: semantic + keyword, no training needed (sketch below) |
| Inference | Cerebras / Ollama / OpenAI | ~2,000 tok/s via Cerebras, or 100% local via Ollama |
| Parsing | markitdown | PDF, DOCX, PPTX, XLSX, HTML, Markdown, URLs |

Zero cloud required for search. Inference is optional.
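
How the hybrid ranking works: RRF merges the semantic and keyword result lists by summing reciprocal ranks, so a document that ranks well in either list rises to the top. A minimal sketch of the general technique (illustrative only, not vstash's internal code; k = 60 is the constant conventionally used with RRF):

def rrf_fuse(semantic_ids, keyword_ids, k=60):
    # Sum 1/(k + rank) across both ranked lists of document IDs;
    # a document present in both lists accumulates both contributions.
    scores = {}
    for ranking in (semantic_ids, keyword_ids):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)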


Install

pip install vstash

Or from source:

git clone https://github.com/stffns/vstash
cd vstash
pip install -e .

Quick Start

Search (free, no API key needed)

Semantic search works 100% locally — no inference backend required:

vstash add report.pdf
vstash add ~/docs/notes.md
vstash add https://arxiv.org/abs/2310.06825
vstash search "what is the proposed method?"

Ask (requires an LLM backend)

To get natural language answers, configure an inference backend:

# Option A: Fully local with Ollama (free, private)
ollama pull llama3.2

# Option B: Fast with Cerebras (free tier available)
export CEREBRAS_API_KEY=your_key_here

# Option C: OpenAI or any compatible API
export OPENAI_API_KEY=your_key_here

Then:

vstash ask "summarize the key findings"
vstash chat   # interactive Q&A session

Python SDK

Use vstash as a building block in your own agents and pipelines:

from vstash import Memory

mem = Memory(project="my_agent")
mem.add("docs/spec.pdf")

# Semantic search — free, no LLM
chunks = mem.search("deployment strategy", top_k=5)
for c in chunks:
    print(c.text, c.score)

# Search + LLM answer
answer = mem.ask("What are the system requirements?")

# Management
mem.list()                # → list[DocumentInfo]
mem.stats()               # → StoreStats
mem.remove("docs/old.pdf")
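
Scores returned by mem.search reflect the hybrid ranking plus memory scoring (frequency + temporal decay re-ranking; the Memory Scoring guide has the real formula). For intuition only, a common shape for that kind of re-rank looks like this sketch, where alpha and half_life_days are hypothetical tuning knobs:

import math
import time

def decayed_score(base_score, hit_count, last_access_ts,
                  alpha=0.1, half_life_days=30.0):
    # Boost chunks that are retrieved often, and let the boost decay
    # as time since the last access grows (exponential half-life).
    age_days = (time.time() - last_access_ts) / 86400.0
    decay = 0.5 ** (age_days / half_life_days)
    return base_score * (1.0 + alpha * math.log1p(hit_count) * decay)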

Commands

vstash add <file/dir/url>   Add documents to memory
vstash ask "<question>"     Answer a question from your documents
vstash search "<query>"     Semantic search without LLM (free, local)
vstash chat                 Interactive Q&A session
vstash list                 Show all documents in memory
vstash stats                Memory statistics (docs, chunks, DB size)
vstash forget <file>        Remove a document from memory
vstash watch <dir>          Auto-ingest on file changes
vstash export               Export chunks as JSONL for training data curation
vstash config               Show current configuration
vstash-mcp                  Start MCP server (for Claude Desktop integration)

Filtering with metadata

vstash add notes.md --collection research --project ml-survey --tags "attention,transformers"
vstash list --project ml-survey
vstash ask "what architectures were compared?" --project ml-survey
vstash export --project ml-survey --format jsonl
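
vstash export emits line-delimited JSON, one chunk per line. The record below is an illustrative guess at the shape, not the actual schema; inspect your own export for the real field names:

{"text": "Attention weights are computed as ...", "source": "notes.md", "project": "ml-survey", "tags": ["attention", "transformers"]}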

Documents with YAML frontmatter are parsed automatically:

---
project: ml-survey
layer: literature-review
tags: [attention, transformers]
---

# My Research Notes
...
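
Assuming the SDK mirrors the CLI flags above, attaching metadata from Python might look like the following sketch (the collection and tags keyword arguments are assumptions, not confirmed signatures):

from vstash import Memory

mem = Memory(project="ml-survey")
# Hypothetical kwargs mirroring --collection and --tags on the CLI
mem.add("notes.md", collection="research", tags=["attention", "transformers"])
chunks = mem.search("attention mechanisms", top_k=5)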

Configuration

vstash looks for vstash.toml in your current directory, then ~/.vstash/vstash.toml, then falls back to sensible defaults. Run vstash config to see your active settings.

See the Configuration Reference for all options.
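
For orientation, a minimal vstash.toml might look like the sketch below. The section and key names are illustrative guesses (only backend = "ollama" is mentioned elsewhere on this page); the Configuration Reference has the real schema:

# Hypothetical vstash.toml; see the Configuration Reference for actual keys
[inference]
backend = "ollama"      # or "cerebras" / "openai"
model = "llama3.2"

[search]
top_k = 5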


Privacy

| Component | Data leaves machine? |
|---|---|
| Embeddings (FastEmbed) | Never — fully local ONNX |
| Vector store (sqlite-vec) | Never — local .db file |
| Semantic search | Never — local embeddings + SQLite |
| Inference (Cerebras/OpenAI) | Yes — query + retrieved chunks sent to API |
| Inference (Ollama) | Never — fully local |

For full privacy, use backend = "ollama" or skip inference entirely and use vstash search instead of vstash ask.


Supported File Types

PDF, DOCX, PPTX, XLSX, Markdown, TXT, HTML, CSV, Python, JavaScript, TypeScript, Go, Rust, Java — and any URL.


Documentation

| Guide | Description |
|---|---|
| Configuration | Full TOML reference — all sections and options |
| How It Works | Ingestion pipeline, search pipeline, chunking strategies, RRF |
| Memory Scoring | Frequency + decay re-ranking — formula, tuning, disabling |
| MCP Server | Claude Desktop integration setup |
| LangChain | VstashRetriever for chains and agents (example below) |
| Embedding Models | Model comparison and backend selection |
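
Since VstashRetriever plugs into LangChain, usage presumably follows the standard retriever pattern. A hypothetical sketch (the import path and constructor arguments are assumptions; see the LangChain guide for the real API):

# Hypothetical import path and constructor; invoke() is the standard
# LangChain retriever entry point.
from vstash.langchain import VstashRetriever

retriever = VstashRetriever(project="my_agent", top_k=5)
docs = retriever.invoke("deployment strategy")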

Roadmap

  • Phase 1 ✅: Core — ingest, embed, hybrid search, answer
  • Phase 2 ✅: Usability — MCP server, collections, watch mode, metadata, export
  • Phase 3 ✅: Python SDK — from vstash import Memory
  • Phase 4 ✅: LangChain integration — VstashRetriever
  • Phase 5 ✅: Memory scoring — frequency + temporal decay re-ranking
  • Phase 6: Sync — cr-sqlite CRDT peer-to-peer sync, multiple profiles

Easter Egg

In a 2018 Cornell paper "Local Homology of Word Embeddings", researchers used the variable v_stash (p. 11) to refer to the "vector of the word stash" — making this the first documented use of the exact term in the context of AI/embeddings.


License

MIT
