# ZettaBrain RAG

Local private RAG pipeline — LangChain + Ollama + ChromaDB + NFS support. Your documents, your hardware, zero cloud.
## Install

```shell
pip install zettabrain-rag
```

Requires Python 3.10+ and Ollama.
## One-time setup

**1. Install Ollama and pull models**

```shell
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.1:8b
ollama pull nomic-embed-text
```

**2. Mount the NFS share and build the vector store**

```shell
sudo zettabrain-setup
```

The wizard prompts for the NFS server IP and export path, mounts the share at `/mnt/Rag-data`, and then builds the vector store automatically.

**3. Start chatting**

```shell
zettabrain-chat
```
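After step 1, you can confirm the two pulled models are visible to Ollama before building the vector store. The helper below parses the JSON returned by Ollama's `GET /api/tags` endpoint; the function name and the check itself are illustrative, not part of this package:

```python
REQUIRED = {"llama3.1:8b", "nomic-embed-text"}

def missing_models(tags_payload: dict, required: set[str]) -> set[str]:
    """Return required models absent from an Ollama /api/tags response."""
    present = {m["name"] for m in tags_payload.get("models", [])}
    # Ollama reports names with an explicit tag, e.g. "nomic-embed-text:latest",
    # so also compare on the base name before the colon.
    bases = {name.split(":")[0] for name in present}
    return {r for r in required
            if r not in present and r.split(":")[0] not in bases}

# Live usage (requires a running Ollama server):
#   import json, urllib.request
#   with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
#       print(missing_models(json.load(resp), REQUIRED))
```

An empty result means both models are ready; anything returned still needs an `ollama pull`.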
## Commands

| Command | What it does |
|---|---|
| `sudo zettabrain-setup` | NFS mount wizard + auto vector store build |
| `zettabrain-chat` | Start interactive RAG chat |
| `zettabrain-chat --rebuild` | Rebuild vector store, then chat |
| `zettabrain-chat --debug` | Show retrieved chunks on every query |
| `zettabrain-ingest` | Ingest documents without starting chat |
| `zettabrain-ingest --file /path/to/file.pdf` | Ingest a single file |
| `zettabrain-ingest --stats` | Show what's in the vector store |
| `zettabrain-ingest --clear` | Wipe the vector store |
| `zettabrain-status` | Show install paths and store statistics |
## Directory structure after install

```
/zettabrain/
├── nfs_setup.sh                ← NFS mount wizard (run via sudo zettabrain-setup)
└── src/
    ├── 03_langchain_rag.py     ← main RAG pipeline
    ├── 05_ingest_documents.py  ← ingestion utility
    ├── 01_chromadb_setup.py    ← diagnostic: verify ChromaDB
    ├── 02_embeddings_test.py   ← diagnostic: verify embeddings
    └── zettabrain_vectorstore/ ← auto-generated, do not commit
```
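Ingestion splits each document into chunks before embedding, sized by `ZETTABRAIN_CHUNK_SIZE` (see configuration below). A minimal sketch of fixed-size character chunking in that spirit; the 200-character overlap is an assumption for illustration, not a documented setting:

```python
def chunk_text(text: str, chunk_size: int = 1500, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks; consecutive chunks share `overlap` chars."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window advances each iteration
    chunks = []
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append(piece)
    return chunks
```

Overlap keeps a sentence that straddles a chunk boundary retrievable from either side.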
## Configuration via environment variables

```shell
export ZETTABRAIN_DOCS=/mnt/Rag-data   # documents folder (NFS mount)
export ZETTABRAIN_CHROMA=./zettabrain_vectorstore
export ZETTABRAIN_LLM_MODEL=llama3.1:8b
export ZETTABRAIN_EMBED_MODEL=nomic-embed-text
export ZETTABRAIN_CHUNK_SIZE=1500
export OLLAMA_HOST=http://localhost:11434
```
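A sketch of how these variables might be read with the values above as defaults. The `Config` dataclass and `load_config` helper are illustrative; they mirror the table but are not the package's actual loader:

```python
import os
from dataclasses import dataclass

@dataclass
class Config:
    docs_dir: str
    chroma_dir: str
    llm_model: str
    embed_model: str
    chunk_size: int
    ollama_host: str

def load_config(env=os.environ) -> Config:
    """Build a Config from environment variables, falling back to documented defaults."""
    return Config(
        docs_dir=env.get("ZETTABRAIN_DOCS", "/mnt/Rag-data"),
        chroma_dir=env.get("ZETTABRAIN_CHROMA", "./zettabrain_vectorstore"),
        llm_model=env.get("ZETTABRAIN_LLM_MODEL", "llama3.1:8b"),
        embed_model=env.get("ZETTABRAIN_EMBED_MODEL", "nomic-embed-text"),
        chunk_size=int(env.get("ZETTABRAIN_CHUNK_SIZE", "1500")),
        ollama_host=env.get("OLLAMA_HOST", "http://localhost:11434"),
    )
```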
## Supported document formats

`.pdf` `.txt` `.md` `.docx`
## Hardware guide

| RAM | Model | Notes |
|---|---|---|
| 8 GB | llama3.2:3b | Basic |
| 16 GB | llama3.1:8b | Recommended |
| 32 GB | mistral-nemo:12b | Better reasoning |
| Apple M3/M4 | llama3.1:70b-q4 | Excellent |
## License

MIT — © ZettaBrain
## File details: zettabrain_rag-0.1.1.tar.gz

- Download URL: zettabrain_rag-0.1.1.tar.gz
- Upload date:
- Size: 5.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `84c407f7555201edde6e5db0acd3bf6a0c997eadeb46b40cc747d5cf8a2412ce` |
| MD5 | `671c65a703ee8748f64ccf967926a8a2` |
| BLAKE2b-256 | `9d1bd00f5210975d0dc6e514b763bd83765a72684b385b1846987e59683301fe` |
## File details: zettabrain_rag-0.1.1-py3-none-any.whl

- Download URL: zettabrain_rag-0.1.1-py3-none-any.whl
- Upload date:
- Size: 5.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3

| Algorithm | Hash digest |
|---|---|
| SHA256 | `2be6f96aa56352a4aebd3c9793d1b44caf18a42fa5c5f136a903aaed345d7862` |
| MD5 | `eeaaa51d52919cd77f0b9f75b4057c48` |
| BLAKE2b-256 | `06cbcf398f52d9dbbe0b2394a16ba6c49cb5e004aed0bf3ea0e54840b6d588ac` |