A patent-pending, embedded, multimodal RAG database that performs automated ingestion, hybrid vector+keyword search, and offline retrieval entirely inside a portable single-file SQLite container.
RAGdb: The Portable Multimodal Knowledge Container
RAGdb is a state-of-the-art, serverless, embedded database designed for offline Retrieval-Augmented Generation (RAG).
Unlike traditional vector databases that require cloud infrastructure, Docker containers, or heavy GPU dependencies, RAGdb consolidates automated ingestion, multimodal extraction, vector storage, and hybrid retrieval into a single, portable SQLite-based container (.ragdb).
It serves as the "Long-Term Memory" for AI agents, running entirely on local hardware, edge devices, or within application binaries.
🚀 Key Capabilities
1. Unified Ingestion Pipeline
Automatically detects, parses, and normalizes content from a wide array of unstructured data sources into a structured knowledge graph.
- Documents: .pdf, .docx, .txt, .md, .csv, .json, .xlsx
- Media: images (OCR), audio/video (metadata tagging)
- Code: .py, .js, .html, .xml
2. State-of-the-Art Hybrid Search
RAGdb implements a Hybrid Retrieval Engine that goes beyond plain vector similarity by combining:
- TF-IDF Vector Search: sublinear TF-IDF weighted vectors scored with cosine similarity for topical relevance.
- Sparse Keyword Search: exact-substring boosting for precise entity matching (e.g., finding a specific invoice number or name).
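The two signals above can be combined with a simple weighted sum. The sketch below is illustrative only, not RAGdb's internal implementation; the function names and the `boost` weight are assumptions:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sublinear TF-IDF vectors for a tiny corpus (illustration only)."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.lower().split()))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for d in docs:
        tf = Counter(d.lower().split())
        vecs.append({t: (1 + math.log(c)) * idf[t] for t, c in tf.items()})
    return vecs, idf

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query, docs, boost=0.5, top_k=3):
    """Score each doc by TF-IDF cosine, plus a flat boost on exact substring match."""
    vecs, idf = tfidf_vectors(docs)
    qtf = Counter(query.lower().split())
    qvec = {t: (1 + math.log(c)) * idf.get(t, 0.0) for t, c in qtf.items()}
    scored = []
    for doc, vec in zip(docs, vecs):
        score = cosine(qvec, vec)
        if query.lower() in doc.lower():  # exact-substring boost for entities
            score += boost
        scored.append((score, doc))
    return sorted(scored, reverse=True)[:top_k]

docs = ["invoice INV-2041 for Q3 services",
        "meeting notes about quarterly projections",
        "invoice INV-7733 overdue"]
print(hybrid_search("INV-2041", docs, top_k=1)[0][1])
```

The boost term is what lets an exact identifier like an invoice number win even when its cosine score against the rest of the query is low.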
3. Zero-Infrastructure Architecture
- No Vector DB Server (Replaces Pinecone/Weaviate/Milvus)
- No Heavy ML Frameworks (Zero PyTorch/Transformers dependency by default)
- Single-File Portability: The entire database is a single file that can be emailed, version-controlled, or embedded.
📦 Installation
RAGdb is modular. Install only what you need.
Option A: Lightweight Core (< 30MB)
Best for text, documents, code, and structured data.
pip install ragdb
Option B: Full Multimodal SOTA (~100MB)
Includes RapidOCR (ONNX) for high-fidelity text extraction from images and scanned PDFs.
pip install "ragdb[ocr]"
⚡ Quick Start
1. Build a Knowledge Base
RAGdb uses Incremental Ingestion. It hashes files and only processes new or modified content, making it efficient for large directories.
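The hash-and-skip pattern behind incremental ingestion can be sketched as follows; `incremental_ingest` and the `seen` cache are hypothetical names for illustration, not RAGdb's API:

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Hash file contents in chunks so large files never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def incremental_ingest(folder: str, seen: dict) -> list:
    """Return only the files whose content is new or changed since the last run."""
    changed = []
    for path in sorted(Path(folder).rglob("*")):
        if not path.is_file():
            continue
        digest = file_sha256(path)
        if seen.get(str(path)) != digest:  # new file, or content modified
            seen[str(path)] = digest
            changed.append(str(path))
    return changed
```

On a second pass over an unchanged directory the function returns an empty list, which is what makes repeated `ingest_folder` calls over large trees cheap.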
```python
from ragdb import RAGdb

# Initialize the container (creates 'knowledge.ragdb')
db = RAGdb("knowledge.ragdb")

# Recursively ingest a folder of mixed content (PDFs, images, Excel)
db.ingest_folder("./my_documents")
```
2. Perform Hybrid Search
```python
# Search using natural language
results = db.search("What are the Q3 financial projections?", top_k=3)

for res in results:
    print(f"[{res.score:.4f}] {res.media_type.upper()} - {res.path}")
    print(f"Preview: {res.content[:200]}...\n")
```
🤖 Integration: RAG with LLMs
RAGdb is designed to be the retrieval backend for Large Language Models (OpenAI, Claude, LlamaCPP).
```python
from openai import OpenAI
from ragdb import RAGdb

client = OpenAI(api_key="sk-...")
db = RAGdb("corporate_docs.ragdb")

def chat_with_data(user_query):
    # 1. Retrieve relevant context locally (no network round-trip)
    results = db.search(user_query, top_k=3)
    if not results:
        return "No relevant data found."

    # 2. Construct the context block
    context = "\n".join(r.content for r in results)

    # 3. Generate the answer via the LLM
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer based strictly on the context provided."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {user_query}"},
        ],
    )
    return response.choices[0].message.content
```
🛠 Technical Architecture
RAGdb operates on a novel "Ingest-Normalize-Index" pipeline:
- Detection: Magic-byte analysis determines file modality.
- Extraction:
- Text: UTF-8 normalization.
- OCR: ONNX-based runtime (if enabled) for edge-optimized image processing.
- Tables: Structure preservation for CSV/Excel.
- Vectorization: Sublinear Term-Frequency scaling with Inverse Document Frequency (IDF) weighting.
- Storage: ACID-compliant SQLite container with Write-Ahead Logging (WAL) enabled for concurrency.
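The Detection stage can be illustrated with a few well-known magic-byte signatures; the table below is a minimal sample for illustration, not RAGdb's full detector:

```python
# Map leading byte signatures to a modality label.
MAGIC = {
    b"%PDF-": "pdf",
    b"PK\x03\x04": "zip-container",   # .docx/.xlsx are ZIP archives
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
}

def detect_modality(data: bytes) -> str:
    """Classify a file by its leading bytes, falling back to plain text."""
    for magic, kind in MAGIC.items():
        if data.startswith(magic):
            return kind
    return "text"  # route to UTF-8 normalization
```

Sniffing bytes rather than trusting file extensions is what lets the pipeline handle a `.txt` file that is actually a renamed PDF.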
🌐 API Server (Optional)
Expose your .ragdb file as a microservice using the built-in FastAPI wrapper.
pip install "ragdb[server]"
uvicorn ragdb.server:create_app --factory --port 8000
Endpoints:
- POST /ingest
- GET /search?q=...
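Once the server is running, the two endpoints can be exercised with curl. The multipart field name and the exact response shapes shown here are assumptions, not documented parts of the API:

```shell
# Ingest a document (the "file" upload field name is an assumption)
curl -X POST http://localhost:8000/ingest -F "file=@report.pdf"

# Hybrid search via query string
curl "http://localhost:8000/search?q=Q3+financial+projections"
```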
📄 License
This project is licensed under the Apache 2.0 License. It is free for commercial use, modification, and distribution.
Disclaimer: RAGdb stores extracted knowledge representations, not raw file backups. Always maintain backups of your original source files.
Download files
Source Distribution
Built Distribution
File details
Details for the file ragdb-1.0.3.tar.gz.
File metadata
- Download URL: ragdb-1.0.3.tar.gz
- Upload date:
- Size: 17.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 1d4ae3ea7809c8dca43042932935b5650bb763139416c2ec43532918e1ed30f7 |
| MD5 | 861e7c7151f312c3882da5d4e2ab2a9c |
| BLAKE2b-256 | fa4888909c29bd02671a0311a1152a07be388019f06460015d3092a1f3e4af73 |
File details
Details for the file ragdb-1.0.3-py3-none-any.whl.
File metadata
- Download URL: ragdb-1.0.3-py3-none-any.whl
- Upload date:
- Size: 15.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8a22f6536299f73638eaa01fb783068c69987320059d6231c476ac35f0f76e9b |
| MD5 | 2f74839ec030faf58ee1ab28445b397a |
| BLAKE2b-256 | 9adc18551f2b3e6b214553aee97f998cdc9c77548a8f6b6b1acaa406ec9e35ee |