# Sutra-RAG (The Contextual Retriever)
For Indonesian instructions, see README_id.md.
Sutra-RAG is a Retrieval-Augmented Generation (RAG) library designed to bridge raw data with Large Language Model (LLM) prompts. Drawing on published research into context orchestration and hallucination in LLMs, Sutra serves as a "factual grounding" layer that keeps generative output tied to the retrieved context.
## Key Features
- Vector-Graph Hybrid Search Ready: Built on advanced contextual-retrieval research, Sutra is designed to integrate dense vector similarity search (FAISS, pgvector) alongside structured knowledge-graph analysis (e.g., using Mandala-GNN).
- Encoder-Agnostic Interface: Embedding models (SentenceTransformer, BGE, OpenAI) can be swapped seamlessly through a unified interface.
- Semantic Reranker (Small ML): Lightweight cross-encoders semantically reorder the initially retrieved top-$k$ documents, improving alignment with user needs as suggested in the learning-outcomes literature.
- Dynamic Chunking: Fragments raw text along syntactic boundaries (such as sentences), avoiding the mid-sentence truncation that often triggers LLM hallucinations.
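The chunking idea can be sketched in a few lines. This is a minimal illustration of sentence-boundary chunking under a character budget, not Sutra-RAG's actual `DynamicChunker` implementation:

```python
import re

def chunk_by_sentences(text: str, max_chars: int = 200) -> list[str]:
    """Split text into chunks that never break mid-sentence.

    Sentences are greedily packed into chunks of up to max_chars;
    a sentence longer than the budget becomes its own chunk.
    """
    # Naive sentence splitter: break after ., ! or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

text = ("RAG grounds LLMs in retrieved facts. "
        "Chunking controls context size. "
        "Mid-sentence cuts lose meaning.")
print(chunk_by_sentences(text, max_chars=60))
```

Because chunks always end at a sentence boundary, no retrieved fragment starts or stops mid-thought.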
## Installation

```bash
pip install sutra-rag
```
## Quick Start

Check the `examples/` directory for full usage.

```python
from sutra import SutraRAG
from sutra.encoder import SentenceTransformerEncoder
from sutra.retriever import FaissRetriever
from sutra.reranker import CrossEncoderReranker

# 1. Initialize components
encoder = SentenceTransformerEncoder(model_name="all-MiniLM-L6-v2")
retriever = FaissRetriever(encoder=encoder)
reranker = CrossEncoderReranker(model_name="cross-encoder/ms-marco-MiniLM-L-6-v2")

# 2. Set up the pipeline
pipeline = SutraRAG(encoder=encoder, retriever=retriever, reranker=reranker)

# 3. Ingest data
pipeline.add_document("Sutra-RAG bridges textual facts with semantic truth.")

# 4. Hybrid context extraction
results = pipeline.retrieve("What does Sutra-RAG do?", top_k=1)
print(results)
```
## Scientific & Mathematical Foundations
Sutra-RAG's architecture is designed to constrain LLM hallucination in educational domains. Retrieval quality is improved through a three-stage mechanism: dense retrieval, structural graph filtering, and semantic reranking.
### 1. Initial Dense Retrieval (Bi-Encoder)
The initial retrieval phase captures semantic similarity using a vector space model. Given a query $q$ and a document candidate $d$, they are encoded into $n$-dimensional embeddings $\mathbf{e}_q, \mathbf{e}_d \in \mathbb{R}^n$. The similarity is formulated using cosine similarity or $L_2$ distance:
$$ \text{sim}(q, d) = \frac{\mathbf{e}_q \cdot \mathbf{e}_d}{ |\mathbf{e}_q| |\mathbf{e}_d| } $$
This ensures that the top-$K$ candidates are semantically aligned but does not guarantee contextual correctness in an educational flow.
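For concreteness, the similarity above can be computed directly. This is a plain NumPy sketch, independent of any particular encoder:

```python
import numpy as np

def cosine_sim(e_q: np.ndarray, e_d: np.ndarray) -> float:
    """Cosine similarity: dot product normalized by both vector norms."""
    return float(np.dot(e_q, e_d) / (np.linalg.norm(e_q) * np.linalg.norm(e_d)))

e_q = np.array([1.0, 0.0, 1.0])
e_d = np.array([2.0, 0.0, 2.0])
print(cosine_sim(e_q, e_d))  # parallel vectors score 1.0 regardless of magnitude
```

In a real pipeline, `e_q` and `e_d` would come from the bi-encoder; FAISS performs this comparison at scale over the whole index.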
### 2. Contextual Graph Validation (Mandala-GNN)
To prevent "conceptual leapfrogging", Sutra interfaces with Mandala-GNN. The relevance of a document node $v_d$ within the knowledge graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ is computed using Graph Convolutional Networks (GCNs) or centrality metrics (e.g., PageRank), yielding a structural score $S_{graph}(d)$:
$$ S_{graph}(d) = (1 - \lambda) + \lambda \sum_{u \in \mathcal{N}(v_d)} \frac{S_{graph}(u)}{\deg(u)} $$
where $\mathcal{N}(v_d)$ denotes the prerequisite nodes connected to the target document and $\deg(u)$ is the out-degree of node $u$.
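The recurrence above is PageRank-style and can be solved by fixed-point iteration. This is a toy sketch over a hypothetical prerequisite graph (not Mandala-GNN's code; the damping value $\lambda = 0.85$ is illustrative):

```python
def graph_scores(neighbors: dict, lam: float = 0.85, iters: int = 50) -> dict:
    """Iterate S(d) = (1 - lam) + lam * sum_{u in N(d)} S(u) / deg(u).

    neighbors[d] lists the prerequisite nodes pointing into d;
    deg(u) is u's out-degree (how many nodes u is a prerequisite for).
    """
    out_deg: dict = {}
    for preds in neighbors.values():
        for u in preds:
            out_deg[u] = out_deg.get(u, 0) + 1
    scores = {n: 1.0 for n in neighbors}
    for _ in range(iters):
        # Simultaneous update from the previous iteration's scores.
        scores = {
            d: (1 - lam) + lam * sum(scores[u] / out_deg[u] for u in preds)
            for d, preds in neighbors.items()
        }
    return scores

# Toy curriculum: "basics" is a prerequisite for both other topics.
graph = {"basics": [], "vectors": ["basics"], "rag": ["basics", "vectors"]}
scores = graph_scores(graph)
```

Nodes reachable through more prerequisite mass accumulate higher structural scores, which is exactly what the filtering stage rewards.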
### 3. Cross-Encoder Reranking
While the bi-encoder enables fast indexing, it lacks deep query-document interaction. We therefore employ a lightweight cross-encoder function $f_{CE}$ that takes the concatenated $[q ; d]$ as input. The final scorer blends the three signals into a single top-$K$ ranking:
$$ \text{FinalScore}(q, d) = \alpha \cdot f_{CE}(q, d) + \beta \cdot \text{sim}(q, d) + \gamma \cdot S_{graph}(d) $$
(where $\alpha, \beta, \gamma$ are tunable coefficients, often favoring $\alpha$ heavily).
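The blend is just a weighted sum, so it is easy to sketch. The weights and per-document component scores below are hypothetical, chosen only so that $\alpha$ dominates as the text suggests:

```python
def final_score(f_ce: float, sim: float, s_graph: float,
                alpha: float = 0.6, beta: float = 0.25, gamma: float = 0.15) -> float:
    """Blend cross-encoder, dense-similarity, and graph scores."""
    return alpha * f_ce + beta * sim + gamma * s_graph

# Hypothetical per-document scores: (f_CE, sim, S_graph)
candidates = {
    "doc_a": (0.9, 0.7, 0.4),
    "doc_b": (0.4, 0.9, 0.9),
}
ranked = sorted(candidates, key=lambda d: final_score(*candidates[d]), reverse=True)
print(ranked)  # doc_a ranks first: the heavily weighted cross-encoder score dominates
```

Note that `doc_b` wins on both similarity and graph score, yet loses overall because $\alpha$ weights the cross-encoder most heavily.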
By constraining the retrieved context to the intersection of semantic proximity and graph relevancy, the LLM's prompt is grounded more precisely, lowering hallucination rates compared to naïve RAG approaches.
## Scientific References & Theoretical Framework

| Approach / Algorithm | Component in Sutra-RAG | Theoretical Goal / Journal Reference |
|---|---|---|
| Dense Passage Retrieval (DPR) | `FaissRetriever`, `PgVectorRetriever` | Karpukhin et al. (2020), *Dense Passage Retrieval for Open-Domain Question Answering*. Shows that dense embedding spaces outperform sparse statistical rankers (TF-IDF/BM25) on semantic matching. |
| Graph Convolution & Centrality | Mandala-GNN integration | Kipf & Welling (2017), *Semi-Supervised Classification with Graph Convolutional Networks*. Motivates multi-hop prerequisite validation via graph convolutions to prevent invalid curriculum leaps. |
| Cross-Encoder Rescoring | `CrossEncoderReranker` | Nogueira & Cho (2019), *Passage Re-ranking with BERT*. Establishes that full self-attention over the concatenated [Query ; Document] input substantially improves top-$K$ relevance over bi-encoders. |
| Syntactic Boundary Allocation | `DynamicChunker` | Gao et al. (2023), *Retrieval-Augmented Generation for Large Language Models: A Survey*. Emphasizes that mid-sentence truncation fragments context and directly drives generation hallucinations. |
## Download files
### sutra_rag-0.1.0.tar.gz (source distribution)

File metadata:

- Download URL: sutra_rag-0.1.0.tar.gz
- Upload date:
- Size: 15.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.0

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `d39ba23c33b0de3ae0371ee859fc7eea3be05cf322a5368124e96251812131d6` |
| MD5 | `afab78418931f4701832203e8af7c535` |
| BLAKE2b-256 | `31501b43bb11573259f7cba958d3470a756f38a7aa893ba46025ea4b9e3849b6` |
### sutra_rag-0.1.0-py3-none-any.whl (built distribution)

File metadata:

- Download URL: sutra_rag-0.1.0-py3-none-any.whl
- Upload date:
- Size: 15.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.0

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `519e991fe8ecadf76d925d8691dd33ff729cdceadd791b362706da4ee1f92e2c` |
| MD5 | `fb38ad51fe1a090011ea935134fe49ab` |
| BLAKE2b-256 | `ffe5217e92ede8542bf9e5a08af97fca2cb6bd8af1fa325d81b52122165b2cfb` |