A reusable RAG core library built on FAISS and Ollama

Project description

🧠 pyragcore

A reusable, modular RAG (Retrieval-Augmented Generation) core library built on FAISS and Ollama. Use it as the foundation for any AI project that needs document ingestion, semantic search, and LLM-powered responses.


Features

  • 🗂️ FAISS vector store with persistence, deduplication, and metadata filtering
  • 🔢 SentenceTransformer embeddings with GPU support
  • 🔍 Semantic retrieval with MMR search and metadata filtering
  • 🤖 Ollama LLM integration for local, private inference
  • 🎙️ Voice input/output support
  • 🧱 Abstract base classes for building custom pipelines
  • 📦 Modular optional dependencies — install only what you need

Requirements

  • Python 3.13+
  • Ollama installed and running (for LLM features)
  • NVIDIA GPU with CUDA 12.8+ (optional, falls back to CPU)
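
To confirm the Ollama requirement before using the LLM features, a quick check from the shell (llama3.2 is the model name used in the examples below):

ollama --version      # verify the CLI is installed
ollama pull llama3.2  # fetch the model used in the examples
ollama list           # confirm the model is available locally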

Installation

pip install pyragcore          # core only (FAISS + tqdm + langchain-text-splitters)
pip install pyragcore[embeddings]  # + SentenceTransformers
pip install pyragcore[ollama]      # + Ollama LLM
pip install pyragcore[voice]       # + speech input/output
pip install pyragcore[all]         # everything
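
Note: on zsh and other shells that expand square brackets, quote the extras:

pip install "pyragcore[all]"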

Quick Start

from pyragcore.pipeline.base_pipeline import BasePipeline
from pyragcore.embeddings.embedder import Embedder
from pyragcore.retrieval.vector_store import VectorStore
from pyragcore.retrieval.retriver import Retriever
from pyragcore.llm.responder import Responder

# Extend BasePipeline for your use case
class MyPipeline(BasePipeline):
    def ingest(self, source: str) -> str:
        # implement your ingestion logic
        ...

pipeline = MyPipeline(persist_dir="./memory", output_folder="./output")
source_id = pipeline.ingest("./my_document.pdf")
answer = pipeline.ask("What is this document about?", source_id=source_id)
print(answer)

Architecture

pyragcore/
├── CHANGELOG.md
├── LICENSE
├── pyproject.toml
├── py.typed
├── README.md
└── pyragcore
    ├── embeddings
    │   └── embedder.py
    ├── exceptions.py
    ├── ingestion
    │   └── chunker.py
    ├── interfaces
    │   ├── base_chunker.py
    │   ├── base_embedder.py
    │   ├── base_llm.py
    │   ├── base_loader.py
    │   ├── base_retriever.py
    │   └── base_vector_store.py
    ├── llm
    │   ├── prompt.py
    │   └── responder.py
    ├── pipeline
    │   └── base_pipeline.py
    ├── retrieval
    │   ├── retriver.py
    │   └── vector_store.py
    └── utils_io
        ├── choose_model.py
        ├── logger.py
        └── voice.py


Building a Custom Pipeline

Extend BasePipeline and implement ingest():

from pyragcore.pipeline.base_pipeline import BasePipeline
from pyragcore.interfaces.base_loader import BaseLoader
from pyragcore.ingestion.chunker import Chunker
from tqdm import tqdm


class MyLoader(BaseLoader):
    def read(self, path) -> dict:
        # read your source and return
        return {
            "text": "...",
            "metadatas": {
                "file_id": "unique_id",
                "file_name": "my_file.txt",
                "source": path,
            }
        }


class MyPipeline(BasePipeline):
    def __init__(self, persist_dir: str, output_folder: str, model_name: str = "llama3.2"):
        super().__init__(persist_dir, output_folder, model_name)
        self.chunker = Chunker()

    def ingest(self, source: str) -> str:
        loader = MyLoader()
        content = loader.read(source)
        text = content.get("text", "")
        metadata = content.get("metadatas", {})
        source_id = metadata.get("file_id", "")

        if self._is_ingested(source_id):
            print("Already ingested, skipping...")
            return source_id

        chunks = self.chunker.chunk(text, metadata)
        documents, metadatas, ids = [], [], []

        for i, item in enumerate(chunks):
            documents.append(item["chunk"])
            metadatas.append(item["metadatas"])
            ids.append(f"{source_id}_chunk_{i}")

        BATCH_SIZE = 64
        all_embeddings = []
        for start in tqdm(range(0, len(documents), BATCH_SIZE), desc="Embedding"):
            batch = documents[start:start + BATCH_SIZE]
            all_embeddings.extend(self.embedder.embed(batch))

        self.vector_store.add(
            embeddings=all_embeddings,
            documents=documents,
            metadata=metadatas,
            ids=ids
        )
        return source_id
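
With the pipeline defined, usage mirrors the Quick Start (a short sketch; the file path and question are placeholders):

pipeline = MyPipeline(persist_dir="./memory", output_folder="./output")
source_id = pipeline.ingest("./my_file.txt")
answer = pipeline.ask("Summarize this file.", source_id=source_id)
print(answer)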

VectorStore

from pyragcore.retrieval.vector_store import VectorStore

store = VectorStore(dim=768, persist_path="./memory", autosave=True)

# add documents
store.add(embeddings=[[...]], documents=["text"], metadata=[{"file_id": "abc"}], ids=["id_0"])

# search
results = store.search(query_embedding=[...], k=5)

# search with filter
results = store.search_with_filter(query_embedding=[...], k=5, where={"file_id": "abc"})

# MMR search for diversity
results = store.mmr_search(query_embedding=[...], k=5, lamda_param=0.5)

# list ingested files
files = store.list_files()

Embedder

from pyragcore.embeddings.embedder import Embedder

embedder = Embedder(model_name="all-mpnet-base-v2")

# embed multiple texts
embeddings = embedder.embed(["text one", "text two"])

# embed a single query
embedding = embedder.embed_one("what is a database?")
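
Putting the two together (a minimal sketch; all-mpnet-base-v2 produces 768-dimensional vectors, matching the dim used above):

from pyragcore.embeddings.embedder import Embedder
from pyragcore.retrieval.vector_store import VectorStore

embedder = Embedder(model_name="all-mpnet-base-v2")
store = VectorStore(dim=768, persist_path="./memory", autosave=True)

docs = ["FAISS is a library for similarity search.", "Ollama runs LLMs locally."]
store.add(
    embeddings=embedder.embed(docs),
    documents=docs,
    metadata=[{"file_id": "demo"}, {"file_id": "demo"}],
    ids=["demo_chunk_0", "demo_chunk_1"],
)

results = store.search(query_embedding=embedder.embed_one("What is FAISS?"), k=2)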

PyTorch with CUDA

To stay flexible, pyragcore does not pin a specific PyTorch version. Install the build that matches your system:

# CUDA 12.8
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu128

# CPU only
pip install torch torchvision
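
After installing, you can check which device PyTorch will use (pyragcore falls back to CPU when no GPU is available):

import torch

print(torch.cuda.is_available())  # True means embeddings can run on the GPU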

Exceptions

from pyragcore.exceptions import (
    BotRagException,        # base exception
    EmbeddingException,     # embedding failed
    RetrievalException,     # retrieval failed
    VectorStoreException,   # vector store error
    ModelNotFoundException, # ollama model not found
)
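
A typical pattern is to catch the specific failure first and the base exception as a fallback (a sketch; the embed call is just an example):

from pyragcore.exceptions import BotRagException, EmbeddingException

try:
    embeddings = embedder.embed(["some text"])
except EmbeddingException as exc:
    print(f"Embedding failed: {exc}")
except BotRagException as exc:
    print(f"RAG error: {exc}")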

Custom Backends (v0.2.0+)

As of v0.2.0, you can swap out any component for your own implementation:

Custom Embedder

from pyragcore import BaseEmbedder

class MyEmbedder(BaseEmbedder):
    def embed(self, texts: list[str]) -> list[list[float]]:
        # your implementation
        ...
    
    def embed_one(self, text: str) -> list[float]:
        ...
    
    def get_dimension(self) -> int:
        return 768
if __name__=="__main__":
    rag = RagPipeline("memory", "output", embedder=MyEmbedder())
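
For illustration only, here is a toy hashing-based embedder that satisfies the interface (deterministic, no model download, not a real semantic embedder; handy for tests):

import hashlib

from pyragcore import BaseEmbedder


class HashEmbedder(BaseEmbedder):
    """Toy embedder: hashes tokens into a fixed-size bag-of-words vector."""

    def __init__(self, dim: int = 256):
        self.dim = dim

    def embed(self, texts: list[str]) -> list[list[float]]:
        return [self.embed_one(t) for t in texts]

    def embed_one(self, text: str) -> list[float]:
        vec = [0.0] * self.dim
        for token in text.lower().split():
            idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % self.dim
            vec[idx] += 1.0
        return vec

    def get_dimension(self) -> int:
        return self.dim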

Custom Vector Store

from pyragcore import BaseVectorStore

class MyVectorStore(BaseVectorStore):
    def add(self, embeddings, documents, metadata, ids):
        ...
    
    def search(self, query_embedding, k=5):
        ...
if __name__ =="__main__":
    rag = RagPipeline("memory", "output", vector_store=MyVectorStore())
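
As a minimal illustration, an in-memory store with brute-force cosine similarity (fine for tests or tiny corpora; the return shape here is an assumption, so adjust it to whatever your pipeline expects):

import numpy as np

from pyragcore import BaseVectorStore


class InMemoryStore(BaseVectorStore):
    """Toy store: keeps everything in Python lists and scans all vectors per query."""

    def __init__(self):
        self._embeddings, self._documents, self._metadata, self._ids = [], [], [], []

    def add(self, embeddings, documents, metadata, ids):
        self._embeddings.extend(embeddings)
        self._documents.extend(documents)
        self._metadata.extend(metadata)
        self._ids.extend(ids)

    def search(self, query_embedding, k=5):
        matrix = np.array(self._embeddings, dtype=float)
        query = np.array(query_embedding, dtype=float)
        scores = matrix @ query / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query) + 1e-10)
        top = np.argsort(scores)[::-1][:k]
        return [
            {"id": self._ids[i], "document": self._documents[i],
             "metadata": self._metadata[i], "score": float(scores[i])}
            for i in top
        ]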

Projects Built with pyragcore

  • StudyBot — Chat with your documents and YouTube videos
  • Coder-Assistant — AI assistant for your codebase (work in progress)

Contributing

  1. Fork the repo
  2. Create a feature branch (git checkout -b feature-name)
  3. Commit your changes (git commit -m "Add feature")
  4. Push to the branch (git push origin feature-name)
  5. Open a Pull Request

License

MIT

Project details


Download files

Download the file for your platform.

Source Distribution

pyragcore-0.2.0.tar.gz (19.5 kB)

Uploaded Source

Built Distribution

pyragcore-0.2.0-py3-none-any.whl (21.8 kB)

Uploaded Python 3

File details

Details for the file pyragcore-0.2.0.tar.gz.

File metadata

  • Download URL: pyragcore-0.2.0.tar.gz
  • Upload date:
  • Size: 19.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for pyragcore-0.2.0.tar.gz

  • SHA256: c914c28cc21e8b0532004c465d39869ce337f3654cdde96dd6bea295d26ac119
  • MD5: 6e87fa2890923ff7f0566694474b0341
  • BLAKE2b-256: de265db9b7fb9760bdaa209e7cc813c85755bd211c1794d51bb0f49071e7d4a3

File details

Details for the file pyragcore-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: pyragcore-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 21.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for pyragcore-0.2.0-py3-none-any.whl

  • SHA256: 11f19d5e2866435e32650f0b18a143bd5aaa238f17d050c6afb789790e5694a5
  • MD5: 8b42f80d7564891de623b4bcb818db33
  • BLAKE2b-256: 1cb570da0606ca0f458d26c4a7d4bb89f05500a0d2a2aa0f766cfc5f363895f2
