
A RAG (Retrieval-Augmented Generation) toolkit with Milvus integration

Project description

RagXO 🚀

License: MIT · Python 3.8+

RagXO extends the capabilities of traditional RAG (Retrieval-Augmented Generation) systems by providing a unified way to package, version, and deploy your entire RAG pipeline with LLM integration. Export your complete system—including embedding functions, preprocessing steps, vector store, and LLM configurations—into a single, portable artifact.

Features ✨

  • Complete RAG Pipeline: Package your entire RAG system into a versioned artifact
  • LLM Integration: Built-in support for OpenAI models
  • Flexible Embedding: Compatible with any embedding function (Sentence Transformers, OpenAI, etc.)
  • Custom Preprocessing: Chain multiple preprocessing steps
  • Vector Store Integration: Built-in Milvus support
  • System Prompts: Include and version your system prompts

Installation 🛠️

pip install ragxo

Quick Start 🚀

from ragxo import Ragxo, Document
from openai import OpenAI
client = OpenAI()

def get_openai_embeddings(text: str) -> list[float]:
    response = client.embeddings.create(
        input=text,
        model="text-embedding-ada-002"
    )
    return response.data[0].embedding

def preprocess_text(text: str) -> str:
    return text.lower()

# Initialize and configure RagXO
ragxo = Ragxo(dimension=1536)  # must match the embedding size; text-embedding-ada-002 returns 1536-dim vectors
ragxo.add_preprocess(preprocess_text)
ragxo.add_embedding_fn(get_openai_embeddings)

# Add system prompt and model
ragxo.add_system_prompt("You are a helpful assistant.")
ragxo.add_model("gpt-4o-mini")

# Create and index documents
documents = [
    Document(
        text="Sample document for indexing",
        metadata={"source": "example"},
        id=1
    )
]
ragxo.index(documents)

# Export the pipeline
ragxo.export("my_rag_v1")

# Load and use elsewhere
loaded_ragxo = Ragxo.load("my_rag_v1")

# Query and generate response
similar_docs = loaded_ragxo.query("sample query")
llm_response = loaded_ragxo.generate_llm_response("What can you tell me about the sample?")

Usage Guide 📚

Creating Documents

from ragxo import Document

doc = Document(
    text="Your document content here",
    metadata={"source": "wiki", "category": "science"},
    id=1
)
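For a larger corpus you can build the documents in a loop. This sketch uses a stand-in dataclass with the same fields as `ragxo.Document` so it runs without the library installed; swap in the real import in practice:

```python
from dataclasses import dataclass

@dataclass
class Document:  # stand-in mirroring ragxo.Document's fields (text, metadata, id)
    text: str
    metadata: dict
    id: int

texts = ["First passage.", "Second passage.", "Third passage."]
docs = [
    Document(text=t, metadata={"source": "wiki"}, id=i)
    for i, t in enumerate(texts)
]
print(len(docs))  # 3
```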

Adding Preprocessing Steps

import re

def remove_special_chars(text: str) -> str:
    return re.sub(r'[^a-zA-Z0-9\s]', '', text)

def lowercase(text: str) -> str:
    return text.lower()

ragxo.add_preprocess(remove_special_chars)
ragxo.add_preprocess(lowercase)
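Preprocessing steps run in the order they were added. This self-contained sketch (independent of ragxo) shows the same two functions chained by hand, which is useful for checking what the pipeline will actually feed the embedder:

```python
import re

def remove_special_chars(text: str) -> str:
    return re.sub(r'[^a-zA-Z0-9\s]', '', text)

def lowercase(text: str) -> str:
    return text.lower()

# Apply each step in order, exactly as add_preprocess chains them
steps = [remove_special_chars, lowercase]

def run_pipeline(text: str) -> str:
    for step in steps:
        text = step(text)
    return text

print(run_pipeline("Hello, RAG-World!"))  # hello ragworld
```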

Custom Embedding Functions

# Using SentenceTransformers
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('all-MiniLM-L6-v2')

def get_embeddings(text: str) -> list[float]:
    return model.encode(text).tolist()

ragxo.add_embedding_fn(get_embeddings)

# Or using OpenAI
from openai import OpenAI
client = OpenAI()

def get_openai_embeddings(text: str) -> list[float]:
    response = client.embeddings.create(
        input=text,
        model="text-embedding-ada-002"
    )
    return response.data[0].embedding

ragxo.add_embedding_fn(get_openai_embeddings)
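Whichever embedding function you choose, the `dimension` passed to `Ragxo` must match its output length (for example, all-MiniLM-L6-v2 produces 384-dim vectors and text-embedding-ada-002 produces 1536-dim vectors). One way to avoid hard-coding the number is to infer it from a probe string; the embedding function below is a stand-in returning a fixed-size dummy vector:

```python
def get_embeddings(text: str) -> list[float]:
    # stand-in for a real model call; returns a 384-dimensional dummy vector
    return [0.0] * 384

# Infer the dimension from a probe string instead of hard-coding it
dimension = len(get_embeddings("probe"))
print(dimension)  # 384
# then: ragxo = Ragxo(dimension=dimension)
```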

LLM Configuration

# Set system prompt
ragxo.add_system_prompt("""
You are a helpful assistant. Use the provided context to answer questions accurately.
If you're unsure about something, please say so.
""")

# Set LLM model
ragxo.add_model("gpt-4")

Export and Load

# Export your RAG pipeline
ragxo.export("rag_pipeline_v1")

# Load it elsewhere
loaded_ragxo = Ragxo.load("rag_pipeline_v1")

Best Practices 💡

  1. Version Your Exports: Use semantic versioning for your exports:
ragxo.export("my_rag_v1.0.0")
  2. Validate After Loading: Always test your loaded pipeline:
loaded_ragxo = Ragxo.load("my_rag")
try:
    # Test similarity search
    similar_docs = loaded_ragxo.query("test query")
    # Test LLM generation
    llm_response = loaded_ragxo.generate_llm_response("test question")
    print("Pipeline loaded successfully!")
except Exception as e:
    print(f"Error loading pipeline: {e}")
  3. Document Your Pipeline Configuration: Keep track of your setup:
pipeline_config = {
    "preprocessing_steps": ["remove_special_chars", "lowercase"],
    "embedding_model": "all-MiniLM-L6-v2",
    "llm_model": "gpt-4",
    "dimension": 384
}
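One way to keep that configuration alongside the exported artifact is a JSON sidecar file. A minimal sketch (the file name is illustrative, and a temporary directory stands in for your export location):

```python
import json
import tempfile
from pathlib import Path

pipeline_config = {
    "preprocessing_steps": ["remove_special_chars", "lowercase"],
    "embedding_model": "all-MiniLM-L6-v2",
    "llm_model": "gpt-4",
    "dimension": 384,
}

# Write the config next to the export, then read it back to confirm the round trip
config_path = Path(tempfile.mkdtemp()) / "rag_pipeline_v1.config.json"
config_path.write_text(json.dumps(pipeline_config, indent=2))
loaded = json.loads(config_path.read_text())
print(loaded == pipeline_config)  # True
```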

License 📝

This project is licensed under the MIT License - see the LICENSE file for details.

Contributing 🤝

Contributions are welcome! Please feel free to submit a Pull Request.


Download files

Download the file for your platform.

Source Distribution

ragxo-0.1.3.tar.gz (4.5 kB)

Uploaded Source

Built Distribution


ragxo-0.1.3-py3-none-any.whl (4.7 kB)

Uploaded Python 3

File details

Details for the file ragxo-0.1.3.tar.gz.

File metadata

  • Download URL: ragxo-0.1.3.tar.gz
  • Upload date:
  • Size: 4.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.5 CPython/3.13.1 Darwin/24.1.0

File hashes

Hashes for ragxo-0.1.3.tar.gz:

  • SHA256: e54c1286e87e157482d3a99ab62ac8c794e3df2ed0060cc95a3ab8ae3e48f6f9
  • MD5: 9569a6f2c74b135a032249f4b078de6b
  • BLAKE2b-256: 8a0199c0b931b9b7063c680966ded92490604cd501b57afd756da5915a7b2764


File details

Details for the file ragxo-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: ragxo-0.1.3-py3-none-any.whl
  • Upload date:
  • Size: 4.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.5 CPython/3.13.1 Darwin/24.1.0

File hashes

Hashes for ragxo-0.1.3-py3-none-any.whl:

  • SHA256: 696444837b9e8e84d52fa0bd4d586052df8c4c99c26905f5209c881084c5f073
  • MD5: 748fd4c6fdf2001b5431901fad82823a
  • BLAKE2b-256: 1c89f87079a41339925fda9cc4a1bcf90d5ac3860b84ef7b9b72888131c5ba60

