RAGPack 📦

Portable Retrieval-Augmented Generation Library

RAGPack is a Python library for creating, saving, loading, and querying portable RAG (Retrieval-Augmented Generation) packs. It allows you to bundle documents, embeddings, vectorstores, and configuration into a single .rag file that can be easily shared and deployed across different environments.

✨ Features

  • 🚀 Portable RAG Packs: Bundle everything into a single .rag file
  • 🔄 Provider Flexibility: Support for OpenAI, Google, Groq, Cerebras, and HuggingFace
  • 🔒 Encryption Support: Optional AES-GCM encryption for sensitive data
  • 🎯 Runtime Overrides: Change embedding/LLM providers without rebuilding
  • 📚 Multiple Formats: Support for PDF, TXT, MD, and more
  • 🛠️ CLI Tools: Command-line interface for easy pack management
  • 🔧 Lazy Loading: Efficient dependency management with lazy imports

🚀 Quick Start

Installation

# Core installation
pip install ragpack

# With optional providers
pip install ragpack[google]     # Google Vertex AI
pip install ragpack[groq]       # Groq
pip install ragpack[cerebras]   # Cerebras
pip install ragpack[all]        # All providers

Basic Usage

from ragpack import RAGPack

# Create a pack from documents
pack = RAGPack.from_files([
    "docs/manual.pdf", 
    "notes.txt",
    "knowledge_base/"
])

# Save the pack
pack.save("my_knowledge.rag")

# Load and query
pack = RAGPack.load("my_knowledge.rag")

# Simple retrieval (no LLM)
results = pack.query("How do I install this?", top_k=3)
print(results)

# Question answering with LLM
answer = pack.ask("What are the main features?")
print(answer)

Provider Overrides

# Load with different providers
pack = RAGPack.load(
    "my_knowledge.rag",
    embedding_config={
        "provider": "google", 
        "model_name": "textembedding-gecko"
    },
    llm_config={
        "provider": "groq", 
        "model_name": "mixtral-8x7b-32768"
    }
)

answer = pack.ask("Explain the architecture")

🛠️ Command Line Interface

Create a RAG Pack

# From files and directories
ragpack create docs/ notes.txt --output knowledge.rag

# With custom settings
ragpack create docs/ \
  --embedding-provider openai \
  --embedding-model text-embedding-3-large \
  --chunk-size 1024 \
  --encrypt-key mypassword

Query and Ask

# Simple retrieval
ragpack query knowledge.rag "How to install?"

# Question answering
ragpack ask knowledge.rag "What are the requirements?" \
  --llm-provider openai \
  --llm-model gpt-4o

# With provider overrides
ragpack ask knowledge.rag "Explain the API" \
  --embedding-provider google \
  --embedding-model textembedding-gecko \
  --llm-provider groq \
  --llm-model mixtral-8x7b-32768

Pack Information

ragpack info knowledge.rag

🏗️ Architecture

.rag File Structure

A .rag file is a structured zip archive:

mypack.rag
├── metadata.json          # Pack metadata
├── config.json            # Default configurations
├── documents/             # Original documents
│   ├── doc1.txt
│   └── doc2.pdf
└── vectorstore/           # Chroma vectorstore
    ├── chroma.sqlite3
    └── ...
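Because a pack is just a zip archive, you can peek inside one with Python's standard zipfile module. A minimal sketch (unencrypted packs only; member names follow the layout above):

import json
import zipfile

with zipfile.ZipFile("mypack.rag") as pack:
    # List every member of the archive
    print(pack.namelist())

    # Read the pack metadata directly
    with pack.open("metadata.json") as f:
        print(json.load(f))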

Supported Providers

Embedding Providers:

  • openai: text-embedding-3-small, text-embedding-3-large
  • huggingface: all-MiniLM-L6-v2, all-mpnet-base-v2 (offline)
  • google: textembedding-gecko

LLM Providers:

  • openai: gpt-4o, gpt-4o-mini, gpt-3.5-turbo
  • google: gemini-pro, gemini-1.5-flash
  • groq: mixtral-8x7b-32768, llama2-70b-4096
  • cerebras: llama3.1-8b, llama3.1-70b
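The huggingface embedding provider runs fully offline, which suits portable packs. A minimal sketch using the "provider:model" string format documented in the API reference below (model choice illustrative):

from ragpack import RAGPack

# Build a pack with a local HuggingFace embedding model (no API key required)
pack = RAGPack.from_files(
    ["docs/"],
    embed_model="huggingface:all-MiniLM-L6-v2",
)
pack.save("offline_knowledge.rag")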

📖 API Reference

RAGPack Class

RAGPack.from_files(files, embed_model="openai:text-embedding-3-small", **kwargs)

Create a RAG pack from files.

Parameters:

  • files: List of file paths or directories
  • embed_model: Embedding model in format "provider:model"
  • chunk_size: Text chunk size (default: 512)
  • chunk_overlap: Chunk overlap (default: 50)
  • name: Pack name
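Putting those parameters together, a short sketch (values are illustrative):

from ragpack import RAGPack

# Larger chunks and more overlap for long-form PDFs
pack = RAGPack.from_files(
    ["docs/manual.pdf", "knowledge_base/"],
    embed_model="openai:text-embedding-3-small",
    chunk_size=1024,
    chunk_overlap=100,
    name="product-docs",  # illustrative pack name
)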

RAGPack.load(path, embedding_config=None, llm_config=None, **kwargs)

Load a RAG pack from file.

Parameters:

  • path: Path to .rag file
  • embedding_config: Override embedding configuration
  • llm_config: Override LLM configuration
  • reindex_on_mismatch: Rebuild the vectorstore if embedding dimensions don't match
  • decrypt_key: Decryption password
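For example, loading an encrypted pack while overriding embeddings, and rebuilding the index if the new model's dimensions differ (a sketch; values illustrative):

from ragpack import RAGPack

pack = RAGPack.load(
    "my_knowledge.rag",
    embedding_config={
        "provider": "huggingface",
        "model_name": "all-mpnet-base-v2",
    },
    reindex_on_mismatch=True,       # rebuild vectorstore on dimension mismatch
    decrypt_key="strong-password",  # only needed for encrypted packs
)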

pack.save(path, encrypt_key=None)

Save pack to .rag file.

pack.query(question, top_k=3)

Retrieve relevant chunks (no LLM).

pack.ask(question, top_k=4, temperature=0.0)

Ask question with LLM.
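The two calls compose naturally: use query when you want the raw chunks to post-process yourself, and ask when you want a generated answer. A short sketch (exact return types are an assumption):

# Raw retrieval: top 5 chunks, no LLM call
for chunk in pack.query("installation steps", top_k=5):
    print(chunk)

# Generated answer with a slightly creative temperature
print(pack.ask("Summarize the installation steps", top_k=4, temperature=0.3))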

Provider Wrappers

# Direct provider access
from ragpack.embeddings import OpenAI, HuggingFace, Google
from ragpack.llms import OpenAIChat, GoogleChat, GroqChat

# Create embedding provider
embeddings = OpenAI(model_name="text-embedding-3-large")
vectors = embeddings.embed_documents(["Hello world"])

# Create LLM provider
llm = OpenAIChat(model_name="gpt-4o", temperature=0.7)
response = llm.invoke("What is AI?")

🔧 Configuration

Environment Variables

# API Keys
export OPENAI_API_KEY="your-key"
export GOOGLE_CLOUD_PROJECT="your-project"
export GROQ_API_KEY="your-key"
export CEREBRAS_API_KEY="your-key"

# Optional
export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account.json"
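The same variables can be set from Python (e.g., in a notebook) before a pack is loaded; a minimal sketch:

import os

# Equivalent to the shell exports above
os.environ["OPENAI_API_KEY"] = "your-key"
os.environ["GROQ_API_KEY"] = "your-key"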

Configuration Files

# Custom embedding config
embedding_config = {
    "provider": "huggingface",
    "model_name": "all-mpnet-base-v2",
    "device": "cuda"  # Use GPU
}

# Custom LLM config
llm_config = {
    "provider": "openai",
    "model_name": "gpt-4o",
    "temperature": 0.7,
    "max_tokens": 2000
}
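These dicts plug straight into RAGPack.load as the override arguments shown in the API reference:

from ragpack import RAGPack

# Apply the overrides defined above
pack = RAGPack.load(
    "my_knowledge.rag",
    embedding_config=embedding_config,
    llm_config=llm_config,
)
answer = pack.ask("What are the requirements?")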

🔒 Security

Encryption

RAGPack supports AES-GCM encryption for sensitive data:

# Save with encryption
pack.save("sensitive.rag", encrypt_key="strong-password")

# Load encrypted pack
pack = RAGPack.load("sensitive.rag", decrypt_key="strong-password")

Best Practices

  • Use strong passwords for encryption
  • Store API keys securely in environment variables
  • Validate .rag files before loading in production
  • Consider network security when sharing packs

🧪 Examples

See the examples/ directory for complete examples:

  • basic_usage.py - Simple pack creation and querying
  • provider_overrides.py - Using different providers
  • encryption_example.py - Working with encrypted packs
  • cli_examples.sh - Command-line usage examples

🤝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

