
ragpackai 📦

Portable Retrieval-Augmented Generation Library

ragpackai is a Python library for creating, saving, loading, and querying portable RAG (Retrieval-Augmented Generation) packs. It allows you to bundle documents, embeddings, vectorstores, and configuration into a single .rag file that can be easily shared and deployed across different environments.

✨ Features

  • 🚀 Portable RAG Packs: Bundle everything into a single .rag file
  • 🔄 Provider Flexibility: Support for OpenAI, Google, Groq, Cerebras, and HuggingFace
  • 🔒 Encryption Support: Optional AES-GCM encryption for sensitive data
  • 🎯 Runtime Overrides: Change embedding/LLM providers without rebuilding
  • 📚 Multiple Formats: Support for PDF, TXT, MD, and more
  • 🛠️ CLI Tools: Command-line interface for easy pack management
  • 🔧 Lazy Loading: Efficient dependency management with lazy imports

🚀 Quick Start

Installation

# Core installation
pip install ragpackai

# With optional providers
pip install ragpackai[google]     # Google Vertex AI
pip install ragpackai[groq]       # Groq
pip install ragpackai[cerebras]   # Cerebras
pip install ragpackai[all]        # All providers

Basic Usage

from ragpackai import ragpackai

# Create a pack from documents
pack = ragpackai.from_files([
    "docs/manual.pdf", 
    "notes.txt",
    "knowledge_base/"
])

# Save the pack
pack.save("my_knowledge.rag")

# Load and query
pack = ragpackai.load("my_knowledge.rag")

# Simple retrieval (no LLM)
results = pack.query("How do I install this?", top_k=3)
print(results)

# Question answering with LLM
answer = pack.ask("What are the main features?")
print(answer)

Provider Overrides

# Load with different providers
pack = ragpackai.load(
    "my_knowledge.rag",
    embedding_config={
        "provider": "google", 
        "model_name": "textembedding-gecko"
    },
    llm_config={
        "provider": "groq", 
        "model_name": "mixtral-8x7b-32768"
    }
)

answer = pack.ask("Explain the architecture")

๐Ÿ› ๏ธ Command Line Interface

Create a RAG Pack

# From files and directories
ragpackai create docs/ notes.txt --output knowledge.rag

# With custom settings
ragpackai create docs/ \
  --embedding-provider openai \
  --embedding-model text-embedding-3-large \
  --chunk-size 1024 \
  --encrypt-key mypassword

Query and Ask

# Simple retrieval
ragpackai query knowledge.rag "How to install?"

# Question answering
ragpackai ask knowledge.rag "What are the requirements?" \
  --llm-provider openai \
  --llm-model gpt-4o

# With provider overrides
ragpackai ask knowledge.rag "Explain the API" \
  --embedding-provider google \
  --embedding-model textembedding-gecko \
  --llm-provider groq \
  --llm-model mixtral-8x7b-32768

Pack Information

ragpackai info knowledge.rag

๐Ÿ—๏ธ Architecture

.rag File Structure

A .rag file is a structured zip archive:

mypack.rag
├── metadata.json          # Pack metadata
├── config.json            # Default configurations
├── documents/             # Original documents
│   ├── doc1.txt
│   └── doc2.pdf
└── vectorstore/           # Chroma vectorstore
    ├── chroma.sqlite3
    └── ...
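Because a .rag pack is a plain zip archive, any zip tool can inspect one. A minimal sketch using only the standard library's zipfile module (the mock entries below mirror the layout above; a real archive is produced by pack.save()):

```python
import io
import json
import zipfile

# Build a minimal mock pack in memory to demonstrate the layout.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("metadata.json", json.dumps({"name": "mypack"}))
    zf.writestr("config.json", json.dumps({"embedding": "openai:text-embedding-3-small"}))
    zf.writestr("documents/doc1.txt", "hello")

# Inspect the pack's contents like any other zip archive.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    meta = json.loads(zf.read("metadata.json"))

print(names)         # ['metadata.json', 'config.json', 'documents/doc1.txt']
print(meta["name"])  # mypack
```

The same approach works on a real pack (as long as it is not encrypted): open it with zipfile and read metadata.json or config.json directly.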

Supported Providers

Embedding Providers:

  • openai: text-embedding-3-small, text-embedding-3-large
  • huggingface: all-MiniLM-L6-v2, all-mpnet-base-v2 (offline)
  • google: textembedding-gecko

LLM Providers:

  • openai: gpt-4o, gpt-4o-mini, gpt-3.5-turbo
  • google: gemini-pro, gemini-1.5-flash
  • groq: mixtral-8x7b-32768, llama2-70b-4096
  • cerebras: llama3.1-8b, llama3.1-70b

📖 API Reference

ragpackai Class

ragpackai.from_files(files, embed_model="openai:text-embedding-3-small", **kwargs)

Create a RAG pack from files.

Parameters:

  • files: List of file paths or directories
  • embed_model: Embedding model in format "provider:model"
  • chunk_size: Text chunk size (default: 512)
  • chunk_overlap: Chunk overlap (default: 50)
  • name: Pack name
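To make chunk_size and chunk_overlap concrete, here is a simplified character-based splitter. This is an illustration of the parameters' semantics only, not ragpackai's actual text splitter, which may split on token or sentence boundaries:

```python
def split_text(text: str, chunk_size: int = 512, chunk_overlap: int = 50):
    """Split text into fixed-size chunks where consecutive chunks
    share chunk_overlap characters of context."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("abcdefghij", chunk_size=4, chunk_overlap=2)
print(chunks)  # ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

The overlap keeps a little shared context at chunk boundaries so that a sentence cut in half is still retrievable from either side.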

ragpackai.load(path, embedding_config=None, llm_config=None, **kwargs)

Load a RAG pack from file.

Parameters:

  • path: Path to .rag file
  • embedding_config: Override embedding configuration
  • llm_config: Override LLM configuration
  • reindex_on_mismatch: Rebuild vectorstore if dimensions mismatch
  • decrypt_key: Decryption password

pack.save(path, encrypt_key=None)

Save pack to .rag file.

pack.query(question, top_k=3)

Retrieve relevant chunks (no LLM).

pack.ask(question, top_k=4, temperature=0.0)

Ask a question and generate an answer with the configured LLM.

Provider Wrappers

# Direct provider access
from ragpackai.embeddings import OpenAI, HuggingFace, Google
from ragpackai.llms import OpenAIChat, GoogleChat, GroqChat

# Create embedding provider
embeddings = OpenAI(model_name="text-embedding-3-large")
vectors = embeddings.embed_documents(["Hello world"])

# Create LLM provider
llm = OpenAIChat(model_name="gpt-4o", temperature=0.7)
response = llm.invoke("What is AI?")
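Under the hood, query-style retrieval amounts to ranking stored vectors by similarity to the query embedding. A minimal cosine-similarity top-k sketch, with toy 2-D vectors standing in for real embeddings (ragpackai delegates this to its Chroma vectorstore):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, docs, k=3):
    """Return the texts of the k documents most similar to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vector"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

docs = [
    {"text": "install with pip",  "vector": [1.0, 0.0]},
    {"text": "api reference",     "vector": [0.0, 1.0]},
    {"text": "pip install guide", "vector": [0.9, 0.1]},
]
results = top_k([1.0, 0.2], docs, k=2)
print(results)  # ['pip install guide', 'install with pip']
```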

🔧 Configuration

Environment Variables

# API Keys
export OPENAI_API_KEY="your-key"
export GOOGLE_CLOUD_PROJECT="your-project"
export GROQ_API_KEY="your-key"
export CEREBRAS_API_KEY="your-key"

# Optional
export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account.json"

Configuration Files

# Custom embedding config
embedding_config = {
    "provider": "huggingface",
    "model_name": "all-mpnet-base-v2",
    "device": "cuda"  # Use GPU
}

# Custom LLM config
llm_config = {
    "provider": "openai",
    "model_name": "gpt-4o",
    "temperature": 0.7,
    "max_tokens": 2000
}

🔒 Security

Encryption

ragpackai supports AES-GCM encryption for sensitive data:

# Save with encryption
pack.save("sensitive.rag", encrypt_key="strong-password")

# Load encrypted pack
pack = ragpackai.load("sensitive.rag", decrypt_key="strong-password")
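An AES-GCM key is typically derived from the password with a key-derivation function rather than used raw. As an illustration of that general pattern (the KDF and parameters ragpackai actually uses are internal and may differ), here is password-to-key stretching with PBKDF2 from the standard library:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Stretch a password into a 256-bit key suitable for AES-256-GCM."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

salt = os.urandom(16)  # random per pack; stored alongside the ciphertext
key = derive_key("strong-password", salt)
print(len(key))  # 32 bytes -> AES-256
```

The actual AES-GCM encryption step would then use this key (for example via the `cryptography` package's AESGCM class); the salt must be stored with the ciphertext so the same key can be re-derived at load time.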

Best Practices

  • Use strong passwords for encryption
  • Store API keys securely in environment variables
  • Validate .rag files before loading in production
  • Consider network security when sharing packs

🧪 Examples

See the examples/ directory for complete examples:

  • basic_usage.py - Simple pack creation and querying
  • provider_overrides.py - Using different providers
  • encryption_example.py - Working with encrypted packs
  • cli_examples.sh - Command-line usage examples

๐Ÿค Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🆘 Support

๐Ÿ™ Acknowledgments

Built with:
