
A RAG system for creating knowledge bases from different document formats

Project description

Ragora


Build smarter, grounded, and transparent AI with Ragora.

Ragora is an open-source framework for building Retrieval-Augmented Generation (RAG) systems that connect your language models to real, reliable knowledge. It provides a clean, composable interface for managing knowledge bases, document retrieval, and grounding pipelines, so your AI can reason with context instead of guesswork.

The name Ragora blends RAG with the ancient Greek Agora, the public square where ideas were exchanged, debated, and refined. In the same spirit, Ragora is the meeting place of data and dialogue, where your information and your AI come together to think.

✨ Key Features

  • 📄 Specialized Document Processing: Native support for LaTeX parsing and email handling, with more formats coming
  • 🏗️ Clean Architecture: Three-layer design (DatabaseManager → VectorStore → Retriever) for maintainability
  • 🔍 Flexible Search: Vector, keyword, and hybrid search modes for optimal retrieval
  • 🧩 Composable Components: Use high-level APIs or build custom pipelines with low-level components
  • ⚡ Performance Optimized: Batch processing, GPU acceleration, and efficient vector search with Weaviate
  • 🔒 Privacy-First: Run completely local with sentence-transformers and Weaviate

🚀 Installation

pip install ragora

Prerequisites

You need a Weaviate instance running. Download the pre-configured Ragora database server:

# Download from GitHub releases
wget https://github.com/vahidlari/aiapps/releases/latest/download/ragora-database-server.tar.gz

# Extract and start
tar -xzf ragora-database-server.tar.gz
cd ragora-database-server
./database-manager.sh start

The database server has no dependencies beyond Docker and runs on Windows, macOS, and Linux.

🎯 Quick Start

from ragora import KnowledgeBaseManager

# Initialize the knowledge base manager
kbm = KnowledgeBaseManager(
    weaviate_url="http://localhost:8080",
    class_name="Documents",
    embedding_model="all-mpnet-base-v2"
)

# Process documents
document_paths = ["paper1.tex", "paper2.tex"]
chunk_ids = kbm.process_documents(document_paths)
print(f"Processed {len(chunk_ids)} chunks")

# Query the knowledge base
results = kbm.query(
    "What is quantum entanglement?",
    search_type="hybrid",
    top_k=5
)

# Display results
for result in results['chunks']:
    print(f"Score: {result['similarity_score']:.3f}")
    print(f"Content: {result['content'][:200]}...\n")

📚 Core Concepts

Three-Layer Architecture

Ragora uses a clean three-layer architecture that separates concerns:

  1. DatabaseManager (Infrastructure Layer): Low-level Weaviate operations
  2. VectorStore (Storage Layer): Document storage and CRUD operations
  3. Retriever (Search Layer): Search algorithms and query processing

This design provides flexibility and testability, and makes it easy to extend or swap components.
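
To make the layering concrete, here is a self-contained sketch of the pattern. The class names echo Ragora's layers, but these are illustrative stand-ins, not the library's actual implementations: each layer depends only on the one below it, so any layer can be swapped or faked in tests.

```python
class FakeDatabaseManager:
    """Infrastructure layer stand-in: raw storage operations."""
    def __init__(self):
        self.rows = {}

    def put(self, key, value):
        self.rows[key] = value

    def get_all(self):
        return list(self.rows.values())


class VectorStoreLayer:
    """Storage layer stand-in: writes chunks through the database manager."""
    def __init__(self, db):
        self.db = db

    def store_chunk(self, chunk_id, text):
        self.db.put(chunk_id, text)


class RetrieverLayer:
    """Search layer stand-in: reads through the database manager."""
    def __init__(self, db):
        self.db = db

    def search_keyword(self, term):
        return [t for t in self.db.get_all() if term in t]


db = FakeDatabaseManager()
VectorStoreLayer(db).store_chunk("c1", "quantum entanglement basics")
print(RetrieverLayer(db).search_keyword("entanglement"))
# → ['quantum entanglement basics']
```

Because the search layer only sees the database manager's interface, a unit test can substitute `FakeDatabaseManager` without a running Weaviate instance.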

Document Processing

Process LaTeX documents with specialized handling:

from ragora.core import DocumentPreprocessor, DataChunker

# Parse LaTeX with citations
preprocessor = DocumentPreprocessor()
document = preprocessor.parse_latex(
    "paper.tex",
    bibliography_path="references.bib"
)

# Chunk with configurable size and overlap
chunker = DataChunker(chunk_size=768, overlap=100)
chunks = chunker.chunk_text(document.content)
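
`DataChunker`'s exact splitting rules are library-defined; to make the size/overlap semantics concrete, here is a generic sliding-window chunker (not Ragora's implementation) showing how consecutive chunks share an overlap region:

```python
def sliding_window_chunks(text, chunk_size=768, overlap=100):
    """Generic sliding-window chunking: each chunk starts
    chunk_size - overlap characters after the previous one,
    so adjacent chunks share `overlap` characters of context."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = sliding_window_chunks("a" * 2000, chunk_size=768, overlap=100)
print(len(chunks))  # → 3 (starts at 0, 668, 1336)
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either side, at the cost of some duplicated storage.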

🔍 Search Modes

Ragora supports three search strategies:

# Semantic search (best for conceptual queries)
results = kbm.query("explain machine learning", search_type="similar")

# Keyword search (best for exact terms)
results = kbm.query("Schrödinger equation", search_type="keyword")

# Hybrid search (recommended - combines both)
results = kbm.query("neural networks", search_type="hybrid", alpha=0.7)
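
The `alpha` parameter weights the two signals. Weaviate's actual fusion algorithm is implementation-defined, but the intuition is a weighted combination of the (normalized) semantic and keyword scores, which this illustrative sketch captures:

```python
def hybrid_score(vector_score, keyword_score, alpha=0.7):
    """Linear fusion: alpha weights the vector (semantic) score,
    1 - alpha weights the keyword score. Inputs are assumed
    normalized to [0, 1]."""
    return alpha * vector_score + (1 - alpha) * keyword_score

# A semantically strong but keyword-weak match still ranks well at alpha=0.7
print(round(hybrid_score(0.9, 0.2, alpha=0.7), 2))  # → 0.69
# alpha=0.0 ignores the vector score entirely
print(round(hybrid_score(0.9, 0.2, alpha=0.0), 2))  # → 0.2
```

Higher `alpha` favors conceptual similarity; lower `alpha` favors exact term matches.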

🎯 Use Cases

  • 📖 Academic Research: Build knowledge bases from scientific papers and LaTeX documents
  • 📝 Documentation Search: Create searchable knowledge bases from technical documentation
  • 🤖 AI Assistants: Ground LLM responses in your specific domain knowledge
  • 💬 Question Answering: Build Q&A systems over your document collections
  • 🔬 Literature Review: Efficiently search and synthesize information from research papers

🔧 Advanced Usage

Custom Pipeline

Build custom RAG pipelines with low-level components:

from ragora.core import (
    DatabaseManager,
    VectorStore,
    Retriever,
    EmbeddingEngine
)

# Initialize components
db_manager = DatabaseManager(url="http://localhost:8080")
vector_store = VectorStore(db_manager, class_name="MyDocs")
retriever = Retriever(db_manager, class_name="MyDocs")
embedder = EmbeddingEngine(model_name="all-mpnet-base-v2")

# Build custom workflow
embeddings = embedder.embed_batch(texts)
vector_store.store_chunks(chunks)
results = retriever.search_hybrid(query, alpha=0.7, top_k=10)
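
A custom pipeline usually ends by stitching the retrieved chunks into a grounded prompt for the LLM. The helper below is a sketch, not a Ragora API; it assumes chunks shaped like the Quick Start results (dicts with `content` and `similarity_score` keys):

```python
def build_grounded_prompt(question, chunks):
    """Assemble retrieved chunks into a grounded prompt.
    `chunks`: list of dicts with 'content' and 'similarity_score' keys."""
    context = "\n\n".join(
        f"[source {i + 1} | score {c['similarity_score']:.2f}]\n{c['content']}"
        for i, c in enumerate(chunks)
    )
    return (
        "Answer using only the context below. "
        "Cite sources as [source N].\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "What is entanglement?",
    [{"content": "Entanglement links particle states.", "similarity_score": 0.91}],
)
print(prompt.splitlines()[0])
```

Numbering the sources lets the model cite which chunk supports each claim, which is what makes the response auditable.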

Multiple Search Strategies

Compare different search approaches:

# Semantic search for conceptual similarity
semantic = retriever.search_similar(
    "artificial intelligence applications",
    top_k=5
)

# Keyword search for exact matches
keyword = retriever.search_keyword(
    "neural network architecture",
    top_k=5
)

# Hybrid search with custom weighting
hybrid = retriever.search_hybrid(
    "deep learning models",
    alpha=0.7,  # 70% vector, 30% keyword
    top_k=5
)

# Search with metadata filters
filtered = retriever.search_with_filter(
    "quantum mechanics",
    filters={"author": "Feynman", "year": 1965},
    top_k=5
)

📖 Documentation & Examples

  • Getting Started Guide: Detailed installation and setup guide
  • API Reference: Complete API documentation
  • Examples Directory: Working code examples
    • advanced_usage.py: Advanced features and custom pipelines
    • basic_usage.py: Basic usage examples
    • email_usage_examples.py: Email integration examples

📊 Requirements

  • Python: 3.11 or higher
  • Weaviate: 1.22.0 or higher (for vector storage)
  • Dependencies: See requirements.txt

🤝 Contributing

We welcome contributions! Please see our Contributing Guidelines for:

  • Setting up your development environment
  • Code style and standards
  • Writing tests
  • Submitting pull requests

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

📮 Contact

For questions, feedback, or collaboration opportunities:

  • Open an issue on GitHub
  • Start a discussion in GitHub Discussions
  • Contact the maintainers directly

Build smarter, grounded, and transparent AI with Ragora.

Project details


Download files

Download the file for your platform.

Source Distribution

ragora-1.0.0.tar.gz (122.2 kB)


Built Distribution


ragora-1.0.0-py3-none-any.whl (63.1 kB)


File details

Details for the file ragora-1.0.0.tar.gz.

File metadata

  • Download URL: ragora-1.0.0.tar.gz
  • Upload date:
  • Size: 122.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for ragora-1.0.0.tar.gz:

  • SHA256: fdae743a67d99ee09b6e18a7aa4ffb4cd0aeda87d30d4a1ce440316793c3340f
  • MD5: 15b44e1b2f1bc07e218f2c0512107fab
  • BLAKE2b-256: 3f64edf09e3212019d51e3b9141762d0fd53dda0b9c08b9488fa1173ba46929b


File details

Details for the file ragora-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: ragora-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 63.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for ragora-1.0.0-py3-none-any.whl:

  • SHA256: 5e66f779faeafbdfd338b04009fb6fe3fdbb3685e01b5e5bfda19f507089590c
  • MD5: 157421fb7388fa0c072beb0cf67e9fe9
  • BLAKE2b-256: 29889865e751ead4be23a46db5f3a1f891a2b033c2fd8c4d50ea6880141c6499

