RAGScore
Generate high-quality QA datasets to evaluate your RAG systems
RAGScore automatically generates question-answer pairs from your documents, which you can then use to benchmark and evaluate your RAG (Retrieval-Augmented Generation) systems.
Features
- Multi-format support - PDF, TXT, Markdown, HTML
- Multi-language - English and Chinese out of the box
- Multi-provider - OpenAI, DashScope (Qwen), or any OpenAI-compatible API
- Difficulty levels - Easy, medium, and hard questions
- Simple CLI - Easy command-line interface
- Fast indexing - FAISS-powered vector search
Quick Start
Installation
# Basic installation through pypi
pip install ragscore
# Editable installation from source
python -m pip install -e .
# With OpenAI support
pip install ragscore[openai]
# With DashScope support (Chinese users)
pip install ragscore[dashscope]
# All providers
pip install ragscore[all]
Note: On first run, RAGScore automatically downloads required NLTK data (~35MB). This only happens once.
Setup API Key
# For OpenAI
export OPENAI_API_KEY="your-openai-key"
# For DashScope (Alibaba Cloud)
export DASHSCOPE_API_KEY="your-dashscope-key"
Generate QA Pairs
# Place documents in data/docs/, then:
ragscore generate
Output
Generated QA pairs are saved to output/generated_qas.jsonl:
{
  "id": "abc123",
  "question": "What is RAG?",
  "answer": "RAG (Retrieval-Augmented Generation) combines information retrieval with text generation...",
  "difficulty": "easy",
  "source_path": "docs/rag_intro.pdf"
}
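The output is JSON Lines: one QA object per line. A minimal loader sketch in plain Python (the path argument would be the default output file shown above; the helper name is illustrative, not part of RAGScore's API):

```python
import json

def load_qa_pairs(path):
    """Read generated QA pairs from a JSON Lines file, one object per line."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # tolerate blank lines
                pairs.append(json.loads(line))
    return pairs
```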
Usage
Command Line
# Generate QA pairs from documents
ragscore generate
# Force re-indexing of documents
ragscore generate --force-reindex
# Use specific provider
ragscore generate --provider openai --model gpt-4o
Python API
from ragscore.pipeline import run_pipeline
from ragscore.data_processing import read_docs
from ragscore.llm import generate_qa_for_chunk
# Run full pipeline
run_pipeline(force_reindex=True)
# Or use individual components
docs = read_docs(dir_path="./my_docs")
for doc in docs:
    qas = generate_qa_for_chunk(doc["text"], difficulty="medium", n=5)
    print(qas)
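Once QA pairs are generated, you can score your own RAG system against them. The sketch below is not part of RAGScore's API: `answer_fn` stands in for whatever callable queries your RAG stack, and token-level F1 is just one common QA metric to start with.

```python
def token_f1(prediction, reference):
    """Token-level F1 between a predicted and a reference answer."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    ref_pool = list(ref)
    common = 0
    for tok in pred:
        if tok in ref_pool:       # count each reference token at most once
            ref_pool.remove(tok)
            common += 1
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

def benchmark(qa_pairs, answer_fn):
    """Average token F1 of answer_fn over a list of generated QA dicts."""
    scores = [token_f1(answer_fn(qa["question"]), qa["answer"]) for qa in qa_pairs]
    return sum(scores) / len(scores)
```

For production-grade evaluation you would likely swap token F1 for semantic similarity or an LLM judge, but the loop structure stays the same.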
Configuration
Create a .env file or set environment variables:
# LLM Provider (auto-detected from available API keys)
DASHSCOPE_API_KEY="your-key" # For DashScope/Qwen
OPENAI_API_KEY="your-key" # For OpenAI
# Optional: Custom settings
RAGSCORE_CHUNK_SIZE=512
RAGSCORE_QUESTIONS_PER_CHUNK=5
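The provider auto-detection mentioned above can be pictured as a simple environment check. This is an illustrative sketch only, not RAGScore's actual logic, and the precedence order is an assumption:

```python
import os

def detect_provider():
    """Return a provider name based on which API key is set.

    Illustrative sketch: the real auto-detection (and its precedence
    order) lives inside RAGScore.
    """
    if os.getenv("OPENAI_API_KEY"):
        return "openai"
    if os.getenv("DASHSCOPE_API_KEY"):
        return "dashscope"
    return None
```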
Supported LLM Providers
RAGScore works with any LLM provider - use your own API keys!
| Provider | Models | Environment Variable |
|---|---|---|
| OpenAI | gpt-4o, gpt-4o-mini, gpt-3.5-turbo | OPENAI_API_KEY |
| Anthropic | claude-3-opus, claude-3-sonnet, claude-3-haiku | ANTHROPIC_API_KEY |
| Groq | llama-3.1-70b, mixtral (ultra fast!) | GROQ_API_KEY |
| Together AI | llama-3, mistral, many open models | TOGETHER_API_KEY |
| Grok (xAI) | grok-beta | XAI_API_KEY |
| Mistral | mistral-large, mistral-medium | MISTRAL_API_KEY |
| DeepSeek | deepseek-chat, deepseek-coder | DEEPSEEK_API_KEY |
| DashScope | qwen-turbo, qwen-plus, qwen-max | DASHSCOPE_API_KEY |
| Ollama | llama2, mistral, codellama (local!) | No key needed |
| Custom | Any OpenAI-compatible endpoint | LLM_BASE_URL |
Using Ollama (Free, Local)
# Install Ollama: https://ollama.ai
ollama pull llama2
ollama serve
# RAGScore auto-detects Ollama
ragscore generate
Using Custom Endpoints
# Any OpenAI-compatible API (vLLM, LocalAI, etc.)
export LLM_BASE_URL="http://localhost:8000/v1"
export LLM_MODEL="my-model"
ragscore generate
Project Structure
ragscore/
├── data/docs/              # Place your documents here
├── output/                 # Generated QA pairs and index
│   ├── generated_qas.jsonl
│   ├── index.faiss
│   └── meta.json
└── src/ragscore/           # Source code
    ├── cli.py              # Command-line interface
    ├── pipeline.py         # Main pipeline
    ├── data_processing.py
    ├── vector_store.py
    ├── llm.py
    └── providers/          # LLM provider implementations
RAGScore Pro (Coming Soon)
Need to evaluate your RAG system? RAGScore Pro offers:
- Hallucination Detection - Catch when your RAG makes things up
- Citation Quality Scoring - Verify source attribution accuracy
- Multi-dimensional Scoring - Accuracy, relevance, completeness
- Executive Reports - Excel reports for stakeholders
- API Access - Integrate evaluation into your CI/CD
Development
# Clone repository
git clone https://github.com/HZYAI/RagScore.git
cd RagScore
# Install with dev dependencies
pip install -e ".[dev,all]"
# Run tests
pytest
# Run linting
ruff check src/
black --check src/
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
License
Apache 2.0 License - see LICENSE for details.
Made with ❤️ for the RAG community