
Vi-RAG Framework

Vietnamese Retrieval-Augmented Generation Framework

A comprehensive RAG framework designed specifically for Vietnamese, with support for PDF, TXT, and DOCX documents, hierarchical chunking, and semantic search.

Python 3.8+ License: MIT

🌟 Key Features

  • Multi-format support: PDF, TXT, DOCX
  • Smart chunking: hierarchical parent-child chunks with overlap
  • Vector search: Qdrant integration for semantic search
  • Gemini integration: uses the Gemini API for embedding and generation
  • In-memory caching: caches DocumentNode objects to speed up processing
  • Auto-chunking: loads and chunks documents in a single step
  • Vietnamese-native: designed and optimized for Vietnamese

📁 Project Structure

Vi-RAG/
├── src/
│   └── vi_rag/                  # Main package
│       ├── __init__.py
│       ├── core.py              # Core RAG functionality
│       ├── utils.py             # Utility functions
│       └── py.typed             # Type hints marker
│
├── test/                        # Tests
│   └── test_basic.py
│
├── pyproject.toml               # Project configuration
├── README.md                    # This file
├── LICENSE                      # MIT License
└── .gitignore

Package Installation

# Install in development mode
pip install -e .

# Install with development dependencies
pip install -e ".[dev]"

# Install with evaluation tools
pip install -e ".[evaluation]"

# Install all optional dependencies
pip install -e ".[all]"

🚀 Quick Start

1. Clone the Repository

git clone https://github.com/NOT-erorr/PBL_2025_Vi-RAG_framework.git
cd Vi-RAG

2. Create a Virtual Environment

python -m venv venv

# Windows
venv\Scripts\activate

# Linux/Mac
source venv/bin/activate

3. Install the Package

# Basic installation
pip install -e .

# With development tools
pip install -e ".[dev]"

# With all dependencies
pip install -e ".[all]"

💡 Basic Usage

Example 1: Load and Chunk a Document Automatically

from vi_rag import DocumentLoader

# Auto-chunking (recommended)
loader = DocumentLoader(
    "document.pdf",
    auto_chunk=True,
    parent_size=2000,
    child_size=400,
    overlap=50
)

# Load and chunk in one step
document, parents, children = loader.load_and_chunk()

print(f"Loaded: {document.title}")
print(f"Parent chunks: {len(parents)}")
print(f"Child chunks: {len(children)}")

Example 2: Complete RAG Workflow

from vi_rag.ingestion import DocumentLoader
from vi_rag.models import GeminiEmbeddingModel, GeminiLLMClient
from vi_rag.retrieval import QdrantVectorStore
import uuid

GEMINI_API_KEY = ''  # fill in your credentials
QDRANT_API_KEY = ''
QDRANT_URL = ''

# 1. Load and chunk the document
loader = DocumentLoader("document.pdf", auto_chunk=True)
document, parents, children = loader.load_and_chunk()

# 2. Setup models
embedding_model = GeminiEmbeddingModel(GEMINI_API_KEY, output_dimensionality=768)
llm = GeminiLLMClient(GEMINI_API_KEY, model_name="gemini-2.0-flash-exp")

# 3. Generate embeddings
child_texts = [child['text'] for child in children]
vectors = embedding_model.embed_documents(child_texts)

# 4. Set up and index into the vector store
vector_store = QdrantVectorStore(api_key=QDRANT_API_KEY, url=QDRANT_URL)
vector_store.connect()
vector_store.ensure_collection()

# Add IDs
for child in children:
    child['id'] = str(uuid.uuid4())

vector_store.add_vectors(
    vectors=vectors,
    payloads=children,
    ids=[c['id'] for c in children]
)

# 5. Query and generate an answer
question = "Tài liệu này nói về gì?"
query_vector = embedding_model.embed_query(question)
results = vector_store.search(query_vector, top_k=5)
context = "\n\n".join([r['text'] for r in results])

answer = llm.generate(query=question, context=context)
print(f"Câu hỏi: {question}")
print(f"Trả lời: {answer}")

Example 3: Working with the Document Cache

from vi_rag.ingestion import DocumentLoader

loader = DocumentLoader("document.pdf")

# Check the cache before loading
cached = loader.check_document_loaded()
if cached:
    print("This document was loaded previously!")
    document = cached
else:
    print("Loading new document...")
    document = loader.load()

Example 4: Processing Multiple Documents

from vi_rag.ingestion import DocumentLoader
import uuid

documents = ["doc1.pdf", "doc2.txt", "doc3.docx"]
all_children = []

# Load all documents
for doc_path in documents:
    loader = DocumentLoader(doc_path, auto_chunk=True)
    doc, parents, children = loader.load_and_chunk()
    
    # Add source metadata
    for child in children:
        child['id'] = str(uuid.uuid4())
        child['source_file'] = doc_path
    
    all_children.extend(children)

print(f"Total chunks from all documents: {len(all_children)}")

# Embed and index everything (assumes embedding_model and vector_store from Example 2)
texts = [c['text'] for c in all_children]
vectors = embedding_model.embed_documents(texts)
vector_store.add_vectors(vectors, all_children, [c['id'] for c in all_children])

Example 5: Load a Document Without Auto-Chunking

from vi_rag.ingestion import DocumentLoader, HierarchicalChunker

# Load document only
loader = DocumentLoader("document.pdf", auto_chunk=False)
document, _, _ = loader.load_and_chunk()  # Empty lists returned

# Chunk manually afterwards
chunker = HierarchicalChunker(
    parent_size=3000,  # Custom size
    child_size=500,
    overlap=100
)
parents, children = chunker.build_chunks(document)

Example 6: Query with Filtering

from qdrant_client.models import Filter, FieldCondition, MatchValue

# Search with a filter on the source file
results = vector_store.client.search(
    collection_name=vector_store.collection_name,
    query_vector=query_vector,
    limit=5,
    query_filter=Filter(
        must=[
            FieldCondition(
                key="source_file",
                match=MatchValue(value="important_doc.pdf")
            )
        ]
    )
)

Example 7: Multilingual - Tiếng Việt

from vi_rag.ingestion import DocumentLoader
from vi_rag.models import GeminiLLMClient

# Load Vietnamese document
loader = DocumentLoader("tai_lieu_tieng_viet.pdf", auto_chunk=True)
document, parents, children = loader.load_and_chunk()

# Query in Vietnamese (assumes embedding_model and vector_store from Example 2)
question = "Nội dung chính của tài liệu là gì?"
query_vector = embedding_model.embed_query(question)
results = vector_store.search(query_vector, top_k=5)
context = "\n\n".join([r['text'] for r in results])

# Generate with a Vietnamese instruction
llm = GeminiLLMClient(GEMINI_API_KEY)
answer = llm.generate(
    query=question,
    context=context
)

print(f"Trả lời: {answer}")

Example 8: Batch Processing with Retry

from vi_rag.models import GeminiEmbeddingModel
import time

embedding_model = GeminiEmbeddingModel(GEMINI_API_KEY)

def embed_with_retry(texts, max_retries=3):
    """Embed with retry logic."""
    for attempt in range(max_retries):
        try:
            return embedding_model.embed_documents(texts)
        except Exception as e:
            if attempt < max_retries - 1:
                wait_time = 2 ** attempt  # Exponential backoff
                print(f"Retry {attempt + 1}/{max_retries} after {wait_time}s...")
                time.sleep(wait_time)
            else:
                raise

# Batch processing (child_texts from Example 2)
batch_size = 100
all_vectors = []

for i in range(0, len(child_texts), batch_size):
    batch = child_texts[i:i + batch_size]
    vectors = embed_with_retry(batch)
    all_vectors.extend(vectors)
    print(f"Processed {i + len(batch)}/{len(child_texts)}")

📖 Complete Example

See testing/code/demo/complete_example.py for a full end-to-end workflow:

python -m testing.code.demo.complete_example

🏗️ System Architecture

graph TD
    A[Documents] -->|Load| B[DocumentLoader]
    B -->|Chunk| C[HierarchicalChunker]
    C -->|Parent/Child Chunks| D[Embedding Model]
    D -->|Vectors| E[QdrantVectorStore]
    F[User Query] -->|Embed| D
    D -->|Query Vector| E
    E -->|Search| G[Retrieved Contexts]
    G -->|Context| H[LLM Client]
    F -->|Query| H
    H -->|Answer| I[User]

📊 Key Components

1. Document Loading

  • PDFLoader: handles PDFs via PyPDF or PyMuPDF
  • TXTLoader: supports multiple text encodings
  • DOCXLoader: handles Word documents
  • MD5 Caching: automatically detects duplicate documents
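The duplicate detection described above can be sketched roughly as follows; `md5_of_file` and `load_cached` are hypothetical helpers for illustration, not the actual `DocumentLoader` internals:

```python
import hashlib

# In-memory cache keyed by the MD5 of the file's bytes (simplified sketch;
# the real loader may cache richer DocumentNode objects).
_document_cache = {}

def md5_of_file(path):
    """Return the MD5 hex digest of a file's bytes, read in blocks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(8192), b""):
            h.update(block)
    return h.hexdigest()

def load_cached(path, load_fn):
    """Load a document, reusing the cached copy if the bytes are unchanged."""
    key = md5_of_file(path)
    if key in _document_cache:
        return _document_cache[key]  # duplicate detected: skip re-parsing
    document = load_fn(path)
    _document_cache[key] = document
    return document
```

Two files with identical contents hash to the same key, so the second load is served from memory.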

2. Chunking

  • HierarchicalChunker: builds parent-child chunks
  • Configurable: adjustable chunk size and overlap
  • Context Preservation: overlap keeps context across chunk boundaries
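A minimal sketch of parent-child chunking with overlap, assuming simple character-based windows (`split_with_overlap` and `build_hierarchical_chunks` are illustrative names, not the real `HierarchicalChunker` API):

```python
def split_with_overlap(text, size, overlap):
    """Split text into windows of `size` characters overlapping by `overlap`."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def build_hierarchical_chunks(text, parent_size=2000, child_size=400, overlap=50):
    """Return (parents, children); each child records its parent's index."""
    parents, children = [], []
    for p_idx, parent_text in enumerate(split_with_overlap(text, parent_size, overlap)):
        parents.append({"index": p_idx, "text": parent_text})
        # Children are carved out of their parent, so retrieval can return the
        # small child for precision and look up the large parent for context.
        for child_text in split_with_overlap(parent_text, child_size, overlap):
            children.append({"parent_index": p_idx, "text": child_text})
    return parents, children
```

The overlap means each chunk repeats the tail of the previous one, so a sentence cut at a boundary still appears whole in one of the two chunks.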

3. Embedding

  • GeminiEmbeddingModel: uses Gemini embedding-001
  • 768 dimensions: optimized for Vietnamese
  • Batch processing: efficient bulk embedding

4. Vector Storage

  • QdrantVectorStore: integrates with Qdrant Cloud or a local instance
  • COSINE similarity: measures semantic similarity
  • Metadata storage: keeps additional payload information alongside vectors
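Qdrant computes the similarity server-side; for intuition, cosine similarity between a query vector and a stored vector reduces to:

```python
import math

def cosine_similarity(a, b):
    """dot(a, b) / (||a|| * ||b||); 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Because it ignores magnitude, two embeddings of texts with similar meaning score near 1.0 regardless of text length.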

5. Generation

  • GeminiLLMClient: multi-model support
  • PromptBuilder: template-based prompts
  • Context-aware: generates answers from the retrieved context
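A template-based prompt builder in this spirit might look like the sketch below; `build_prompt` and the template text are assumptions for illustration, not the actual `PromptBuilder` API:

```python
# Hypothetical default template: the retrieved context is injected ahead of
# the user's question so the model answers grounded in that context.
DEFAULT_TEMPLATE = (
    "Answer the question based only on the context below.\n\n"
    "Context:\n{context}\n\n"
    "Question: {query}\n"
    "Answer:"
)

def build_prompt(query, context, template=DEFAULT_TEMPLATE):
    """Fill a prompt template with the user query and retrieved context."""
    return template.format(query=query, context=context)
```

Swapping the template string is enough to change the instruction language, e.g. to a Vietnamese system instruction for Vietnamese documents.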

🧪 Testing

Run Basic Tests

# Test document loading
python -m testing.code.demo.example_usage

# Test complete workflow
python -m testing.code.demo.complete_example

Run Unit Tests (if available)

pytest test/

📚 Documentation

🔧 Configuration

Environment Variables

Variable                 Description           Default
GEMINI_API_KEY           Gemini API key        Required
QDRANT_API_KEY           Qdrant API key        Required
QDRANT_URL               Qdrant server URL     Required
QDRANT_COLLECTION_NAME   Collection name       rag_documents
EMBEDDING_DIM            Embedding dimension   768
QDRANT_VECTOR_DIM        Vector dimension      768
VECTOR_TOP_K             Top-K results         5
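One way to read these variables with the documented defaults, as a sketch (the `load_config` helper is hypothetical; the package may wire configuration differently):

```python
import os

def load_config(env=os.environ):
    """Read Vi-RAG settings from environment variables, applying the documented defaults."""
    required = ["GEMINI_API_KEY", "QDRANT_API_KEY", "QDRANT_URL"]
    missing = [name for name in required if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {missing}")
    return {
        "gemini_api_key": env["GEMINI_API_KEY"],
        "qdrant_api_key": env["QDRANT_API_KEY"],
        "qdrant_url": env["QDRANT_URL"],
        # Optional settings fall back to the defaults from the table above.
        "collection_name": env.get("QDRANT_COLLECTION_NAME", "rag_documents"),
        "embedding_dim": int(env.get("EMBEDDING_DIM", "768")),
        "vector_dim": int(env.get("QDRANT_VECTOR_DIM", "768")),
        "top_k": int(env.get("VECTOR_TOP_K", "5")),
    }
```

Failing fast on the three required keys surfaces configuration mistakes before any API call is made.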

Chunking Parameters

DocumentLoader(
    file_path="document.pdf",
    auto_chunk=True,
    parent_size=2000,    # Parent chunk size
    child_size=400,      # Child chunk size
    overlap=50           # Overlap between chunks
)

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📝 License

This project is licensed under the MIT License - see the LICENSE file for details.

📧 Contact

🙏 Acknowledgments

  • Google Gemini API for embeddings and generation
  • Qdrant for vector storage
  • Contributors and testers

Made with ❤️ for the Vietnamese NLP community
