A Python package for synthesizing and working with document data.

Docs2Synth

Documentation · Video Tutorial · License: MIT · Python 3.11+

A Synthetic Data Tuned Retriever Framework for Visually Rich Documents Understanding

(Figure: Docs2Synth framework overview)

Workflow

Documents → Preprocess → QA Generation → Verification →
Human Annotation → Retriever Training → RAG Deployment

🚀 Quick Start: Automated Pipeline

Run the complete end-to-end pipeline with a single command:

docs2synth run

This automatically chains: preprocessing → QA generation → verification → retriever training → validation → RAG deployment, skipping the manual annotation UI.
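Conceptually, `run` executes the same stages as the manual workflow below, minus the annotation step. A minimal Python sketch of that chaining (stage names and ordering come from this README; the callables are placeholders, not the package's real implementation):

```python
# Hedged sketch of how `docs2synth run` chains the pipeline stages.
# Stage names and ordering are from this README; the stage callables
# are placeholders, not the package's real implementation.

def make_stage(name):
    """Return a placeholder stage that just records its own name."""
    return lambda log: log + [name]

PIPELINE = [
    ("preprocess", make_stage("preprocess")),
    ("qa-generation", make_stage("qa-generation")),
    ("verification", make_stage("verification")),
    ("annotate", make_stage("annotate")),  # the manual UI step
    ("retriever-training", make_stage("retriever-training")),
    ("validation", make_stage("validation")),
    ("rag-deployment", make_stage("rag-deployment")),
]

def run(skip=("annotate",)):
    """Execute every stage in order, skipping the manual annotation UI."""
    log = []
    for name, stage in PIPELINE:
        if name not in skip:
            log = stage(log)
    return log
```

Calling `run()` yields the six automated stage names in order, with `annotate` filtered out, mirroring what the one-shot command does.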

Manual Step-by-Step Workflow

For more control, run each step individually:

# 1. Preprocess documents
docs2synth preprocess data/raw/my_documents/

# 2. Generate QA pairs
docs2synth qa batch

# 3. Verify quality
docs2synth verify batch

# 4. Annotate (opens UI)
docs2synth annotate

# 5. Train retriever
docs2synth retriever preprocess
docs2synth retriever train --mode standard --lr 1e-5 --epochs 10

# 6. Deploy RAG
docs2synth rag ingest
docs2synth rag app
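The steps above can be wrapped in one script that aborts on the first failure. A sketch (the `DOCS2SYNTH` override is my addition so the script can be dry-run with a stub binary; it is not a documented feature):

```shell
#!/usr/bin/env bash
# Sketch: run the documented docs2synth steps in order, stopping on the
# first failure. DOCS2SYNTH lets you substitute a stub binary for dry runs.
set -euo pipefail

run_pipeline() {
  local d2s="${DOCS2SYNTH:-docs2synth}"
  "$d2s" preprocess data/raw/my_documents/
  "$d2s" qa batch
  "$d2s" verify batch
  "$d2s" annotate   # opens the review UI; remove for unattended runs
  "$d2s" retriever preprocess
  "$d2s" retriever train --mode standard --lr 1e-5 --epochs 10
  "$d2s" rag ingest
  "$d2s" rag app
}
```

This only defines the function; call `run_pipeline` from an environment where docs2synth is installed.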

Complete Workflow Guide →


Installation

PyPI Installation (Recommended)

CPU Version (includes all features + MCP server):

pip install docs2synth[cpu]

GPU Version (includes all features + MCP server):

# Standard GPU installation (no vLLM)
pip install docs2synth[gpu]

# With vLLM for local LLM inference (requires CUDA GPU)
# 1. Install PyTorch with CUDA first:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128

# 2. Install docs2synth with vLLM:
pip install docs2synth[gpu,vllm]

# 3. Uninstall paddlex to avoid conflicts with vLLM:
pip uninstall -y paddlex

Note: PaddleX conflicts with vLLM. If you need vLLM support, you must uninstall paddlex after installation.

Minimal Install (CLI only, no ML/MCP features):

pip install docs2synth

Development Setup

Use the setup script (installs uv + dependencies automatically):

# Clone
git clone https://github.com/AI4WA/Docs2Synth.git
cd Docs2Synth

# Run setup script
./setup.sh         # Unix/macOS/WSL
# setup.bat        # Windows

The script:

  • Installs uv (fast package manager)
  • Creates virtual environment
  • Installs dependencies (CPU or GPU)
  • Sets up config

Manual development setup:

# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh  # Unix/macOS
# powershell -c "irm https://astral.sh/uv/install.ps1 | iex"  # Windows

# Clone and setup
git clone https://github.com/AI4WA/Docs2Synth.git
cd Docs2Synth
uv venv
source .venv/bin/activate  # .venv\Scripts\activate on Windows

# Install for development
uv pip install -e ".[cpu,dev]"  # or [gpu,dev] for GPU

# Setup config
cp config.example.yml config.yml
# Edit config.yml and add your API keys

Features

  • Document Processing: Extract text/layout with Docling, PaddleOCR, PDFPlumber
  • QA Generation: Automatic question-answer pair generation with LLMs
  • Verification: Built-in meaningful and correctness verifiers
  • Human Annotation: Streamlit UI for manual review
  • Retriever Training: Train LayoutLMv3-based retrievers
  • RAG Deployment: Deploy with naive or iterative strategies
  • MCP Integration: Expose as Model Context Protocol server

Configuration

Create config.yml from config.example.yml:

# API keys (config.yml is in .gitignore)
agent:
  keys:
    openai_api_key: "sk-..."
    anthropic_api_key: "sk-ant-..."

# Document processing
preprocess:
  processor: docling
  input_dir: ./data/raw/
  output_dir: ./data/processed/

# QA generation
qa:
  strategies:
    - strategy: semantic
      provider: openai
      model: gpt-4o-mini

# Retriever training
retriever:
  learning_rate: 1e-5
  epochs: 10

# RAG
rag:
  embedding:
    model: sentence-transformers/all-MiniLM-L6-v2
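Once parsed (for example with PyYAML's `yaml.safe_load`), the file yields a nested mapping. A sketch of reading it defensively, using a stdlib-only dict that mirrors the example above (the `get_setting` helper is illustrative, not part of the docs2synth API):

```python
# Sketch: the config.yml above as the nested dict a YAML loader would
# produce, plus a small helper for dotted-path lookups with defaults.
# `get_setting` is illustrative, not part of the docs2synth API.

config = {
    "agent": {"keys": {"openai_api_key": "sk-...",
                       "anthropic_api_key": "sk-ant-..."}},
    "preprocess": {"processor": "docling",
                   "input_dir": "./data/raw/",
                   "output_dir": "./data/processed/"},
    "qa": {"strategies": [{"strategy": "semantic",
                           "provider": "openai",
                           "model": "gpt-4o-mini"}]},
    "retriever": {"learning_rate": 1e-5, "epochs": 10},
    "rag": {"embedding": {"model": "sentence-transformers/all-MiniLM-L6-v2"}},
}

def get_setting(cfg, dotted_path, default=None):
    """Walk a nested mapping by a dotted key path, e.g. 'rag.embedding.model'."""
    node = cfg
    for key in dotted_path.split("."):
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node
```

For example, `get_setting(config, "preprocess.processor")` returns `"docling"`, and a missing path falls back to the supplied default instead of raising `KeyError`.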

Docker

# CPU
./scripts/build-docker.sh cpu

# GPU
./scripts/build-docker.sh gpu

See Docker Builds


Documentation

Full documentation: https://ai4wa.github.io/Docs2Synth/


Contributing

We welcome contributions! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Run tests: pytest tests/ -v
  5. Run code quality checks: ./scripts/check.sh
  6. Submit a pull request

See Dependency Management for dev setup details.


License

MIT License - see LICENSE file for details.


Citation

If you use Docs2Synth in your research, please cite:

@software{docs2synth2025,
  title = {Docs2Synth: A Synthetic Data Tuned Retriever Framework for Visually Rich Documents Understanding},
  author = {AI4WA Team},
  year = {2025},
  url = {https://github.com/AI4WA/Docs2Synth}
}
