Book Data Maker
A powerful CLI tool for extracting text from documents using DeepSeek OCR and generating high-quality datasets with LLM assistance.
Table of Contents
Getting Started
User Guide
Advanced
Reference
Features
- Multi-Format Support: PDF, EPUB, and images
- Self-Hosted OCR: Local transformers for DeepSeek-OCR (no API costs)
- Parallel Generation: Multiple LLM threads explore documents simultaneously
- Smart Distribution: Control thread starting positions
- SQLite Storage: Real-time dataset storage with flexible export
- Multiple Formats: JSONL, Parquet, CSV, JSON
- Flexible Modes: API or self-hosted for both stages
- Progress Tracking: Real-time progress bars
- Resume Support: Continue interrupted sessions
Quick Start
Prerequisites
# Set API keys (choose one based on your mode)
export OPENAI_API_KEY=your_openai_key # For API mode
export DEEPSEEK_API_KEY=your_deepseek_key # For API OCR mode
Option 1: API Mode (Fastest Setup)
# 1. Install
pip install -r requirements.txt && pip install -e .
# 2. Extract → Generate → Export
bookdatamaker extract book.pdf -o ./extracted
bookdatamaker generate ./extracted/combined.txt -d dataset.db --distribution "10,10,20,30,20,10"
bookdatamaker export-dataset dataset.db -o output.parquet
Option 2: Self-Hosted Mode (Free, Private)
# 1. Install with local dependencies
pip install -r requirements.txt && pip install -e ".[local]"
# 2. Extract with local OCR
bookdatamaker extract book.pdf --mode local --batch-size 8 -o ./extracted
# 3. Generate with vLLM
bookdatamaker generate ./extracted/combined.txt \
--mode vllm \
--vllm-model-path meta-llama/Llama-3-8B-Instruct \
--distribution "25,25,25,25" \
-d dataset.db
# 4. Export
bookdatamaker export-dataset dataset.db -o output.parquet
Installation
Basic Installation
git clone https://github.com/yourusername/bookdatamaker.git
cd bookdatamaker
pip install -r requirements.txt
pip install -e .
Optional: Local Inference Support
# For self-hosted OCR and LLM generation
pip install -e ".[local]" # Installs transformers==4.46.3, torch, flash-attn, etc.
Note: The project requires transformers==4.46.3 for optimal compatibility with DeepSeek-OCR. A warning will be displayed if a different version is detected.
System Requirements
For API Mode:
- Python 3.10+
- API keys (OpenAI, DeepSeek, etc.)
For Local Mode:
- Python 3.10-3.12 (3.13 not supported due to vLLM compatibility)
- NVIDIA GPU with CUDA support (or CPU, though slower)
- 16GB+ VRAM recommended for GPU
- transformers==4.46.3
- Linux or WSL2 (recommended)
Extract Text (Stage 1)
Extract text from documents using DeepSeek OCR.
Supported Formats
- PDF: Text extraction or OCR from rendered pages
- EPUB: E-book text extraction
- Images: JPG, PNG, BMP, TIFF, WebP
API Mode
# Basic usage
bookdatamaker extract book.pdf -o ./extracted
# Custom API endpoint
bookdatamaker extract book.pdf \
--deepseek-api-url https://custom-api.example.com/v1 \
-o ./extracted
Local Mode (Transformers)
Use local transformers model for OCR (DeepSeek-OCR, no API calls):
# Basic usage - uses transformers AutoModel with flash_attention_2
bookdatamaker extract book.pdf --mode local -o ./extracted
# With custom batch size (adjust based on GPU memory)
bookdatamaker extract book.pdf --mode local --batch-size 12 -o ./extracted
# Use CPU instead of GPU
bookdatamaker extract book.pdf --mode local --device cpu -o ./extracted
# Use specific GPU
bookdatamaker extract book.pdf --mode local --device cuda:1 -o ./extracted
# Process directory of images
bookdatamaker extract ./images/ --mode local -o ./extracted
Batch Size Guidelines:
- 12-16: GPUs with 24GB+ VRAM
- 8-12: GPUs with 16GB+ VRAM (default: 8)
- 4-8: GPUs with 8-12GB VRAM
- 1-4: GPUs with <8GB VRAM
Device Options:
- cuda (default): Use the default CUDA GPU
- cuda:0, cuda:1, etc.: Use a specific GPU
- cpu: Use the CPU (slower, no GPU required)
Output Structure
./extracted/
├── page_001.txt
├── page_002.txt
├── ...
└── combined.txt    # All pages with [PAGE_XXX] markers
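If you need to work with individual pages from the combined file, splitting on the markers is straightforward. This sketch assumes markers look like `[PAGE_001]` on their own line; verify against your own `combined.txt`, since the exact marker format is not documented here.

```python
import re

# Assumed marker format: "[PAGE_001]" alone on a line.
MARKER = re.compile(r"^\[PAGE_(\d+)\]\s*$", re.MULTILINE)

def split_pages(combined: str) -> dict[int, str]:
    """Map page number -> page text from a combined.txt string."""
    parts = MARKER.split(combined)
    # parts = [preamble, num1, text1, num2, text2, ...]
    pairs = iter(parts[1:])
    return {int(num): text.strip() for num, text in zip(pairs, pairs)}
```

For example, `split_pages(open("combined.txt").read())[1]` would return the text of the first page.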
Generate Dataset (Stage 2)
Generate Q&A datasets using parallel LLM threads.
Basic Usage
# 6 threads (from distribution), 20 Q&A pairs per thread
bookdatamaker generate combined.txt \
-d dataset.db \
--distribution "10,10,20,30,20,10" \
--datasets-per-thread 20
Key Concept: Thread count is determined by the number of comma-separated values in --distribution.
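A minimal sketch of how such a distribution string maps to a thread count; whether the CLI itself enforces the sum-to-100 rule is an assumption.

```python
def parse_distribution(spec: str) -> list[int]:
    """Parse a --distribution string; the number of values
    is the number of threads. Values are percentages and
    are assumed to sum to 100."""
    weights = [int(v.strip()) for v in spec.split(",")]
    if sum(weights) != 100:
        raise ValueError(f"distribution sums to {sum(weights)}, expected 100")
    return weights
```

So `"10,10,20,30,20,10"` yields six threads, and `"25,25,25,25"` yields four.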
API Mode Examples
# OpenAI/Azure
bookdatamaker generate combined.txt \
-d dataset.db \
--openai-api-url https://api.openai.com/v1 \
--model gpt-4 \
--distribution "10,10,20,30,20,10"
# Custom API endpoint
bookdatamaker generate combined.txt \
--openai-api-url http://localhost:8000/v1 \
--model your-model-name \
--distribution "25,25,25,25"
vLLM Direct Mode (Self-Hosted)
Use vLLM directly without API server:
# Single GPU
bookdatamaker generate combined.txt \
--mode vllm \
--vllm-model-path meta-llama/Llama-3-8B-Instruct \
--distribution "25,25,25,25" \
-d dataset.db
# Multi-GPU (4 GPUs, 6 threads)
bookdatamaker generate combined.txt \
--mode vllm \
--vllm-model-path meta-llama/Llama-3-70B-Instruct \
--tensor-parallel-size 4 \
--distribution "10,10,20,30,20,10" \
-d dataset.db
Benefits of vLLM Mode:
- No API costs
- Full privacy (local processing)
- Optimized inference
- Thread-safe parallel processing
- Automatic batching
Custom Prompts
Add specific instructions to guide LLM behavior:
# Language specification
bookdatamaker generate combined.txt \
--custom-prompt "Generate all Q&A in Chinese with simplified characters"
# Format specification
bookdatamaker generate combined.txt \
--custom-prompt "Questions should be multiple-choice with 4 options"
# Multiple requirements
bookdatamaker generate combined.txt \
--custom-prompt "Requirements:
1. Generate questions in English
2. Focus on practical applications
3. Include code examples
4. Answer length: 50-150 words
5. Difficulty: intermediate"
Export Dataset
Export from SQLite database to your preferred format:
# Parquet (recommended for data analysis)
bookdatamaker export-dataset dataset.db -o output.parquet
# JSON Lines (easy to stream)
bookdatamaker export-dataset dataset.db -o output.jsonl -f jsonl
# CSV (Excel-friendly)
bookdatamaker export-dataset dataset.db -o output.csv -f csv
# JSON with metadata
bookdatamaker export-dataset dataset.db -o output.json -f json --include-metadata
Format Comparison:
| Format | Best For | Size | Load Speed |
|---|---|---|---|
| Parquet | Data analysis, ML | Smallest | Fastest |
| JSONL | Streaming, processing | Medium | Fast |
| CSV | Excel, spreadsheets | Largest | Medium |
| JSON | API responses | Large | Slow |
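Whichever format you export, loading it downstream is simple. A sketch for JSONL using only the standard library; the field names in the records (e.g. question/answer keys) are not specified here, so inspect one line of your export to confirm them.

```python
import json
from pathlib import Path

def read_jsonl(path: str) -> list[dict]:
    """Read a JSONL export back into a list of record dicts,
    skipping blank lines."""
    records = []
    with Path(path).open(encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                records.append(json.loads(line))
    return records
```

For Parquet or CSV, `pandas.read_parquet` / `pandas.read_csv` serve the same role.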
Position Distribution
Control where threads start in the document using distribution percentages.
How It Works
Document: 500 paragraphs
Distribution: "10,10,20,30,20,10" (6 threads)
Thread 0: Start at 0% → Paragraph 1
Thread 1: Start at 10% → Paragraph 50
Thread 2: Start at 20% → Paragraph 100
Thread 3: Start at 40% → Paragraph 200
Thread 4: Start at 70% → Paragraph 350
Thread 5: Start at 90% → Paragraph 450
Each thread starts at the cumulative sum of the distribution values before it.
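The cumulative rule above can be sketched in a few lines. This returns 0-based paragraph offsets; the tool's exact rounding behavior is an assumption.

```python
from itertools import accumulate

def start_paragraphs(distribution: list[int], total: int) -> list[int]:
    """Each thread starts at the cumulative percentage of the
    distribution values before it (thread 0 always starts at 0%)."""
    starts_pct = [0] + list(accumulate(distribution[:-1]))
    return [pct * total // 100 for pct in starts_pct]
```

For a 500-paragraph document and `"10,10,20,30,20,10"`, this gives offsets `[0, 50, 100, 200, 350, 450]`, matching the example above.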
Distribution Strategies
# Even distribution (4 threads)
--distribution "25,25,25,25"
# Start at: 0%, 25%, 50%, 75%
# Front-heavy (4 threads) - focus on beginning
--distribution "40,30,20,10"
# Start at: 0%, 40%, 70%, 90%
# Middle-heavy (5 threads) - focus on middle
--distribution "10,20,40,20,10"
# Start at: 0%, 10%, 30%, 70%, 90%
# Dense sampling (10 threads) - fine-grained coverage
--distribution "10,10,10,10,10,10,10,10,10,10"
Thread Count Guidelines
- Small documents (<100 paragraphs): 2-4 threads
- Medium documents (100-500 paragraphs): 4-8 threads
- Large documents (>500 paragraphs): 8-16 threads
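Once you have picked a thread count from the guidelines above, a hypothetical helper can build an even distribution string for it (spreading any remainder so the values still sum to 100):

```python
def even_distribution(threads: int) -> str:
    """Build an even --distribution string for a given thread count,
    distributing the remainder of 100 // threads across the first values."""
    base, rem = divmod(100, threads)
    weights = [base + (1 if i < rem else 0) for i in range(threads)]
    return ",".join(map(str, weights))
```

`even_distribution(4)` returns `"25,25,25,25"`; for counts that do not divide 100 evenly, such as 6, the first values absorb the remainder.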
Performance Tuning
Extraction (Stage 1)
Batch Size Optimization (Transformers):
# Maximum speed (24GB+ VRAM) - uses transformers with DeepSeek-OCR
bookdatamaker extract book.pdf --mode local --batch-size 16
# Balanced (16GB VRAM) - transformers default batch size
bookdatamaker extract book.pdf --mode local --batch-size 8
# Conservative (<8GB VRAM) - smaller batches for limited VRAM
bookdatamaker extract book.pdf --mode local --batch-size 4
# Use CPU if no GPU available (slower)
bookdatamaker extract book.pdf --mode local --device cpu --batch-size 2
Multi-GPU Setup:
# Use specific GPU in multi-GPU system
bookdatamaker extract book.pdf --mode local --device cuda:0
bookdatamaker extract book.pdf --mode local --device cuda:1
# Run multiple processes on different GPUs simultaneously
bookdatamaker extract book1.pdf --mode local --device cuda:0 &
bookdatamaker extract book2.pdf --mode local --device cuda:1 &
Generation (Stage 2)
Optimal Configurations:
# Maximum throughput (multi-GPU, 12 threads)
bookdatamaker generate text.txt --mode vllm \
--vllm-model-path meta-llama/Llama-3-70B \
--tensor-parallel-size 4 \
--distribution "5,5,10,10,15,15,15,10,5,5,3,2" \
--datasets-per-thread 50
# Balanced (single GPU, 6 threads)
bookdatamaker generate text.txt --mode vllm \
--vllm-model-path meta-llama/Llama-3-8B \
--distribution "10,10,20,30,20,10" \
--datasets-per-thread 20
# Conservative (2 threads)
bookdatamaker generate text.txt --mode vllm \
--vllm-model-path meta-llama/Llama-3-8B \
--distribution "50,50" \
--datasets-per-thread 10
Interactive Chat
Chat with an LLM that can access your document through MCP tools. Perfect for exploring documents interactively or testing Q&A generation.
Start Chat Session
# Basic chat with GPT-4
bookdatamaker chat combined.txt
# With vLLM server
bookdatamaker chat combined.txt \
--openai-api-url http://localhost:8000/v1 \
--model Qwen/Qwen3-4B-Thinking-2507
# With custom database
bookdatamaker chat combined.txt --db my_dataset.db
Example Interaction
Document: combined.txt
Paragraphs: 578
Model: gpt-4
You: What's in paragraph 100?
#### export-dataset Parameters
- `-f, --format`: Export format: `jsonl`, `parquet`, `csv`, or `json` (default: `parquet`)
- `--include-metadata`: Include timestamps in the export
### Parameter Tables
#### extract Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `input_path` | required | - | Input file or directory |
| `--output-dir` | optional | `extracted_text` | Output directory |
| `--mode` | optional | `api` | OCR mode: `api` or `local` |
| `--batch-size` | optional | `8` | Batch size for local mode |
| `--device` | optional | `cuda` | Torch device for local mode: `cuda`, `cuda:0`, `cpu` |
| `--deepseek-api-key` | optional | env var | DeepSeek API key |
| `--deepseek-api-url` | optional | `https://api.deepseek.com/v1` | DeepSeek API URL |
| `--local-model-path` | optional | `deepseek-ai/DeepSeek-OCR` | Local model path |
#### generate Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `text_file` | required | - | Combined text file |
| `--db` | optional | `dataset.db` | Database file path |
| `--mode` | optional | `api` | LLM mode: `api` or `vllm` |
| `--distribution` | optional | `10,10,20,30,20,10` | Position distribution (determines threads) |
| `--datasets-per-thread` | optional | `10` | Target Q&A pairs per thread |
| `--openai-api-key` | optional | env var | OpenAI API key |
| `--openai-api-url` | optional | `https://api.openai.com/v1` | API URL |
| `--model` | optional | `gpt-4` | Model name |
| `--vllm-model-path` | optional | - | vLLM model path |
| `--tensor-parallel-size` | optional | `1` | Number of GPUs |
| `--custom-prompt` | optional | - | Additional instructions |
---
## Troubleshooting
### Common Issues
**Problem: Threads not completing**
- Reduce `--datasets-per-thread`
- Check API rate limits
- Verify API keys
- Ensure document has enough content
**Problem: Out of memory (OCR)**
- Reduce `--batch-size`
- Use `--device cpu` to run on CPU instead of GPU
- Use API mode instead of local
- Use specific GPU with `--device cuda:0` if you have multiple GPUs
**Problem: Out of memory (Generation)**
- Reduce thread count (fewer distribution values)
- Use smaller model
- Reduce `--tensor-parallel-size`
**Problem: Low quality Q&A pairs**
- Adjust distribution to focus on content-rich sections
- Use higher-quality model (e.g., GPT-4)
- Add specific `--custom-prompt` instructions
- Check OCR quality
**Problem: SQLite errors**
- Ensure database path is writable
- Don't modify database during generation
- Delete and regenerate if corrupted
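To test whether the database is corrupted before deleting it, SQLite's built-in integrity check is enough; a sketch using the standard library:

```python
import sqlite3

def check_database(path: str) -> bool:
    """Run SQLite's built-in integrity check; True means the file
    is healthy, False means it should be deleted and regenerated."""
    with sqlite3.connect(path) as conn:
        (result,) = conn.execute("PRAGMA integrity_check").fetchone()
    return result == "ok"
```

`PRAGMA integrity_check` returns the single row `ok` for a healthy database and a list of problems otherwise.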
### Debug Mode
Set environment variable for verbose logging:
```bash
export LOG_LEVEL=DEBUG
bookdatamaker generate combined.txt -d dataset.db
```
Development
Project Structure
bookdatamaker/
├── src/bookdatamaker/
│   ├── cli.py                    # CLI interface
│   ├── ocr/
│   │   ├── extractor.py          # OCR extraction
│   │   └── document_parser.py    # Document parsing
│   ├── mcp/
│   │   └── server.py             # MCP server
│   ├── llm/
│   │   └── parallel_generator.py # Parallel generation
│   ├── dataset/
│   │   ├── builder.py            # Dataset building
│   │   └── dataset_manager.py    # SQLite management
│   └── utils/
│       ├── page_manager.py       # Page navigation
│       └── status.py             # Progress indicators
└── tests/                        # Test files
Development Setup
# Clone repository
git clone https://github.com/yourusername/bookdatamaker.git
cd bookdatamaker
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
pytest tests/
# Code formatting
black src/
ruff check src/
# Type checking
mypy src/
Contributing
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new features
- Ensure all tests pass
- Submit a pull request
Testing
# Run all tests
pytest
# Run specific test file
pytest tests/test_ocr.py
# Run with coverage
pytest --cov=bookdatamaker tests/
License
MIT License - see LICENSE file for details.