Paradox
Structured text extraction framework for digital and scanned PDFs with inline formatting preservation.
Paradox is a dual-pipeline framework that extracts semantically typed, hierarchically structured content from PDF documents. It detects whether each page is digital or scanned and routes it to the optimal extraction strategy, producing a single unified JSON output regardless of source quality.
Key Features
- Dual pipeline: heuristic (digital PDFs) + vision (scanned/photographed), routed per page
- 68+ element types: headings, paragraphs, tables, lists, amendments, signatures, metadata, and more
- 6 inline marks: **bold**, *italic*, ++underline++, ~~strikethrough~~, ^superscript^, `monospace`
- Table extraction with merged cell detection (colspan/rowspan) via vector analysis
- Hierarchical JSON output with traceable page refs `(pX,lY):(pX,lY)`
- Ensemble table detection: density-based clustering + grid detection model + image processing, scored and arbitrated
- Configurable via 30+ parameters (dataclass + environment variable overrides)
Install
pip install -r requirements.txt
Usage
python scripts/convert.py document.pdf
That's it. Output goes to output/document.json.
Options
| Flag | Description |
|---|---|
| `-o PATH` | Custom output path (file or directory) |
| `-w N` | Parallel workers (default: 20) |
| `--pages 1-5` | Extract specific pages only |
| `--force-vision` | Force vision pipeline on all pages |
| `--force-heuristic` | Force heuristic pipeline on all pages |
| `--compare PAGE` | Generate visual QA PNGs for tables on the given page |
| `--no-images` | Skip embedded image extraction |
| `--compact` | Compact JSON (no indentation) |
python scripts/convert.py contract.pdf -o result.json # Custom output
python scripts/convert.py docs/ -o extracted/ -w 8 # Batch folder, 8 workers
python scripts/convert.py scan.pdf --force-vision # Force OCR pipeline
python scripts/convert.py contract.pdf --compare all # Visual QA for all tables
How It Works
The key insight: digital PDFs contain rich font metadata (bold flags, font names, vector drawings) that enables near-perfect extraction. Scanned PDFs have none of this; they are just images. Rather than forcing one approach on both, Paradox routes each page independently to the optimal pipeline, then produces an identical output format. A single document can have digital contract pages interleaved with scanned signed exhibits, and every page gets the best available extraction.
Digital Pipeline — Real Example
Extracted output (abbreviated):
{
"elements": [
{"type": "TITLE", "marks": ["BOLD"], "text": "**Annual Report — Q4 2025**"},
{"type": "PARAGRAPH", "text": "Revenue increased by **12.3%**...driven by *international expansion*..."},
{"type": "H1", "marks": ["BOLD"], "text": "**1. Financial Summary**",
"children": [
{"type": "TABLE", "shape": [5, 4], "cells": [
{"p": [0,0], "t": "Category"}, {"p": [0,1], "t": "Q3 2025"}, ...
]}
]},
{"type": "H1", "marks": ["BOLD"], "text": "**2. Notes**",
"children": [
{"type": "PARAGRAPH", "text": "All figures reported in USD. See *Appendix A*."}
]}
]
}
Digital Path (Heuristic)
The heuristic pipeline extracts text spans with full font metadata. Each span carries flags for bold, italic, superscript, and monospace. Underline and strikethrough are detected geometrically by finding horizontal vector lines drawn across text baselines; this is necessary because most PDF producers draw these as separate line objects rather than setting a font flag.
Block classification uses a priority ladder with 10 levels, progressing from document metadata through titles and headings down to body content. Font size, weight, and position on the page all contribute to the classification decision.
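A minimal sketch of how such a priority ladder might look. The thresholds, signals, and type names below are invented for illustration; the real ladder in `font_classifier.py` has 10 levels and uses more signals than this:

```python
# Hypothetical priority-ladder classifier: rules are tried top-down and
# the first match wins. Thresholds and type names are invented; this is
# not the actual font_classifier.py implementation.
def classify_block(font_size: float, is_bold: bool, y_frac: float,
                   body_size: float = 10.0) -> str:
    if y_frac < 0.05:                     # top margin zone: running header
        return "PAGE_HEADER"
    if font_size >= body_size * 1.8:      # much larger than body text
        return "TITLE"
    if font_size >= body_size * 1.4:      # clearly larger than body text
        return "H1"
    if is_bold and font_size > body_size: # bold and slightly larger
        return "H2"
    return "PARAGRAPH"                    # default: body content

print(classify_block(18.0, True, 0.3))  # TITLE
```

Each rule only fires when every higher-priority rule has failed, which is what lets metadata and titles take precedence over generic body content.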
Speed: ~250 pages/second. No GPU required.
Vision Path
The vision pipeline renders each page to an image and processes it through three stages:
- DocLayout-YOLO detects page regions (title, text, table, figure, caption, etc.)
- RapidOCR (optimized runtime) extracts word-level text from each detected region
- TexTAR (Vision Transformer, ICDAR 2025) classifies inline marks per word: bold, italic, underline, and strikethrough
Strikethrough detection uses an image processing fallback because the model's accuracy on strikethrough alone is poor. Superscript is detected via a bounding-box height heuristic (word height below 60% of the line median). Monospace cannot be detected from images and is unavailable in the vision path.
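The superscript heuristic can be sketched in a few lines (illustrative only; the real pipeline works on OCR bounding boxes rather than a bare list of heights):

```python
from statistics import median

def tag_superscripts(word_heights: list[float], ratio: float = 0.6) -> list[bool]:
    """Flag words whose bounding-box height falls below `ratio` times the
    line's median word height, per the heuristic described above."""
    line_median = median(word_heights)
    return [h < ratio * line_median for h in word_heights]

print(tag_superscripts([12.0, 11.8, 6.5, 12.1]))  # [False, False, True, False]
```

Using the median rather than the mean keeps one unusually tall or short word from skewing the baseline for the whole line.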
Speed: ~12 pages/second (GPU), ~0.5 pages/second (CPU-only).
Table Extraction Strategy
Scanned table regions detected by YOLO are processed through two parallel structure-extraction strategies:
- Density-based clustering (`cluster_cells.py`): a purely geometric approach that clusters OCR bounding boxes into rows and columns. Warp-invariant and effective on borderless or loosely formatted tables.
- Grid detection model + image processing (`table_vision.py`): a semantic approach that detects row and column separators. Better for tables with clear borders.
A scoring function (`_score_struct()`) evaluates both results and picks the winner based on cell fill rate, a linear merge penalty (`merge_ratio * 1.5`), and word-count compatibility. Source priority breaks ties: vectorial (+0.3) > vision table (+0.1) > density clustering (+0.0).
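Following that description, the scoring can be sketched as below. Only the merge-penalty weight and the source bonuses come from the text; the function shape and the omission of word-count compatibility are simplifications:

```python
# Illustrative scoring sketch: fill rate minus a linear merge penalty,
# plus a fixed source-priority bonus for tie-breaking. Not the actual
# _score_struct() implementation.
SOURCE_BONUS = {"vectorial": 0.3, "vision": 0.1, "density": 0.0}

def score_struct(fill_rate: float, merge_ratio: float, source: str) -> float:
    return fill_rate - merge_ratio * 1.5 + SOURCE_BONUS.get(source, 0.0)
```

With equal geometry, a vectorial candidate wins over a vision candidate, which wins over density clustering, matching the stated priority.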
When both the vectorial pipeline (for digital content) and the vision pipeline detect the same table, a deduplication step (`_dedupe_tables()`) groups candidates by bounding-box IoU >= 0.5 and keeps only the highest-scoring result per group.
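A minimal sketch of IoU-based deduplication under those rules. The real `_dedupe_tables()` operates on richer table candidates; the `(bbox, score)` shape here is an assumption:

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def dedupe(candidates, threshold=0.5):
    """Keep the highest-scoring candidate per overlapping group.
    candidates: list of ((x0, y0, x1, y1), score)."""
    kept = []
    for bbox, score in sorted(candidates, key=lambda c: -c[1]):
        # Keep a candidate only if it overlaps no already-kept winner.
        if all(iou(bbox, k[0]) < threshold for k in kept):
            kept.append((bbox, score))
    return kept
```

Visiting candidates in descending score order guarantees that the survivor of each overlapping group is its best-scoring member.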
Output Format
Both pipelines produce identical JSON. The output is a hierarchical tree where headings contain their children (paragraphs, tables, lists), enabling semantic navigation of the document.
{
"source": "contract.pdf", // Source filename
"total_pages": 228, // Document stats
"total_elements": 544,
"type_summary": { // Element count by type
"PARAGRAPH": 200,
"H1": 50,
"TABLE": 12
},
// HIERARCHICAL TREE — headings nest their children by depth
// TITLE > SUBTITLE > H1 > H2 > H3 > H4
// Each heading "owns" everything until the next heading of equal or higher level
"elements": [
{
"type": "H1", // Element type (68+ types available)
"marks": ["BOLD"], // Detected formatting: BOLD, ITALIC, UNDERLINE,
// STRIKETHROUGH, SUPERSCRIPT, MONOSPACE
"text": "**ARTICLE 13. MINIMUM COMPENSATION**",
// Text with inline markers:
// **bold** *italic* ++underline++
// ~~strike~~ ^super^ `mono`
"ref": "(p91,l1):(p95,l12)", // Traceable location in source PDF
// (page 91, element 1) to (page 95, element 12)
// CHILDREN — everything under this H1 until the next H1
"children": [
{
"type": "TABLE",
"marks": [],
"shape": [14, 7], // Table dimensions: 14 rows × 7 columns
"cells": [
// Merged cells: "p": [row, col, rowspan, colspan]
{"p": [0, 0, 2, 2], "t": "HIGH BUDGET"}, // spans 2 rows × 2 cols
{"p": [0, 2, 1, 5], "t": "EFFECTIVE"}, // spans 1 row × 5 cols
// Normal cells: "p": [row, col]
{"p": [2, 1], "t": "Screenplay, including treatment"},
{"p": [2, 2], "t": "$126,089"}
]
},
{
// Strikethrough — deleted text preserved with ~~markers~~
"type": "PARAGRAPH",
"marks": ["STRIKETHROUGH"],
"text": "~~The term of this Agreement shall be for a period commencing July 1, 2017~~",
"ref": "(p91,l5):(p91,l5)"
}
]
}
]
}
Ref format
"ref": "(p1,l3):(p6,l2)"
         |  |    |  |
         |  |    |  +-- element 2 on that page
         |  |    +----- page 6
         |  +---------- element 3 on that page
         +------------- page 1
Both `p` (page) and `l` (element index) are 1-based.
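For downstream tooling, a ref string can be parsed with a small regular expression. This is a hypothetical helper, not part of the Paradox API:

```python
import re

# Matches the documented ref format "(pX,lY):(pX,lY)" with 1-based indices.
REF_RE = re.compile(r"\(p(\d+),l(\d+)\):\(p(\d+),l(\d+)\)")

def parse_ref(ref: str) -> tuple[tuple[int, int], tuple[int, int]]:
    """Return ((start_page, start_elem), (end_page, end_elem))."""
    m = REF_RE.fullmatch(ref)
    if m is None:
        raise ValueError(f"malformed ref: {ref!r}")
    p1, l1, p2, l2 = map(int, m.groups())
    return (p1, l1), (p2, l2)

start, end = parse_ref("(p91,l1):(p95,l12)")
print(start, end)  # (91, 1) (95, 12)
```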
Table cell format
- `"p": [r, c]` is a normal cell at row `r`, column `c`
- `"p": [r, c, rs, cs]` is a merged cell spanning `rs` rows and `cs` columns
- `"t"` is the cell text, with `\n` for internal line breaks
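Given `shape` and `cells` in this format, the sparse cell list can be expanded into a dense grid. This is a hypothetical helper, not part of the Paradox API; here merged cells simply repeat their text in every slot they cover:

```python
def cells_to_grid(shape, cells):
    """Expand Paradox table cells (with optional rowspan/colspan) into a
    dense rows x cols grid of text; unfilled slots stay None."""
    rows, cols = shape
    grid = [[None] * cols for _ in range(rows)]
    for cell in cells:
        p, text = cell["p"], cell["t"]
        r, c = p[0], p[1]
        # 4-element "p" carries rowspan/colspan; 2-element means 1x1.
        rs, cs = (p[2], p[3]) if len(p) == 4 else (1, 1)
        for dr in range(rs):
            for dc in range(cs):
                grid[r + dr][c + dc] = text
    return grid

grid = cells_to_grid([2, 2], [{"p": [0, 0, 1, 2], "t": "HEAD"},
                              {"p": [1, 0], "t": "a"}, {"p": [1, 1], "t": "b"}])
print(grid)  # [['HEAD', 'HEAD'], ['a', 'b']]
```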
Inline markers
| Mark | Syntax | Example |
|---|---|---|
| BOLD | `**text**` | `**ARTICLE 1**` |
| ITALIC | `*text*` | `*See exhibit*` |
| BOLD+ITALIC | `***text***` | `***IMPORTANT***` |
| STRIKETHROUGH | `~~text~~` | `~~Deleted~~` |
| UNDERLINE | `++text++` | `++Underlined++` |
| SUPERSCRIPT | `^text^` | `^1^` |
| MONOSPACE | `` `text` `` | `` `DocID-123` `` |
Configuration
All pipeline thresholds are centralized in `pdf_tagger/config.py` as a `PipelineConfig` dataclass. Override any value via environment variables with the `PDF_` prefix:
PDF_RENDER_DPI=300 python scripts/convert.py input.pdf
PDF_CV_BORDER_MISSING_THRESHOLD=0.25 python scripts/convert.py input.pdf
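The dataclass + environment-override pattern can be sketched as follows. Field names and defaults echo a few real parameters, but this loader is illustrative, not the actual `pdf_tagger/config.py` implementation:

```python
import os
from dataclasses import dataclass, fields

@dataclass
class PipelineConfig:
    scan_text_threshold: int = 50
    render_dpi: int = 200
    yolo_confidence: float = 0.2

    @classmethod
    def from_env(cls, prefix: str = "PDF_") -> "PipelineConfig":
        overrides = {}
        for f in fields(cls):
            # PDF_RENDER_DPI overrides render_dpi, and so on.
            raw = os.environ.get(prefix + f.name.upper())
            if raw is not None:
                overrides[f.name] = f.type(raw)  # cast via the annotation
        return cls(**overrides)

os.environ["PDF_RENDER_DPI"] = "300"
cfg = PipelineConfig.from_env()
print(cfg.render_dpi, cfg.scan_text_threshold)  # 300 50
```

Unset variables fall back to the dataclass defaults, so a single export changes one threshold without touching the rest.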
| Parameter | Default | Purpose |
|---|---|---|
| `PDF_SCAN_TEXT_THRESHOLD` | 50 | Characters below which a page is routed to the vision pipeline |
| `PDF_RENDER_DPI` | 200 | DPI for rendering pages to images (higher = better OCR, slower) |
| `PDF_CV_BORDER_MISSING_THRESHOLD` | 0.35 | Border coverage ratio below which a cell boundary is considered missing (triggers merge) |
| `PDF_YOLO_CONFIDENCE` | 0.2 | Minimum confidence for YOLO region detections |
| `PDF_OCR_MIN_CONFIDENCE` | 0.3 | Minimum confidence for OCR word results |
| `PDF_HDBSCAN_MIN_FILL_RATE` | 0.40 | Minimum cell fill rate to accept a density-clustering table structure |
| `PDF_DEDUP_IOU_THRESHOLD` | 0.5 | IoU threshold for suppressing duplicate table detections |
Performance
Benchmark results (April 2026):
| Suite | Shape Match | Cell Accuracy | Text Similarity |
|---|---|---|---|
| Digital via Vision | 100% | 88.9% | 100% |
| Real Scanned | 80% | 67.8% | 86.6% |
| Stress Tests | 100% | 82.0% | 100% |
Shape match measures whether the extracted grid dimensions are correct. Cell accuracy measures exact content match per cell. Text similarity is a fuzzy comparison of all extracted text against ground truth.
Architecture
paradox/
│
├── scripts/
│ └── convert.py CLI entry point — single PDF or batch folder
│
├── models/
│ └── model weights TexTAR Vision Transformer weights (64 MB, ICDAR 2025)
│
├── pdf_tagger/ Core extraction pipeline
│ │
│ ├── tagger_json.py Main orchestrator
│ │ Routes pages (digital/scanned), runs post-processing,
│ │ deduplicates tables, builds section tree, assigns refs
│ │
│ ├── config.py PipelineConfig dataclass — 30+ tunable thresholds
│ ├── scan_detector.py Per-page routing: < 50 chars text → vision path
│ ├── catalog.py 68+ element types (TITLE, H1, TABLE, PARAGRAPH, ...)
│ │
│ │ ── Digital Path ──
│ ├── font_classifier.py Font extraction → block classification
│ │ Priority ladder: 10 levels, metadata → titles → body
│ ├── tagger.py Geometric detection of strikethrough + underline
│ │ Finds real vector lines drawn across text baselines
│ │
│ │ ── Vision Path ──
│ ├── vision_layout.py Layout detection + OCR
│ │ Ensemble scoring: density clustering vs TATR for tables
│ ├── marks_vision.py Per-word mark classification
│ │ T1: BOLD/ITALIC · T2: UNDERLINE/STRIKETHROUGH
│ │ + geometric fallback + superscript heuristic
│ ├── table_vision.py Grid detection model + geometric grid detection
│ │ Merged cell detection via coverage analysis
│ ├── cluster_cells.py Density-based clustering for borderless tables
│ │ Row/column inference from OCR word positions
│ │
│ │ ── Shared ──
│ ├── camscanner.py Deskew + perspective correction for photos
│ ├── table_compare.py Visual QA: side-by-side table PNG comparison
│ └── mark_classifier/ Vision mark classifier architecture
│
├── pdf_grid/ Low-level table geometry
│ ├── line_tables.py ── Vector lines → grid → merged cells (digital PDFs)
│ ├── cv_tables.py ── Morphological analysis → border-missing detection
│ ├── borderless_tables.py ── Column alignment → borderless table detection
│ ├── geometry.py ── Clustering, coverage, geometric primitives
│ ├── extractor.py ── Grid extraction coordinator
│ ├── text_layout.py ── Text-to-cell assignment
│ └── types.py ── Shared type definitions (Word, BBox)
│
├── docs/ Documentation
│ ├── architecture.md Deep dive: why dual pipeline, design decisions
│ ├── why-this-approach.md Tradeoffs: vs LLMs, vs Docling/MinerU/Marker
│ ├── configuration.md All 30+ parameters with types and defaults
│ ├── api-reference.md Python API, JSON schema, CLI reference
│ └── research/ 12 research documents (YAML status headers)
│
└── examples/ Sample PDFs + expected JSON outputs
For a detailed architectural walkthrough, see docs/architecture.md. For design decisions and tradeoffs, see docs/why-this-approach.md.
Marks Coverage
| Mark | Syntax | Digital | Vision | Method (Digital) | Method (Vision) |
|---|---|---|---|---|---|
| Bold | `**text**` | ✅ | ✅ | Font metadata | Vision model |
| Italic | `*text*` | ✅ | ✅ | Font metadata | Vision model |
| Underline | `++text++` | ✅ | ✅ | Geometric detection | Vision model |
| Strikethrough | `~~text~~` | ✅ | ✅ | Geometric detection | Vision model + geometric fallback |
| Superscript | `^text^` | ✅ | ✅ | Font metadata | Bounding-box height heuristic |
| Monospace | `` `text` `` | ✅ | ❌ | Font name heuristic | Not available |
Inline marks within table cells are not yet extracted in either pipeline.
Limitations
- MONOSPACE: only detected in digital PDFs via font name heuristic. No visual equivalent exists for scanned pages because monospace fonts are not reliably distinguishable from proportional fonts at typical scan resolutions.
- Multi-column layouts: reading order may interleave columns on scanned pages. The heuristic pipeline benefits from native text flow analysis; the vision pipeline relies on YOLO region order, which can fail on complex layouts.
- Complex scanned tables: tables with more than ~8 rows and complex multi-level headers may lose 1-2 rows due to density-based clustering sensitivity or YOLO region boundary errors.
- Table cell marks: inline formatting (bold, italic, etc.) within individual table cells is not extracted in either pipeline.
- GPU recommendation: the vision pipeline runs on CPU but is approximately 24x slower. A CUDA-capable GPU is strongly recommended for batch processing of scanned documents.
Development
Testing suites and benchmarks live in the _dev/ directory:
python scripts/run_tests.py # All suites combined
python scripts/test_vision_vs_heuristic.py # Vision vs heuristic on digital PDFs
python scripts/test_vision_full.py # 3 suites: digital, scanned, stress
python scripts/test_complex_tables.py # 10 complex tables with merged cells
python scripts/bench_distortion.py # Robustness under geometric distortions
See _dev/README.md for detailed testing instructions and ground-truth format.
License
This project is proprietary. All rights reserved.