xgen-doc2chunk

Convert raw documents into AI-understandable context with intelligent text extraction, table detection, and semantic chunking.

xgen-doc2chunk is a document processing library that converts raw documents into AI-understandable context. It analyzes, restructures, and normalizes content so that language models can reason over documents with higher accuracy and consistency.

Features

  • Multi-format Support: Process a wide variety of document formats including:

    • PDF (with table detection, OCR fallback, and complex layout handling)
    • Microsoft Office: DOCX, DOC, PPTX, PPT, XLSX, XLS
    • Korean documents: HWP, HWPX (Hangul Word Processor)
    • Text formats: TXT, MD, RTF, CSV, HTML
    • Code files: Python, JavaScript, TypeScript, and 20+ languages
  • Intelligent Text Extraction:

    • Preserves document structure (headings, paragraphs, lists)
    • Extracts tables as HTML with proper rowspan/colspan handling
    • Handles merged cells and complex table layouts
    • Extracts and processes inline images
  • OCR Integration:

    • Pluggable OCR engine architecture
    • Supports OpenAI, Anthropic, Google Gemini, and vLLM backends
    • Automatic OCR fallback for scanned documents or image-based PDFs
  • Smart Chunking:

    • Semantic text chunking with configurable size and overlap
    • Table-aware chunking that preserves table integrity
    • Protected regions for code blocks and special content
  • Metadata Extraction:

    • Extracts document metadata (title, author, creation date, etc.)
    • Formats metadata in a structured, parseable format
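The effect of the chunk size and overlap parameters can be sketched with a naive fixed-window chunker. This is an illustration of the parameters only, not the library's actual algorithm, which additionally respects semantic boundaries, tables, and protected regions:

```python
def sliding_chunks(text, chunk_size=1000, chunk_overlap=200):
    """Naive fixed-window chunking: each chunk begins
    chunk_size - chunk_overlap characters after the previous one,
    so consecutive chunks share chunk_overlap characters."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# 2500 characters with size=1000, overlap=200 produces chunks
# starting at offsets 0, 800, 1600, and 2400.
chunks = sliding_chunks("a" * 2500, chunk_size=1000, chunk_overlap=200)
```

A semantic chunker would additionally avoid cutting mid-sentence or mid-table; the fixed-window version above only shows how size and overlap interact.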

Installation

pip install xgen-doc2chunk

Or using uv:

uv add xgen-doc2chunk

Quick Start

Basic Usage

from xgen_doc2chunk import DocumentProcessor

# Create processor instance
processor = DocumentProcessor()

# Extract text from a document
text = processor.extract_text("document.pdf")
print(text)

# Extract text and chunk in one step
result = processor.extract_chunks(
    "document.pdf",
    chunk_size=1000,
    chunk_overlap=200
)

# Access chunks
for i, chunk in enumerate(result.chunks):
    print(f"Chunk {i + 1}: {chunk[:100]}...")

# Save chunks to markdown file
result.save_to_md("output/chunks.md")

With OCR Processing

from xgen_doc2chunk import DocumentProcessor
from xgen_doc2chunk.ocr.ocr_engine.openai_ocr import OpenAIOCREngine

# Initialize OCR engine
ocr_engine = OpenAIOCREngine(api_key="sk-...", model="gpt-4o")

# Create processor with OCR
processor = DocumentProcessor(ocr_engine=ocr_engine)

# Extract text with OCR processing enabled
text = processor.extract_text(
    "scanned_document.pdf",
    ocr_processing=True
)

Supported Formats

| Category     | Extensions |
|--------------|------------|
| Documents    | .pdf, .docx, .doc, .pptx, .ppt, .hwp, .hwpx |
| Spreadsheets | .xlsx, .xls, .csv, .tsv |
| Text         | .txt, .md, .rtf |
| Web          | .html, .htm, .xml |
| Code         | .py, .js, .ts, .java, .cpp, .c, .go, .rs, and more |
| Config       | .json, .yaml, .yml, .toml, .ini, .env |
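For batch jobs it can be useful to pre-filter a directory by extension before handing files to the processor. Note that SUPPORTED_EXTENSIONS below is an illustrative subset assembled from the table above, not a constant exported by the library:

```python
from pathlib import Path

# Illustrative subset of the extensions listed above; the library
# itself decides support internally when a file is processed.
SUPPORTED_EXTENSIONS = {
    ".pdf", ".docx", ".doc", ".pptx", ".ppt", ".hwp", ".hwpx",
    ".xlsx", ".xls", ".csv", ".tsv", ".txt", ".md", ".rtf",
    ".html", ".htm", ".xml", ".py", ".js", ".ts",
    ".json", ".yaml", ".yml", ".toml", ".ini",
}

def processable_files(root):
    """Return files under `root` whose extension appears supported."""
    return [p for p in sorted(Path(root).rglob("*"))
            if p.is_file() and p.suffix.lower() in SUPPORTED_EXTENSIONS]
```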

Architecture

libs/
├── core/
│   ├── document_processor.py    # Main entry point
│   ├── processor/               # Format-specific handlers
│   │   ├── pdf_handler.py       # PDF processing with V4 engine
│   │   ├── docx_handler.py      # DOCX processing
│   │   ├── ppt_handler.py       # PowerPoint processing
│   │   ├── excel_handler.py     # Excel processing
│   │   ├── hwp_processor.py     # HWP 5.0 OLE processing
│   │   ├── hwpx_processor.py    # HWPX (ZIP/XML) processing
│   │   └── ...
│   └── functions/
│       └── img_processor.py     # Image handling utilities
├── chunking/
│   ├── chunking.py              # Main chunking interface
│   ├── text_chunker.py          # Text-based chunking
│   ├── table_chunker.py         # Table-aware chunking
│   └── page_chunker.py          # Page-based chunking
└── ocr/
    ├── base.py                  # OCR base class
    ├── ocr_processor.py         # OCR processing utilities
    └── ocr_engine/              # OCR engine implementations
        ├── openai_ocr.py
        ├── anthropic_ocr.py
        ├── gemini_ocr.py
        └── vllm_ocr.py
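The ocr_engine/ layout reflects the pluggable-engine design mentioned under Features: each backend implements a common base class. The snippet below is a generic sketch of that pattern using hypothetical names (OCREngine, recognize); the library's actual interface lives in ocr/base.py and may differ:

```python
from abc import ABC, abstractmethod

class OCREngine(ABC):
    """Sketch of a pluggable-engine interface. Each backend
    (OpenAI, Anthropic, Gemini, vLLM) would subclass this."""

    @abstractmethod
    def recognize(self, image_bytes: bytes) -> str:
        """Return the text recognized in the given image."""

class DummyOCREngine(OCREngine):
    """Stand-in backend: a real engine would send the image
    to a vision model and return its transcription."""

    def recognize(self, image_bytes: bytes) -> str:
        return f"<{len(image_bytes)} bytes of image>"

engine = DummyOCREngine()
```

The processor only needs the base-class contract, so swapping backends is a constructor argument rather than a code change.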

Requirements

  • Python 3.12+
  • Required dependencies are automatically installed (see pyproject.toml)

System Dependencies

For full functionality, you may need:

  • Tesseract OCR: For local OCR fallback
  • LibreOffice: For DOC/RTF conversion (optional)
  • Poppler: For PDF image extraction

Configuration

# Custom configuration
config = {
    "pdf": {
        "extract_images": True,
        "ocr_fallback": True,
    },
    "chunking": {
        "default_size": 1000,
        "default_overlap": 200,
    }
}

processor = DocumentProcessor(config=config)
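A config dict like the one above is typically overlaid onto library defaults, with user keys winning at each nesting level. A generic sketch of that kind of nested merge (illustrative only; deep_merge is not part of the library's API, and the exact schema the processor accepts may differ):

```python
def deep_merge(defaults, overrides):
    """Recursively overlay `overrides` onto `defaults`,
    returning a new dict and leaving both inputs unchanged."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

DEFAULTS = {"chunking": {"default_size": 1000, "default_overlap": 200}}
user = {"chunking": {"default_overlap": 100}}
merged = deep_merge(DEFAULTS, user)
# merged["chunking"] keeps default_size=1000 but takes default_overlap=100
```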

License

Apache License 2.0 - see LICENSE for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
