
Python library for document processing

Project description

Inkwell

Quickstart on Colab

Overview

Inkwell is a modular Python library for extracting information from PDF documents with state-of-the-art Vision Language Models. We use layout understanding models to improve the accuracy of the Vision Language Models.

Inkwell uses the following models, with more integrations in the works:

  • Layout Detection: Faster RCNN, LayoutLMv3, Paddle
  • Table Detection: Table Transformer
  • Table Data Extraction: Phi3.5-Vision, Qwen2 VL 2B, Table Transformer, OpenAI GPT4o Mini
  • OCR: Tesseract, PaddleOCR, Phi3.5-Vision, Qwen2 VL 2B

Installation

pip install py-inkwell

In addition, install detectron2

pip install git+https://github.com/facebookresearch/detectron2.git

Install Tesseract

For Ubuntu -

sudo apt install tesseract-ocr
sudo apt install libtesseract-dev

and for macOS -

brew install tesseract
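
To confirm that the Tesseract binary is actually on your PATH before running the pipeline, a quick standard-library check from Python looks like this (a minimal sketch, not part of Inkwell) -

import shutil

# OCR with the Tesseract backend will fail at runtime if the binary is missing
if shutil.which("tesseract") is None:
    raise RuntimeError("tesseract not found - install it with apt or brew as shown above")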

For GPUs, install Flash Attention and vLLM for faster inference.

pip install flash-attn --no-build-isolation
pip install vllm
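
flash-attn and vllm only help when a CUDA-capable GPU is visible to PyTorch (which is pulled in by detectron2), so a quick sanity check before installing them is worthwhile -

import torch

# Skip flash-attn and vllm on CPU-only machines
print("CUDA available:", torch.cuda.is_available())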

Basic Usage

Parse Pages

from inkwell.pipeline import Pipeline

pipeline = Pipeline()
document = pipeline.process("/path/to/file.pdf")

Extract Page Elements

pages = document.pages

Every Page has the following fragment objects -

  1. Figures
  2. Tables
  3. Text
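
For example, to see how many fragments of each type were detected on every page, you can iterate over the pages returned above (this sketch assumes the fragment accessors shown in the next sections return ordinary sequences) -

for i, page in enumerate(pages):
    print(
        f"Page {i}: "
        f"{len(page.figure_fragments())} figures, "
        f"{len(page.table_fragments())} tables, "
        f"{len(page.text_fragments())} text blocks"
    )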

Figures

Each figure fragment's content has the following attributes -

  1. bbox - The bounding box of the figure
  2. text - The text in the figure, extracted using OCR
  3. image - The cropped image of the figure

figures = page.figure_fragments()

for figure in figures:
    figure_image = figure.content.image 
    figure_bbox = figure.content.bbox 
    figure_text = figure.content.text
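
If you want to persist the cropped figures for inspection, a minimal sketch looks like the following; it assumes figure.content.image is either a PIL Image or a NumPy array, since the exact type is not documented here -

from pathlib import Path

import numpy as np
from PIL import Image

out_dir = Path("figures")
out_dir.mkdir(exist_ok=True)

for i, figure in enumerate(figures):
    img = figure.content.image
    # Assumption: the crop is either a PIL Image or a NumPy array
    if isinstance(img, np.ndarray):
        img = Image.fromarray(img)
    img.save(out_dir / f"figure_{i}.png")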

Tables

Each table fragment's content has the following attributes -

  1. data - The data in the table, extracted using the Table Extractor
  2. bbox - The bounding box of the table
  3. image - The cropped image of the table

tables = page.table_fragments()

for table in tables:
    table_data = table.content.data
    table_bbox = table.content.bbox
    table_image = table.content.image
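
The exact format of table.content.data depends on the extractor backend. If it comes back as a list of rows or a column dict, a quick way to eyeball it is to load it into pandas - this is a sketch, not part of the Inkwell API -

import pandas as pd

for table in tables:
    data = table.content.data
    # Assumption: data is a list of rows or a column dict that pandas can ingest;
    # fall back to printing the raw structure otherwise.
    try:
        print(pd.DataFrame(data).head())
    except ValueError:
        print(data)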

Text

Each text fragment's content has the following attributes -

  1. text - The text in the text block
  2. bbox - The bounding box of the text block
  3. image - The image of the text block

text_blocks = page.text_fragments()

for text_block in text_blocks:
    text_block_text = text_block.content.text
    text_block_bbox = text_block.content.bbox
    text_block_image = text_block.content.image
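
To reassemble the plain text of a page, you can join the text blocks; note that the block order follows whatever text_fragments() returns, which may not always match reading order -

page_text = "\n".join(text_block.content.text for text_block in text_blocks)
print(page_text)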

Complete Example

We will take a sample PDF and extract text, tables and figures from it separately.

from inkwell.pipeline import Pipeline

pipeline = Pipeline()
document = pipeline.process("/path/to/file.pdf")
pages = document.pages

for page in pages:

    figures = page.figure_fragments()
    tables = page.table_fragments()
    text_blocks = page.text_fragments()

    # Check the content of the image fragments
    for figure in figures:
        figure_image = figure.content.image
        figure_text = figure.content.text
    
    # Check the content of the table fragments
    for table in tables:
        table_image = table.content.image
        table_data = table.content.data

    # Check the content of the text blocks
    for text_block in text_blocks:
        text_block_image = text_block.content.image
        text_block_text = text_block.content.text
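
As a follow-up, the extracted content can be serialized for downstream use. The sketch below writes per-page text and table data to a JSON file; the output field names are illustrative, not part of Inkwell -

import json

results = []
for page_number, page in enumerate(pages):
    results.append({
        "page": page_number,
        "text": [t.content.text for t in page.text_fragments()],
        "tables": [t.content.data for t in page.table_fragments()],
    })

# default=str guards against table data that is not natively JSON-serializable
with open("extracted.json", "w") as f:
    json.dump(results, f, indent=2, default=str)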

Using Qwen2/Phi3.5/OpenAI Vision Models

We have defined a default config class here. You can add vision-language models to the config to use them instead of the default models.

from inkwell.pipeline import DefaultPipelineConfig, Pipeline
from inkwell.ocr import OCRType
from inkwell.table_extractor import TableExtractorType

# using Qwen2 2B Vision OCR and Table Extractor
config = DefaultPipelineConfig(
    ocr_detector=OCRType.QWEN2_2B_VISION,
    table_extractor=TableExtractorType.QWEN2_2B_VISION
) 

# using Phi3.5 Vision OCR and Table Extractor
config = DefaultPipelineConfig(
    ocr_detector=OCRType.PHI3_VISION,
    table_extractor=TableExtractorType.PHI3_VISION
) 

# using OpenAI GPT4o Mini OCR and Table Extractor (Requires API Key)
config = DefaultPipelineConfig(
    ocr_detector=OCRType.OPENAI_GPT4O_MINI,
    table_extractor=TableExtractorType.OPENAI_GPT4O_MINI
) 

pipeline = Pipeline(config=config)
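
The OpenAI GPT4o Mini backend needs an API key. Assuming it is picked up from the standard OPENAI_API_KEY environment variable (check the library's documentation for the exact mechanism), set it before constructing the pipeline -

import os

# Assumption: the OpenAI backend reads the standard OPENAI_API_KEY environment variable
os.environ["OPENAI_API_KEY"] = "sk-..."  # or export it in your shell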

Advanced Customizations

You can add custom detectors and other components to the pipeline yourself - follow the instructions in the Custom Components notebook

Acknowledgements

We derived inspiration from several open-source libraries in our implementation, like Layout Parser and Deepdoctection. We would like to thank the contributors to these libraries for their work.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

py_inkwell-0.0.15.tar.gz (19.2 MB)

Uploaded Source

Built Distribution

py_inkwell-0.0.15-py3-none-any.whl (19.2 MB)

Uploaded Python 3

File details

Details for the file py_inkwell-0.0.15.tar.gz.

File metadata

  • Download URL: py_inkwell-0.0.15.tar.gz
  • Upload date:
  • Size: 19.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.10.12 Linux/6.2.0-37-generic

File hashes

Hashes for py_inkwell-0.0.15.tar.gz
Algorithm Hash digest
SHA256 bf7d39f5d251c8f7071342c0bfa62f482106456f577e5100b2318affada32a99
MD5 c8a32fe6f748eb23019bdd7a3f6e1a0d
BLAKE2b-256 9bbd8382c2753abd8ab6cbe21f78d90f960683951175e37b210bd2d628e70fb9

See more details on using hashes here.

File details

Details for the file py_inkwell-0.0.15-py3-none-any.whl.

File metadata

  • Download URL: py_inkwell-0.0.15-py3-none-any.whl
  • Upload date:
  • Size: 19.2 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.10.12 Linux/6.2.0-37-generic

File hashes

Hashes for py_inkwell-0.0.15-py3-none-any.whl
Algorithm Hash digest
SHA256 ece5b006418837ca7391e74c07b1f5dca386336e6ee2c18e673c3cfe3e2f126b
MD5 b22f15e961cf1ea29be8a6f7d65bdccd
BLAKE2b-256 ba97aaac282cc55ba8d84e44e2158a051da271a9bc631b6370e6d4a2e7aa00d2

See more details on using hashes here.
