Repository for Document AI

A Document AI Package

deepdoctection is a Python library that orchestrates document extraction and document layout analysis tasks using deep learning models. It does not implement models itself but enables you to build pipelines on top of well-established libraries for object detection, OCR and selected NLP tasks, and it provides an integrated framework for fine-tuning, evaluating and running models. For more specific text processing tasks, use one of the many other great NLP libraries.

deepdoctection focuses on applications and is made for those who want to solve real-world problems related to document extraction from PDFs or scans in various image formats.

Check the demo of a document layout analysis pipeline with OCR on Hugging Face Spaces.

Overview

deepdoctection provides model wrappers of supported libraries for various tasks so that they can be integrated into pipelines. Its core functionality does not depend on any specific deep learning library. Selected models for the following tasks are currently supported:

  • Document layout analysis including table recognition in Tensorflow with Tensorpack, or PyTorch with Detectron2,
  • OCR with support for Tesseract, DocTr (Tensorflow and PyTorch implementations available) and a wrapper to an API for a commercial solution,
  • Text mining for native PDFs with pdfplumber,
  • Language detection with fastText,
  • Deskewing and rotating images with jdeskew,
  • Document and token classification with all LayoutLM models provided by the Transformers library (yes, you can use any LayoutLM model with any of the provided OCR or pdfplumber tools straight away!),
  • Table detection and table structure recognition with table-transformer,
  • A small dataset for token classification and many new tutorials that show how to train and evaluate models on this dataset using LayoutLMv1, LayoutLMv2, LayoutXLM and LayoutLMv3,
  • Comprehensive configuration of the analyzer, e.g. choosing different models, output parsing and OCR selection (see the sketch after this list). Check this notebook or the docs for more information,
  • Document layout analysis and table recognition now also run with Torchscript (CPU), so Detectron2 is no longer required for basic inference,
  • Additional angle predictors for determining the rotation of a document based on Tesseract and DocTr (not contained in the built-in analyzer),
  • Token classification with LiLT via transformers. We have added a model wrapper for token classification with LiLT and some LiLT models to the model catalog that look promising, especially if you want to train a model on non-English data. The training script for LayoutLM can be used for LiLT as well, and we will provide a notebook on how to train a model on a custom dataset soon.
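
As a rough sketch of the configuration options mentioned above, the built-in analyzer can be customized via the config_overwrite argument of dd.get_dd_analyzer. The keys shown below (USE_TABLE_SEGMENTATION, OCR.USE_TESSERACT, OCR.USE_DOCTR) are assumptions based on recent versions and may differ in yours; check the configuration notebook or the docs for the authoritative list.

import deepdoctection as dd

# Sketch only: the configuration keys are version-dependent assumptions.
analyzer = dd.get_dd_analyzer(
    config_overwrite=[
        "USE_TABLE_SEGMENTATION=True",  # keep table structure recognition enabled
        "OCR.USE_TESSERACT=False",      # switch the OCR engine ...
        "OCR.USE_DOCTR=True",           # ... from Tesseract to DocTr
    ]
)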

On top of that, deepdoctection provides methods for pre-processing model inputs, such as cropping or resizing, and for post-processing results, such as validating duplicate outputs, relating words to detected layout segments or ordering words into contiguous text. You will get output in JSON format that you can customize even further yourself.
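
The sketch below illustrates what consuming this output can look like: layout segments carry the words assigned to them in reading order, and the page result can be dumped to JSON. The attribute and method names (page.layouts, layout.category_name, page.save with image_to_json) follow the introduction notebook but are assumptions that may differ between versions.

import deepdoctection as dd

analyzer = dd.get_dd_analyzer()
df = analyzer.analyze(path="/path/to/your/doc.pdf")
df.reset_state()
page = next(iter(df))

# Words have been related to layout segments and ordered into text.
for layout in page.layouts:
    print(layout.category_name, layout.text)

# Persist the full result as JSON; the save signature is an assumption
# based on the introduction notebook and may differ in your version.
page.save(image_to_json=True, path="/path/to/output.json")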

Have a look at the introduction notebook in the notebook repo for an easy start.

Check the release notes for recent updates.

Models

deepdoctection and its supporting libraries provide pre-trained models that are in most cases available on the Hugging Face Model Hub and will be downloaded automatically once requested. For instance, you can find pre-trained object detection models from the Tensorpack or Detectron2 framework for coarse layout analysis, table cell detection and table recognition.
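
As a small sketch of how this works in practice, registered models can be listed and fetched into the local cache via the model catalog and download manager. The class names follow the deepdoctection API; the concrete model identifier used below is an assumption and may not be present in every version's catalog.

import deepdoctection as dd

# Names of all registered model profiles (layout, table, OCR, ...).
print(dd.ModelCatalog.get_profile_list()[:10])

# Download weights and configs to the local cache if they are not already there.
# The model name is an illustrative assumption taken from the catalog.
weights_path = dd.ModelDownloadManager.maybe_download_weights_and_configs(
    "layout/d2_model_0829999_layout_inf_only.pt"
)
print(weights_path)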

Datasets and training scripts

Training is a substantial part of getting pipelines ready for a specific domain, be it document layout analysis, document classification or NER. deepdoctection provides training scripts for models that are based on trainers developed by the library that hosts the model code. Moreover, deepdoctection hosts code for some well-established datasets like Publaynet that makes it easy to experiment. It also contains mappings from widely used data formats like COCO, and it has a dataset framework (akin to datasets) so that setting up training on a custom dataset becomes very easy. This notebook shows you how to do this.
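
A minimal sketch of the dataset framework, assuming the built-in Publaynet wrapper: the dataset name, the build arguments and the annotation accessors follow the dataset tutorial and may differ between versions.

import deepdoctection as dd

publaynet = dd.get_dataset("publaynet")  # built-in dataset wrapper (the data itself must be downloaded separately)

# Stream a handful of validation samples; split/max_datapoints are assumed build arguments.
df = publaynet.dataflow.build(split="val", max_datapoints=10)
df.reset_state()

for datapoint in df:
    # Each datapoint is an image record with ground-truth layout annotations.
    print(datapoint.file_name, [ann.category_name for ann in datapoint.get_annotation()])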

Evaluation

deepdoctection comes equipped with a framework that allows you to evaluate the predictions of a single model or of several models chained in a pipeline against some ground truth. Check here how it is done.
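
The sketch below outlines the evaluation workflow: a dataset with ground truth, a pipeline component acting as predictor and a metric are passed to the evaluator. The registry key "coco", the use of the analyzer's first pipeline component as the layout predictor and the run arguments are assumptions based on the evaluation tutorial and may differ between versions.

import deepdoctection as dd

publaynet = dd.get_dataset("publaynet")        # dataset providing ground truth
coco_metric = dd.metric_registry.get("coco")   # assumed registry key for the COCO-style mAP metric

# Re-use a component of the built-in analyzer as the predictor under evaluation.
# Which component performs layout detection depends on your configuration.
analyzer = dd.get_dd_analyzer()
layout_component = analyzer.pipe_component_list[0]

evaluator = dd.Evaluator(publaynet, layout_component, coco_metric)
results = evaluator.run(max_datapoints=100)    # evaluate on a small subset
print(results)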

Inference

Once a pipeline is set up, it takes just a few lines of code to instantiate it, and a simple for loop processes all pages through the pipeline.

import deepdoctection as dd
from IPython.core.display import HTML
from matplotlib import pyplot as plt

analyzer = dd.get_dd_analyzer()  # instantiate the built-in analyzer similar to the Hugging Face space demo

df = analyzer.analyze(path = "/path/to/your/doc.pdf")  # setting up pipeline
df.reset_state()                 # Trigger some initialization

doc = iter(df)
page = next(doc) 

image = page.viz()
plt.figure(figsize = (25,17))
plt.axis('off')
plt.imshow(image)

HTML(page.tables[0].html)

print(page.text)

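The snippet above inspects a single page; the sketch below runs the same pipeline over the whole document in a for loop. The page_number attribute is an assumption based on the standard Page API and may differ between versions.

df = analyzer.analyze(path="/path/to/your/doc.pdf")
df.reset_state()

for page in df:
    # Each iteration yields a fully processed Page object.
    print(page.page_number, len(page.tables), page.text[:100])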

Documentation

There is extensive documentation available containing tutorials, design concepts and the API. We want to present things as comprehensively and understandably as possible. However, we are aware that there are still many areas where significant improvements can be made in terms of clarity, grammar and correctness. We welcome every hint and comment that improves the quality of the documentation.

Requirements

[Requirements overview diagram]

Everything listed in the overview diagram below the deepdoctection layer is a necessary requirement and has to be installed separately.

  • Linux or macOS. (Windows is not supported but there is a Dockerfile available)

  • Python >= 3.9

  • PyTorch >= 1.13 or 2.11 <= Tensorflow < 2.16 (for lower Tensorflow versions the code will only run on a GPU). In general, a GPU is required if you want to train or fine-tune models.

  • With respect to the Deep Learning framework, you must decide between Tensorflow and PyTorch.

  • Tesseract OCR engine will be used through a Python wrapper. The core engine has to be installed separately.

  • For release v.0.34.0 and below, deepdoctection uses Python wrappers for Poppler to convert PDF documents into images. From release v.0.35.0 on, this dependency is optional.

The following overview shows the availability of the models in conjunction with the DL framework.

Task                                        | PyTorch | Torchscript   | Tensorflow
Layout detection via Detectron2/Tensorpack  | ✅      | ✅ (CPU only) | ✅ (GPU only)
Table recognition via Detectron2/Tensorpack | ✅      | ✅ (CPU only) | ✅ (GPU only)
Table transformer via Transformers          | ✅      | ❌            | ❌
DocTr                                       | ✅      | ❌            | ✅
LayoutLM (v1, v2, v3, XLM) via Transformers | ✅      | ❌            | ❌

Installation

We recommend using a virtual environment. You can install the package via pip or from source.

Install with pip from PyPi

Minimal installation

If you want to get started with a minimal setup (e.g. running the deepdoctection analyzer with the default configuration or trying the 'Get started' notebook), install deepdoctection with

pip install deepdoctection

If you want to use the Tensorflow framework, please install Tensorpack separately. Detectron2 will not be installed, and layout and table recognition models will run with Torchscript on the CPU.

Full installation

The following installation will give you ALL models available within the chosen Deep Learning framework, as well as all models that are independent of Tensorflow/PyTorch. Please note that the dependencies are very complex. We try hard to keep the requirements up to date, though.

For Tensorflow, run

pip install deepdoctection[tf]

For PyTorch,

first install Detectron2 separately, as it is not distributed via PyPI. Check the instructions here. Then run

pip install deepdoctection[pt]

This will install deepdoctection with all dependencies listed above the deepdoctection layer. Use this setting if you want to get started or explore all features.

If you want more control over your installation and are looking for fewer dependencies, install deepdoctection with the basic setup only.

pip install deepdoctection

This will ignore all model libraries (the layers above the deepdoctection layer in the diagram) and you will be responsible for installing them yourself. Note that you will not be able to run any pipeline with this setup.

For further information, please consult the full installation instructions.

Installation from source

Download the repository or clone via

git clone https://github.com/deepdoctection/deepdoctection.git

To get started with Tensorflow, run:

cd deepdoctection
pip install ".[tf]"

Installing the full PyTorch setup from source will also install Detectron2 for you:

cd deepdoctection
pip install ".[source-pt]"

Running a Docker container from Docker hub

Starting from release v.0.27.0, pre-built Docker images can be downloaded from Docker Hub.

docker pull deepdoctection/deepdoctection:<release_tag> 

To start the container, you can use the Docker compose file ./docker/pytorch-gpu/docker-compose.yaml. In the .env file provided, specify the host directory where deepdoctection's cache should be stored. This directory will be mounted. Additionally, specify a working directory to mount files to be processed into the container.

docker compose up -d

will start the container.

Credits

We thank all libraries that provide high-quality code and pre-trained models. Without them, it would have been impossible to develop this framework.

Problems

We try hard to eliminate bugs, but we also know that the code is not free of issues. We welcome all issues relevant to this repo and try to address them as quickly as possible. Bug fixes or enhancements will be deployed in a new release every 10 to 12 weeks.

If you like deepdoctection ...

...you can easily support the project by making it more visible. Leaving a star or a recommendation will help.

License

Distributed under the Apache 2.0 License. Check LICENSE for additional information.

