Utilities for automatic document image processing

dedoc-utils

This library contains useful utilities for automatic document image processing:

  1. Preprocessing
    • binarization
    • skew correction
  2. Text detection
  3. Line segmentation
  4. Text recognition

Installation

The library requires Tesseract OCR to be installed. To install the library itself, use the following command:

pip install dedoc-utils
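
Note that Tesseract itself is not installed by pip and has to be installed separately. For example, on Debian/Ubuntu it is usually available from the system package manager:

sudo apt-get install tesseract-ocr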

It is assumed that you already have torch and torchvision installed. If not, you can install them together with the library using the following command:

pip install "dedoc-utils[torch]"

If you cloned the repository, you can install the dependencies via pip:

pip install .

To install the torch packages as well, use:

pip install ".[torch]"
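
To check that the installation succeeded, a quick optional sanity check is to import the package, for example using the same classes as in the examples below:

python -c "from dedocutils.preprocessing import AdaptiveBinarizer, SkewCorrector"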

Basic usage

Using preprocessors

import cv2
import matplotlib.pyplot as plt

from dedocutils.preprocessing import AdaptiveBinarizer, SkewCorrector

binarizer = AdaptiveBinarizer()
skew_corrector = SkewCorrector()

# each preprocessor returns the processed image along with auxiliary parameters
image = cv2.imread("examples/before_preprocessing.jpg")
binarized_image, _ = binarizer.preprocess(image)
preprocessed_image, _ = skew_corrector.preprocess(binarized_image)

# compare the image before and after each preprocessing step
fig = plt.figure(figsize=(10, 7))
rows, columns = 1, 3

fig.add_subplot(rows, columns, 1)
plt.imshow(image)
plt.axis('off')
plt.title("Before preprocessing")

fig.add_subplot(rows, columns, 2)
plt.imshow(binarized_image)
plt.axis('off')
plt.title("After binarization")

fig.add_subplot(rows, columns, 3)
plt.imshow(preprocessed_image)
plt.axis('off')
plt.title("After preprocessing")

plt.show()  # needed to display the figure when running as a script

Using text detectors

from dedocutils.text_detection import DoctrTextDetector

text_detector = DoctrTextDetector()
# detect bounding boxes of text on the preprocessed image
bboxes = text_detector.detect(preprocessed_image)

for bbox in bboxes[:5]:
    print(bbox)

BBox(x_top_left=2415, y_top_left=3730, width=202, height=97)
BBox(x_top_left=790, y_top_left=3613, width=383, height=105)
BBox(x_top_left=1690, y_top_left=3488, width=407, height=104)
BBox(x_top_left=2171, y_top_left=3488, width=377, height=92)
BBox(x_top_left=885, y_top_left=3505, width=27, height=50)

Using text recognizers

from dedocutils.text_recognition import TesseractTextRecognizer

text_recognizer = TesseractTextRecognizer()

for bbox in bboxes[:10]:
    # crop the detected word region and recognize the text inside it
    word_image = preprocessed_image[bbox.y_top_left:bbox.y_bottom_right, bbox.x_top_left:bbox.x_bottom_right]
    text = text_recognizer.recognize(word_image, parameters=dict(language="eng"))
    print(text)

Fie-
afjefjores.
coluntur,
dicuntur
delubro
eodem
dii in
plures

Using line segmenters

In the previous example, the order of the recognized words doesn't match their order in the document, because the text detector returns bounding boxes in a non-deterministic order. In this case, one may use a line segmenter to sort the bboxes returned by the text detector.

from dedocutils.line_segmentation import ClusteringLineSegmenter

line_segmenter = ClusteringLineSegmenter()
sorted_bboxes = line_segmenter.segment(bboxes)  # a list of text lines, each a list of word bboxes
for bbox in sorted_bboxes[1]:  # recognize the words of the second line
    word_image = preprocessed_image[bbox.y_top_left:bbox.y_bottom_right, bbox.x_top_left:bbox.x_bottom_right]
    text = text_recognizer.recognize(word_image, parameters=dict(language="eng"))
    print(text)

gentes,
fimul.
obibant
munera
fumma
facra,
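
Putting everything together, the full document text can be reconstructed by recognizing every line returned by the segmenter and joining the words. The sketch below only reuses the objects defined above (sorted_bboxes, preprocessed_image, text_recognizer):

lines = []
for line_bboxes in sorted_bboxes:
    words = []
    for bbox in line_bboxes:
        word_image = preprocessed_image[bbox.y_top_left:bbox.y_bottom_right, bbox.x_top_left:bbox.x_bottom_right]
        words.append(text_recognizer.recognize(word_image, parameters=dict(language="eng")))
    lines.append(" ".join(words))

print("\n".join(lines))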
