A packaged and flexible version of the CRAFT text detector and Keras CRNN recognition model.

Project description

image-ocr

NOTE: image-ocr is an updated version of keras-ocr that works with the latest versions of Python and TensorFlow.

It works exactly the same as keras-ocr: just run `pip install image-ocr` and replace `import keras_ocr` with `import image_ocr` in your project.

It supports the new Google Colaboratory Python 3.10 backend.

Interactive examples

- Detector Training

- Recognizer Training

- Recognizer Training - Custom set

- Using

Information

This is a slightly polished and packaged version of the Keras CRNN implementation and the published CRAFT text detection model. It provides a high-level API for training a text detection and OCR pipeline.

Please see the documentation for more examples, including for training a custom model.

Getting Started

Installation

image-ocr supports Python >= 3.9

# To install from PyPI
pip install image-ocr

Using

The package ships with an easy-to-use implementation of the CRAFT text detection model from this repository and the CRNN recognition model from this repository.

Try image-ocr on Colab

import matplotlib.pyplot as plt

import image_ocr

# image-ocr will automatically download pretrained
# weights for the detector and recognizer.
pipeline = image_ocr.pipeline.Pipeline()

# Get a set of three example images
images = [
    image_ocr.tools.read(url) for url in [
        'https://upload.wikimedia.org/wikipedia/commons/thumb/4/4b/Kali_Linux_2.0_wordmark.svg/langfr-420px-Kali_Linux_2.0_wordmark.svg.png',
        'https://upload.wikimedia.org/wikipedia/commons/thumb/e/ef/Enseigne_de_pharmacie_lumineuse.jpg/180px-Enseigne_de_pharmacie_lumineuse.jpg',
        'https://upload.wikimedia.org/wikipedia/commons/thumb/a/a6/Boutique_Christian_Lacroix.jpg/330px-Boutique_Christian_Lacroix.jpg',
    ]
]

# Each list of predictions in prediction_groups is a list of
# (word, box) tuples.
prediction_groups = pipeline.recognize(images)

# Plot the predictions
fig, axs = plt.subplots(nrows=len(images), figsize=(20, 20))
for ax, image, predictions in zip(axs, images, prediction_groups):
    image_ocr.tools.drawAnnotations(image=image, predictions=predictions, ax=ax)

example of labeled image
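To work with the raw output rather than plotting it, you can iterate over the tuples directly. The sketch below uses a hand-made dummy prediction in place of real pipeline output (the word and coordinates are invented); it only illustrates the (word, box) shape, where box is a 4x2 array of corner points:

```python
import numpy as np

# Dummy data shaped like pipeline.recognize() output: one list per image,
# each containing (word, box) tuples; box is a 4x2 array of corner points.
prediction_groups = [
    [("kali", np.array([[10, 10], [90, 10], [90, 40], [10, 40]], dtype="float32"))],
]

for predictions in prediction_groups:
    for word, box in predictions:
        # Collapse the four corners into an axis-aligned bounding box.
        x_min, y_min = box.min(axis=0)
        x_max, y_max = box.max(axis=0)
        print(f"{word}: ({x_min:.0f}, {y_min:.0f}) -> ({x_max:.0f}, {y_max:.0f})")
```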

Training

Detector training example: Detector Training Colab

Recognizer training example: Recognizer Training Colab

Recognizer training with custom assets: Recognizer Training Colab

Comparing image-ocr and other OCR approaches

You may be wondering how the models in this package compare to existing cloud OCR APIs. We provide some metrics below and the notebook used to compute them using the first 1,000 images in the COCO-Text validation set. We limited it to 1,000 because the Google Cloud free tier is for 1,000 calls a month at the time of this writing. As always, caveats apply:

  • No guarantees apply to these numbers -- please beware and compute your own metrics independently to verify them. As of this writing, they should be considered a very rough first draft. Please open an issue if you find a mistake. In particular, the cloud APIs have a variety of options that one can use to improve their performance and the responses can be parsed in different ways. It is possible that I made some error in configuration or parsing. Again, please open an issue if you find a mistake!
  • We ignore punctuation and letter case because the out-of-the-box recognizer in image-ocr (provided by this independent repository) does not support either. Note that both AWS Rekognition and Google Cloud Vision support punctuation as well as upper and lowercase characters.
  • We ignore non-English text.
  • We ignore illegible text.
| model                | latency | precision | recall |
|----------------------|---------|-----------|--------|
| AWS                  | 719 ms  | 0.45      | 0.48   |
| GCP                  | 388 ms  | 0.53      | 0.58   |
| image-ocr (scale=2)  | 417 ms  | 0.53      | 0.54   |
| image-ocr (scale=3)  | 699 ms  | 0.50      | 0.59   |
  • Precision and recall were computed based on an intersection over union of 50% or higher and a text similarity to ground truth of 50% or higher.
  • image-ocr latency values were computed using a Tesla P4 GPU on Google Colab. `scale` refers to the argument provided to `image_ocr.pipeline.Pipeline()`, which determines the upscaling applied to the image prior to inference.
  • Latency for the cloud providers was measured with sequential requests, so you can obtain significant speed improvements by making multiple simultaneous API requests.
  • Each of the entries provides a link to the JSON file containing the annotations made on each pass. You can use this with the notebook to compute metrics without having to make the API calls yourself (though you are encouraged to replicate it independently)!
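For reference, the 50% intersection-over-union criterion used above can be sketched in a few lines. This is an illustrative implementation for axis-aligned (x1, y1, x2, y2) boxes, not the exact matching code from the notebook:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    intersection = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return intersection / (area_a + area_b - intersection)

# Two 10x10 boxes overlapping in a 5x10 strip share 50 of 150 total units,
# giving an IoU of roughly 0.33 -- below the 50% matching threshold.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```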

Why not compare to Tesseract? In every configuration I tried, Tesseract did very poorly on this test. Tesseract performs best on scans of books, not on incidental scene text like that in this dataset.

Advanced Configuration

By default, if a GPU is available, TensorFlow tries to grab almost all of the available video memory, which is a problem if you're running multiple models with TensorFlow and PyTorch. Setting any value for the environment variable MEMORY_GROWTH will force TensorFlow to dynamically allocate only as much GPU memory as is needed.

You can also specify a per-process limit by setting the environment variable MEMORY_ALLOCATED to a float between 0 and 1, representing the fraction of total VRAM to allocate.

To apply these changes, call image_ocr.config.configure() at the top of your file where you import image_ocr.
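Putting the two variables together, a script header might look like the sketch below (the 0.5 ratio is an arbitrary example value; the image_ocr lines are commented out so the snippet stands alone):

```python
import os

# Set these before calling image_ocr.config.configure(), which reads them.
os.environ["MEMORY_GROWTH"] = "1"       # any value enables dynamic allocation
os.environ["MEMORY_ALLOCATED"] = "0.5"  # example: cap at 50% of total VRAM

# import image_ocr
# image_ocr.config.configure()
```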

Contributing

To work on the project, start by doing the following. These instructions probably do not yet work for Windows, but if a Windows user has ideas for how to fix that, contributions would be greatly appreciated (I don't have a Windows machine to test on at the moment).

# Install local dependencies for
# code completion, etc.
make init

# Build the Docker container to run
# tests and such.
make build
  • You can get a JupyterLab server running to experiment with using make lab.
  • To run checks before committing code, you can use make format-check type-check lint-check test.
  • To view the documentation, use make docs.

To implement new features, please first file an issue proposing your change for discussion.

To report problems, please file an issue with sample code, expected results, actual results, and a complete traceback.

Troubleshooting

  • This package is installing opencv-python-headless but I would prefer a different opencv flavor. This is due to aleju/imgaug#473. You can uninstall the unwanted OpenCV flavor after installing image-ocr. We apologize for the inconvenience.
