Document Text Recognition (docTR): Deep Learning for high-performance OCR on documents.
Project description
Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch
What you can expect from this repository:
- efficient ways to parse textual information (localize and identify each word) from your documents
- guidance on how to integrate this in your current architecture
Quick Tour
Getting your pretrained model
End-to-End OCR is achieved in docTR using a two-stage approach: text detection (localizing words), then text recognition (identifying all characters in the word). As such, you can select the architecture used for text detection and the one for text recognition from the list of available implementations.
from doctr.models import ocr_predictor
model = ocr_predictor(det_arch='db_resnet50', reco_arch='crnn_vgg16_bn', pretrained=True)
Reading files
Documents can be interpreted from PDF or images:
from doctr.io import DocumentFile
# PDF
pdf_doc = DocumentFile.from_pdf("path/to/your/doc.pdf")
# Image
single_img_doc = DocumentFile.from_images("path/to/your/img.jpg")
# Webpage (requires `weasyprint` to be installed)
webpage_doc = DocumentFile.from_url("https://www.yoursite.com")
# Multiple page images
multi_img_doc = DocumentFile.from_images(["path/to/page1.jpg", "path/to/page2.jpg"])
Putting it together
Let's use the default pretrained model for an example:
from doctr.io import DocumentFile
from doctr.models import ocr_predictor
model = ocr_predictor(pretrained=True)
# PDF
doc = DocumentFile.from_pdf("path/to/your/doc.pdf")
# Analyze
result = model(doc)
Dealing with rotated documents
Should you use docTR on documents that include rotated pages, or pages with multiple box orientations, you have several options to handle it (see the sketch after this list):
- If you only use straight document pages with straight words (horizontal, same reading direction), consider passing assume_straight_pages=True to the ocr_predictor. It will fit straight boxes directly on your page and return straight boxes, which makes it the fastest option.
- If you want the predictor to output straight boxes (no matter the orientation of your pages, the final localizations will be converted to straight boxes), pass export_as_straight_boxes=True to the predictor. Otherwise, if assume_straight_pages=False, it will return rotated bounding boxes (potentially with an angle of 0°).
- If both options are set to False, the predictor will always fit and return rotated boxes.
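As a quick illustration, here is a minimal sketch of the three configurations described above, using the keyword arguments named in the list:
from doctr.models import ocr_predictor
# Fastest option: straight pages with straight, same-direction words
model = ocr_predictor(pretrained=True, assume_straight_pages=True)
# Rotated pages, but straight boxes in the final output
model = ocr_predictor(pretrained=True, assume_straight_pages=False, export_as_straight_boxes=True)
# Rotated pages with rotated boxes in the output (angle may be 0°)
model = ocr_predictor(pretrained=True, assume_straight_pages=False)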
To interpret your model's predictions, you can visualize them interactively as follows:
# Display the result (requires matplotlib & mplcursors to be installed)
result.show()
Or even rebuild the original document from its predictions:
import matplotlib.pyplot as plt
synthetic_pages = result.synthesize()
plt.imshow(synthetic_pages[0]); plt.axis('off'); plt.show()
The ocr_predictor returns a Document object with a nested structure (with Page, Block, Line, Word, Artefact).
To get a better understanding of our document model, check our documentation.
You can also export them as a nested dict, more appropriate for JSON format:
json_output = result.export()
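Alternatively, here is a minimal sketch of walking the nested structure directly to print the recognized text line by line, using the attribute names from the document model above:
# Traverse the nested structure: Page -> Block -> Line -> Word
for page in result.pages:
    for block in page.blocks:
        for line in block.lines:
            print(" ".join(word.value for word in line.words))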
Use the KIE predictor
The KIE predictor is a more flexible predictor than plain OCR, as its detection model can detect multiple classes in a document. For example, you can have a detection model that detects just dates and addresses in a document.
The KIE predictor makes it possible to combine a multi-class detector with a recognition model, with the whole pipeline already set up for you.
from doctr.io import DocumentFile
from doctr.models import kie_predictor
# Model
model = kie_predictor(det_arch='db_resnet50', reco_arch='crnn_vgg16_bn', pretrained=True)
# PDF
doc = DocumentFile.from_pdf("path/to/your/doc.pdf")
# Analyze
result = model(doc)
predictions = result.pages[0].predictions
for class_name in predictions.keys():
    list_predictions = predictions[class_name]
    for prediction in list_predictions:
        print(f"Prediction for {class_name}: {prediction}")
The KIE predictor results for each page are in a dictionary format, with each key representing a class name and its value being the list of predictions for that class.
Installation
Prerequisites
Python 3.9 (or higher) and pip are required to install docTR.
Latest release
You can then install the latest release of the package from PyPI as follows:
pip install python-doctr
:warning: Please note that the basic installation is not standalone, as it does not provide a deep learning framework, which is required for the package to run.
We try to keep framework-specific dependencies to a minimum. You can install framework-specific builds as follows:
# for TensorFlow
pip install "python-doctr[tf]"
# for PyTorch
pip install "python-doctr[torch]"
# optional dependencies for visualization, html, and contrib modules can be installed as follows:
pip install "python-doctr[torch,viz,html,contrib]"
For MacBooks with an M1 chip, you will need some additional packages or specific versions (see the install sketch below):
- TensorFlow 2: metal plugin
- PyTorch: version >= 2.0.0
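For instance, a minimal install sketch under these assumptions; the tensorflow-macos and tensorflow-metal package names are assumptions for Apple Silicon setups and may vary with your TensorFlow version:
# TensorFlow 2 with Apple's Metal plugin (assumed package names for Apple Silicon)
pip install tensorflow-macos tensorflow-metal
# PyTorch, version 2.0.0 or newer
pip install "torch>=2.0.0"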
Developer mode
Alternatively, you can install it from source, which will require you to install Git. First clone the project repository, then install the package in editable mode:
git clone https://github.com/mindee/doctr.git
pip install -e doctr/.
Again, if you prefer to avoid the risk of missing dependencies, you can install the TensorFlow or the PyTorch build:
# for TensorFlow
pip install -e doctr/.[tf]
# for PyTorch
pip install -e doctr/.[torch]
Models architectures
Credits where it's due: this repository implements, among others, architectures from published research papers.
Text Detection
- DBNet: Real-time Scene Text Detection with Differentiable Binarization.
- LinkNet: LinkNet: Exploiting Encoder Representations for Efficient Semantic Segmentation
- FAST: FAST: Faster Arbitrarily-Shaped Text Detector with Minimalist Kernel Representation
Text Recognition
- CRNN: An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition.
- SAR: Show, Attend and Read: A Simple and Strong Baseline for Irregular Text Recognition.
- MASTER: MASTER: Multi-Aspect Non-local Network for Scene Text Recognition.
- ViTSTR: Vision Transformer for Fast and Efficient Scene Text Recognition.
- PARSeq: Scene Text Recognition with Permuted Autoregressive Sequence Models.
More goodies
Documentation
The full package documentation is available here for detailed specifications.
Demo app
A minimal demo app is provided for you to play with our end-to-end OCR models!
Live demo
Courtesy of :hugs: Hugging Face :hugs:, docTR now has a fully deployed version available on Spaces! Check it out!
Running it locally
If you prefer to use it locally, an extra dependency (Streamlit) is required.
TensorFlow version
pip install -r demo/tf-requirements.txt
Then run your app in your default browser with:
USE_TF=1 streamlit run demo/app.py
PyTorch version
pip install -r demo/pt-requirements.txt
Then run your app in your default browser with:
USE_TORCH=1 streamlit run demo/app.py
TensorFlow.js
Would you prefer to run everything in your web browser instead of having your demo actually run Python? Check out our TensorFlow.js demo to get started!
Docker container
We offer Docker container support for easy testing and deployment.
Using GPU with docTR Docker Images
The docTR Docker images are GPU-ready and based on CUDA 11.8. However, to use GPU support with these Docker images, please ensure that Docker is configured to use your GPU.
To verify and configure GPU support for Docker, please follow the instructions provided in the NVIDIA Container Toolkit Installation Guide.
Once Docker is configured to use GPUs, you can run docTR Docker containers with GPU support:
docker run -it --gpus all ghcr.io/mindee/doctr:tf-py3.8.18-gpu-2023-09 bash
Available Tags
The Docker images for docTR follow a specific tag nomenclature: <framework>-py<python_version>-<system>-<doctr_version|YYYY-MM>. Here's a breakdown of the tag structure:
- <framework>: tf (TensorFlow) or torch (PyTorch).
- <python_version>: 3.8.18, 3.9.18, or 3.10.13.
- <system>: cpu or gpu.
- <doctr_version>: a tag >= v0.7.1.
- <YYYY-MM>: a monthly build, e.g. 2023-09.
Here are examples of different image tags:

Tag | Description
---|---
tf-py3.8.18-cpu-v0.7.1 | TensorFlow with Python 3.8.18, CPU only, pinned to docTR v0.7.1.
torch-py3.9.18-gpu-2023-09 | PyTorch with Python 3.9.18, GPU support, monthly build from 2023-09.
Building Docker Images Locally
You can also build docTR Docker images locally on your computer.
docker build -t doctr .
You can specify custom Python versions and docTR versions using build arguments. For example, to build a docTR image with TensorFlow, Python version 3.9.10, and docTR version v0.7.0, run the following command:
docker build -t doctr --build-arg FRAMEWORK=tf --build-arg PYTHON_VERSION=3.9.10 --build-arg DOCTR_VERSION=v0.7.0 .
Example script
An example script is provided for a simple document analysis of a PDF or image file:
python scripts/analyze.py path/to/your/doc.pdf
All script arguments can be checked using python scripts/analyze.py --help
Minimal API integration
Looking to integrate docTR into your API? Here is a template to get you started with a fully working API using the wonderful FastAPI framework.
Deploy your API locally
Specific dependencies are required to run the API template, which you can install as follows:
cd api/
pip install poetry
make lock
pip install -r requirements.txt
You can now run your API locally:
uvicorn --reload --workers 1 --host 0.0.0.0 --port=8002 --app-dir api/ app.main:app
Alternatively, you can run the same server on a docker container if you prefer using:
PORT=8002 docker-compose up -d --build
What you have deployed
Your API should now be running locally on port 8002. Access your automatically built documentation at http://localhost:8002/redoc and enjoy your four functional routes ("/detection", "/recognition", "/ocr", "/kie"). Here is an example with Python to send a request to the OCR route:
import requests

params = {"det_arch": "db_resnet50", "reco_arch": "crnn_vgg16_bn"}
with open('/path/to/your/doc.jpg', 'rb') as f:
    files = [  # application/pdf, image/jpeg, image/png supported
        ("files", ("doc.jpg", f.read(), "image/jpeg")),
    ]
print(requests.post("http://localhost:8002/ocr", params=params, files=files).json())
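If you prefer the command line, the same request can be sketched with curl, assuming the same port, route, and query parameters as above:
curl -X POST "http://localhost:8002/ocr?det_arch=db_resnet50&reco_arch=crnn_vgg16_bn" \
  -F "files=@/path/to/your/doc.jpg;type=image/jpeg"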
Example notebooks
Looking for more illustrations of docTR features? You might want to check the Jupyter notebooks designed to give you a broader overview.
Citation
If you wish to cite this project, feel free to use this BibTeX reference:
@misc{doctr2021,
    title={docTR: Document Text Recognition},
    author={Mindee},
    year={2021},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/mindee/doctr}}
}
Contributing
If you scrolled down to this section, you most likely appreciate open source. Do you feel like extending the range of our supported characters? Or perhaps submitting a paper implementation? Or contributing in any other way?
You're in luck, we compiled a short guide (cf. CONTRIBUTING) for you to easily do so!
License
Distributed under the Apache 2.0 License. See LICENSE for more information.
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file python_doctr-0.10.0.tar.gz.
File metadata
- Download URL: python_doctr-0.10.0.tar.gz
- Upload date:
- Size: 190.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.9.20
File hashes
Algorithm | Hash digest
---|---
SHA256 | 21a234c22b04c6a2f5c8302f2667e168de50606069c974174675842c0a2ab28e
MD5 | 8f5f6b5661106ab7dd7c841d5a0511bb
BLAKE2b-256 | be01d0b59d3300f7a9d3ecbc75bdec03b0bcd536f443dc06218fd88043b1c8e8
File details
Details for the file python_doctr-0.10.0-py3-none-any.whl.
File metadata
- Download URL: python_doctr-0.10.0-py3-none-any.whl
- Upload date:
- Size: 304.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.9.20
File hashes
Algorithm | Hash digest
---|---
SHA256 | 777c221a4142b8ce5daf6fc1714b10fe8f7fe72026f09ea4d2b978c448b97ae8
MD5 | e9b360a21959c77c23d6df0a8f86a447
BLAKE2b-256 | cc7f3e685bbd5271f92de46b6346bdd96cc41ef0a97cff6fc2883ebb7f28407f