
Kiri OCR 📄


Kiri OCR is a lightweight OCR library for English and Khmer documents. It provides document-level text detection, recognition, and rendering in a compact package.

🚀 Try the Live Demo

A hosted demo is available on Hugging Face Spaces.

✨ Key Features

  • Lightweight: Compact model optimized for speed and efficiency.
  • Bilingual: Native support for English, Khmer, and mixed-language text.
  • Document Processing: Automatic text line and word detection.
  • Easy to Use: Simple Python API.

📊 Dataset

The model is trained on the mrrtmob/khmer_english_ocr_image_line dataset, which contains 12 million synthetic images of Khmer and English text lines.

📈 Benchmark

Results on synthetic test images (10 popular fonts):

(Benchmark graph and table)

📦 Installation

Install easily via pip:

pip install kiri-ocr

Or install from source:

git clone https://github.com/mrrtmob/kiri-ocr.git
cd kiri-ocr
pip install .

💻 Usage

CLI Tool (Inference)

Run OCR on an image and save results:

kiri-ocr predict path/to/document.jpg --output results/

(Or simply kiri-ocr path/to/document.jpg)

Python API

from kiri_ocr import OCR

# Initialize (loads from Hugging Face automatically)
ocr = OCR()

# Extract text
text, results = ocr.extract_text('document.jpg')
print(text)
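The `extract_text` call above is easy to wrap in a batch loop over a folder of scans. A minimal sketch (`collect_images` and `batch_ocr` are illustrative helpers, not part of the kiri-ocr API):

```python
from pathlib import Path

try:
    from kiri_ocr import OCR  # pip install kiri-ocr
except ImportError:
    OCR = None  # allows the helpers below to be defined without the package

def collect_images(folder):
    """Collect image files in a stable, sorted order for batch OCR."""
    exts = {".png", ".jpg", ".jpeg", ".bmp", ".tiff"}
    return sorted(p for p in Path(folder).iterdir() if p.suffix.lower() in exts)

def batch_ocr(folder, out_dir="results"):
    """Run extract_text on every image and write one .txt file per input."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    ocr = OCR()  # loads the model from Hugging Face on first use
    for img in collect_images(folder):
        text, _results = ocr.extract_text(str(img))
        (out / f"{img.stem}.txt").write_text(text, encoding="utf-8")
```

With kiri-ocr installed, `batch_ocr("scans/")` writes one UTF-8 text file per image into `results/`.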

🎓 Training a New Model

Follow this guide to train a custom model from scratch.

Step 1: Generate Training Data

Create synthetic training images from a text file.

  1. Prepare text file: Create data/textlines.txt with your training text (one sentence per line).

  2. Generate dataset:

    kiri-ocr generate \
        --train-file data/textlines.txt \
        --output data \
        --fonts-dir fonts \
        --augment 1 \
        --random-augment
    
    • --fonts-dir: Directory containing .ttf files (Khmer/English fonts).
    • --augment: How many variations to generate per line (e.g., 2).
    • --random-augment: Apply random noise/rotation even if augment is 1.
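Preparing data/textlines.txt is plain text wrangling: one sentence per line, no blanks or duplicates. A hedged sketch of such a helper (`write_textlines` is illustrative, not part of kiri-ocr):

```python
from pathlib import Path

def write_textlines(sentences, out_path="data/textlines.txt"):
    """Write one sentence per line, skipping blanks and duplicates."""
    seen, lines = set(), []
    for s in sentences:
        s = s.strip()
        if s and s not in seen:
            seen.add(s)
            lines.append(s)
    path = Path(out_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return len(lines)
```

Khmer and English lines can be mixed freely in the same file, since the generator renders each line with the fonts in --fonts-dir.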

Custom Dataset Structure

If you have your own data (not generated), organize it as follows:

data/
  ├── train/
  │   ├── labels.txt       # Tab-separated: filename <tab> text
  │   └── images/          # Image files
  │       ├── img_001.png
  │       ├── img_002.jpg
  │       └── ...
  └── val/
      ├── labels.txt
      └── images/

Format of labels.txt:

img_001.png    Hello World
img_002.jpg    This is a test

Note: Images must be in an images/ subdirectory relative to the labels.txt file.
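Before training, it is worth verifying that every filename referenced in labels.txt actually exists under images/. A minimal sketch of such a check (`check_labels` is an illustrative helper, not a kiri-ocr function):

```python
from pathlib import Path

def check_labels(labels_path):
    """Return (n_lines, missing) for a labels.txt in the layout above.

    Each non-empty line is 'filename<TAB>text'; images are expected
    in an images/ directory next to labels.txt.
    """
    labels_path = Path(labels_path)
    images_dir = labels_path.parent / "images"
    missing, n = [], 0
    for line in labels_path.read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue
        fname, _, _text = line.partition("\t")
        n += 1
        if not (images_dir / fname).is_file():
            missing.append(fname)
    return n, missing
```

Run it once on data/train/labels.txt and data/val/labels.txt; a non-empty `missing` list usually means a filename typo or an image left outside images/.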

Step 2: Train the Model

You can train using CLI arguments or a configuration file.

Option A: Using Configuration File (Recommended)

  1. Generate default config:
    kiri-ocr init-config -o config.json
    
  2. Edit config.json to adjust hyperparameters (epochs, batch size, etc.).
  3. Start training:
    kiri-ocr train --config config.json
    

Option B: Using CLI Arguments

kiri-ocr train \
    --train-labels data/train/labels.txt \
    --val-labels data/val/labels.txt \
    --epochs 100 \
    --batch-size 32 \
    --device cuda

Option C: Training with Hugging Face Dataset

You can train directly using a dataset from Hugging Face Hub. The dataset should contain image and text columns.

kiri-ocr train \
    --hf-dataset mrrtmob/km_en_image_line \
    --epochs 50 \
    --batch-size 32

Advanced HF Options:

  • --hf-train-split: Specify training split name (default: "train").
  • --hf-val-split: Specify validation split name. If not provided, it tries "validation", "val", "test", or automatically splits the training set.
  • --hf-val-percent: Percentage of training data to use for validation if no validation split is found (default: 0.1 for 10%).
  • --hf-image-col: Column name for images (default: "image").
  • --hf-text-col: Column name for text labels (default: "text").
  • --hf-subset: Dataset configuration/subset name (optional).
  • --hf-streaming: Stream the dataset instead of downloading it fully.
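The validation-split fallback described for --hf-val-split can be sketched in a few lines (illustrative only; this mirrors the documented behavior, not kiri-ocr's actual implementation):

```python
def pick_val_split(available, requested=None):
    """Choose a validation split following the documented fallback order."""
    if requested:
        return requested
    for name in ("validation", "val", "test"):
        if name in available:
            return name
    # None means: carve --hf-val-percent out of the training split instead
    return None
```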

To use a specific subset/config (if the dataset has multiple):

kiri-ocr train \
    --hf-dataset mrrtmob/km_en_image_line \
    --hf-subset default \
    ...

Fine-Tuning

To fine-tune an existing model on new data:

kiri-ocr train \
    --config config.json \
    --from-model models/model.kiri

This loads the weights from models/model.kiri before training starts, which is useful for domain adaptation or for adding new languages.

The trained model is saved to models/model.kiri (or to the directory given by output_dir).

☕ Support

If you find this project useful, consider supporting the author.

⚖️ License

Apache License 2.0.
