
A lightweight OCR library for Khmer and English documents


Kiri OCR 📄


Kiri OCR is a lightweight OCR library for English and Khmer documents. It provides document-level text detection, recognition, and rendering in a compact package.

🚀 Try the Live Demo

An interactive demo is available on Hugging Face Spaces.

✨ Key Features

  • Lightweight: Compact model optimized for speed and efficiency.
  • Bilingual: Native support for English, Khmer, and mixed English/Khmer text.
  • Document Processing: Automatic text line and word detection.
  • Easy to Use: Simple Python API.

📊 Dataset

The model is trained on the mrrtmob/khmer_english_ocr_image_line dataset, which contains 12 million synthetic images of Khmer and English text lines.

📈 Benchmark

Results on synthetic test images (10 popular fonts):

(Benchmark graph and table omitted; see the project repository.)

📦 Installation

Install easily via pip:

pip install kiri-ocr

Or install from source:

git clone https://github.com/mrrtmob/kiri-ocr.git
cd kiri-ocr
pip install .

💻 Usage

CLI Tool (Inference)

Run OCR on an image and save results:

kiri-ocr predict path/to/document.jpg --output results/

(Or simply kiri-ocr path/to/document.jpg)

Python API

from kiri_ocr import OCR

# Initialize (loads from Hugging Face automatically)
ocr = OCR()

# Extract text
text, results = ocr.extract_text('document.jpg')
print(text)

🎓 Training a New Model

Follow this guide to train a custom model from scratch.

Step 1: Generate Training Data

Create synthetic training images from a text file.

  1. Prepare text file: Create data/textlines.txt with your training text (one sentence per line).

  2. Generate dataset:

    kiri-ocr generate \
        --train-file data/textlines.txt \
        --output data \
        --fonts-dir fonts \
        --augment 1 \
        --random-augment
    
    • --fonts-dir: Directory containing .ttf files (Khmer/English fonts).
    • --augment: How many variations to generate per line (e.g., 2).
    • --random-augment: Apply random noise/rotation even if augment is 1.
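
To bootstrap data/textlines.txt, a minimal Python sketch (the sample sentences below are placeholders; replace them with your own corpus):

```python
from pathlib import Path

# Placeholder training sentences; substitute real Khmer/English corpus lines.
lines = [
    "Hello World",
    "This is a test",
    "ការសាកល្បង OCR",  # mixed Khmer/Latin example
]

out = Path("data")
out.mkdir(parents=True, exist_ok=True)
# One sentence per line, as expected by `kiri-ocr generate --train-file`.
(out / "textlines.txt").write_text("\n".join(lines) + "\n", encoding="utf-8")
```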

Custom Dataset Structure

If you have your own data (not generated), organize it as follows:

data/
  ├── train/
  │   ├── labels.txt       # Tab-separated: filename <tab> text
  │   └── images/          # Image files
  │       ├── img_001.png
  │       ├── img_002.jpg
  │       └── ...
  └── val/
      ├── labels.txt
      └── images/

Format of labels.txt:

img_001.png    Hello World
img_002.jpg    This is a test

Note: Images must be in an images/ subdirectory relative to the labels.txt file.
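
Before training, it can help to sanity-check this layout. A small sketch (the tab-separated format and the images/ location follow the description above; the helper name is illustrative):

```python
from pathlib import Path

def check_labels(labels_path: str) -> list[str]:
    """Return a list of problems found in a labels.txt file."""
    labels = Path(labels_path)
    images_dir = labels.parent / "images"  # images/ must sit next to labels.txt
    problems = []
    for lineno, line in enumerate(labels.read_text(encoding="utf-8").splitlines(), 1):
        if not line.strip():
            continue
        parts = line.split("\t", 1)
        if len(parts) != 2:
            problems.append(f"line {lineno}: expected 'filename<TAB>text'")
            continue
        filename, text = parts
        if not (images_dir / filename).exists():
            problems.append(f"line {lineno}: missing image {filename}")
        if not text.strip():
            problems.append(f"line {lineno}: empty label")
    return problems
```

For example, check_labels("data/train/labels.txt") returns an empty list when the layout is consistent.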

Step 2: Train the Model

You can train using CLI arguments or a configuration file.

Option A: Using Configuration File (Recommended)

  1. Generate default config:
    kiri-ocr init-config -o config.json
    
  2. Edit config.json to adjust hyperparameters (epochs, batch size, etc.).
  3. Start training:
    kiri-ocr train --config config.json
    

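If you prefer to edit the config programmatically, a generic sketch (the key names `epochs` and `batch_size` are assumptions; check the file generated by init-config for the real ones):

```python
import json
from pathlib import Path

def set_hyperparams(cfg_path: str, **overrides) -> dict:
    """Load a JSON config, apply overrides, and write it back."""
    path = Path(cfg_path)
    cfg = json.loads(path.read_text(encoding="utf-8"))
    cfg.update(overrides)  # key names must match those in the generated file
    path.write_text(json.dumps(cfg, indent=2), encoding="utf-8")
    return cfg
```

Usage: set_hyperparams("config.json", epochs=100, batch_size=32).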
Option B: Using CLI Arguments

kiri-ocr train \
    --train-labels data/train/labels.txt \
    --val-labels data/val/labels.txt \
    --epochs 100 \
    --batch-size 32 \
    --device cuda

Option C: Training with Hugging Face Dataset

You can train directly using a dataset from Hugging Face Hub. The dataset should contain image and text columns.

kiri-ocr train \
    --hf-dataset mrrtmob/km_en_image_line \
    --epochs 50 \
    --batch-size 32

Advanced HF Options:

  • --hf-train-split: Specify training split name (default: "train").
  • --hf-val-split: Specify validation split name. If not provided, it tries "validation", "val", "test", or automatically splits the training set.
  • --hf-val-percent: Percentage of training data to use for validation if no validation split is found (default: 0.1 for 10%).
  • --hf-image-col: Column name for images (default: "image").
  • --hf-text-col: Column name for text labels (default: "text").
  • --hf-subset: Dataset configuration/subset name (optional).
  • --hf-streaming: Stream the dataset instead of downloading it fully.
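
The validation-split fallback described above can be sketched as follows (an illustration of the documented behavior, not the trainer's actual code):

```python
def pick_val_split(available_splits, val_split=None, val_percent=0.1):
    """Choose a validation split per the documented fallback order."""
    if val_split is not None:          # --hf-val-split wins if given
        return val_split, None
    for candidate in ("validation", "val", "test"):
        if candidate in available_splits:
            return candidate, None
    # No validation split found: carve val_percent out of the training set.
    return None, val_percent
```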

To use a specific subset/config (if the dataset has multiple):

kiri-ocr train \
    --hf-dataset mrrtmob/km_en_image_line \
    --hf-subset default \
    ...

Fine-Tuning

To fine-tune an existing model on new data:

kiri-ocr train \
    --config config.yaml \
    --from-model models/model.kiri

This loads the weights from models/model.kiri before starting training. Useful for domain adaptation or adding languages.

The trained model is saved to models/model.kiri (or to the directory specified by output_dir).

☕ Support

If you find this project useful, consider supporting its development.

⚖️ License

Apache License 2.0.
