
TexTeller is a tool for converting rendered formula images into their original LaTeX code

Project description


TexTeller

https://github.com/OleehyO/TexTeller/assets/56267907/532d1471-a72e-4960-9677-ec6c19db289f

TexTeller is an end-to-end formula recognition model, capable of converting images into corresponding LaTeX formulas.

TexTeller was trained on 80M image-formula pairs (the previous dataset can be obtained here). Compared to LaTeX-OCR, which used a 100K dataset, TexTeller generalizes better and achieves higher accuracy, covering most use cases.

[!NOTE] If you would like to provide feedback or suggestions for this project, feel free to start a discussion in the Discussions section.



🔄 Change Log

  • 📮[2024-06-06] TexTeller3.0 released! The training data has been increased to 80M (10x more than TexTeller2.0, with improved data diversity). TexTeller3.0's new features:

    • Supports scanned images, handwritten formulas, and mixed English/Chinese formulas.

    • OCR in both Chinese and English for printed images.

  • 📮[2024-05-02] Support paragraph recognition.

  • 📮[2024-04-12] Formula detection model released!

  • 📮[2024-03-25] TexTeller2.0 released! The training data for TexTeller2.0 has been increased to 7.5M (15x more than TexTeller1.0, with improved data quality). TexTeller2.0 demonstrated superior performance on the test set, especially in recognizing rare symbols, complex multi-line formulas, and matrices.

    Here are more test images and a side-by-side comparison of various recognition models.

🚀 Getting Started

  1. Install the project's dependencies:

    pip install texteller
    
  2. If you are using the CUDA backend, you may need to install onnxruntime-gpu:

    pip install texteller[onnxruntime-gpu]
    
  3. Run the following command to start inference:

    texteller inference "/path/to/image.{jpg,png}"
    

    See texteller inference --help for more details.

🌐 Web Demo

Run the following command:

texteller web

Enter http://localhost:8501 in a browser to view the web demo.

[!NOTE] Paragraph recognition cannot restore the structure of a document; it can only recognize its content.

🖥️ Server

We use Ray Serve to provide an API server for TexTeller. To start the server, run the following command:

texteller launch
Parameter Description

  • -ckpt: The path to the weights file; defaults to TexTeller's pretrained weights.
  • -tknz: The path to the tokenizer; defaults to TexTeller's tokenizer.
  • -p: The server's service port; defaults to 8000.
  • --num-replicas: The number of service replicas to run on the server; defaults to 1. You can use more replicas to achieve greater throughput.
  • --ncpu-per-replica: The number of CPU cores used per service replica; defaults to 1.
  • --ngpu-per-replica: The number of GPUs used per service replica; defaults to 1. You can set this to a value between 0 and 1 to run multiple replicas on one GPU, improving GPU utilization. (Note: if --num-replicas is 2 and --ngpu-per-replica is 0.7, then 2 GPUs must be available.)
  • --num-beams: The number of beams for beam search; defaults to 1.
  • --use-onnx: Perform inference with ONNX Runtime; disabled by default.
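
For example, a launch that combines several of the flags above (the values here are illustrative; adjust them to your hardware):

```shell
# Two replicas, each reserving 0.7 of a GPU, so two GPUs must be available;
# beam search with 3 beams trades some speed for accuracy.
texteller launch -p 8000 --num-replicas 2 --ngpu-per-replica 0.7 --num-beams 3
```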

To send requests to the server:

# client_demo.py

import requests

server_url = "http://127.0.0.1:8000/predict"

img_path = "/path/to/your/image"
with open(img_path, "rb") as img:
    # The server expects the image as a multipart form field named "img"
    files = {"img": img}
    response = requests.post(server_url, files=files)

print(response.text)

🐍 Python API

We provide several easy-to-use Python APIs for formula OCR scenarios. Please refer to our documentation to learn about the corresponding API interfaces and usage.

🔍 Formula Detection

TexTeller's formula detection model is trained on 3,415 images of Chinese materials and 8,272 images from the IBEM dataset.

We provide a formula detection interface in the Python API. Please refer to our API documentation for more details.

🏋️‍♂️ Training

Please set up your environment before training:

  1. Install the dependencies for training:

    pip install texteller[train]
    
  2. Clone the repository:

    git clone https://github.com/OleehyO/TexTeller.git
    

Dataset

We provide an example dataset in the examples/train_texteller/dataset/train directory; you can arrange your own training data following the format of the example dataset.

Training the Model

In the examples/train_texteller/ directory, run the following command:

accelerate launch train.py

Training arguments can be adjusted in train_config.yaml.

📅 Plans

  • Train the model with a larger dataset
  • Recognition of scanned images
  • Support for English and Chinese scenarios
  • Handwritten formulas support
  • PDF document recognition
  • Inference acceleration

⭐️ Stargazers over time


👥 Contributors



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

texteller-1.0.1.tar.gz (21.3 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

texteller-1.0.1-py3-none-any.whl (82.9 kB)

Uploaded Python 3

File details

Details for the file texteller-1.0.1.tar.gz.

File metadata

  • Download URL: texteller-1.0.1.tar.gz
  • Upload date:
  • Size: 21.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.6.14

File hashes

Hashes for texteller-1.0.1.tar.gz

  • SHA256: f7744aece26069c5474f845ff6e9a2eac5b2517138672a80400bc00b3df94ebd
  • MD5: 8c56dca5efb08cc6ac3f650d0694d2f8
  • BLAKE2b-256: 9c82cd296833ae27aaddf61c8f7f177d9f80f5d79b4866f05491dd0a3d2fa357

See more details on using hashes here.

File details

Details for the file texteller-1.0.1-py3-none-any.whl.

File metadata

  • Download URL: texteller-1.0.1-py3-none-any.whl
  • Upload date:
  • Size: 82.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.6.14

File hashes

Hashes for texteller-1.0.1-py3-none-any.whl

  • SHA256: 7743c41a43beefe9af72a380f92e289cb7bddd66cb51a26f11e95f1fdda2a503
  • MD5: ffd18391a9883ea42474b5011129d12c
  • BLAKE2b-256: 6aa10c8467e3cb0ec51ae4794276b7f75e963f402cf5fd948826215eb81edb80

See more details on using hashes here.
