vllmocr

vllmocr is a command-line tool that performs Optical Character Recognition (OCR) on images and PDFs using Large Language Models (LLMs). It supports multiple LLM providers, including OpenAI, Anthropic, Google, and local models via Ollama.

Features

  • Image and PDF OCR: Extracts text from both images (PNG, JPG, JPEG) and PDF files.
  • Multiple LLM Providers: Supports a variety of LLMs:
    • OpenAI: GPT-4o
    • Anthropic: Claude 3 Haiku, Claude 3 Sonnet
    • Google: Gemini 1.5 Pro
    • Ollama (local models): Llama3, MiniCPM, and other models served by Ollama.
  • Configurable: Settings, including the LLM provider and model, can be adjusted via a configuration file or environment variables.
  • Image Preprocessing: Includes optional image rotation for improved OCR accuracy.

Installation

It is recommended to install vllmocr using uv:

uv pip install vllmocr

If you don't have uv installed, you can install it with:

pipx install uv

You may need to restart your shell session for uv to be available.

Alternatively, you can use pip:

pip install vllmocr

Usage

The vllmocr command-line tool has two main subcommands: image and pdf.

1. Process a Single Image:

vllmocr image <image_path> [options]
  • <image_path>: The path to the image file (PNG, JPG, JPEG).

Options:

  • --provider: The LLM provider to use (openai, anthropic, google, ollama). Defaults to openai.
  • --model: The specific model to use (e.g., gpt-4o, haiku, gemini-1.5-pro-002, llama3). Defaults to the provider's default model.
  • --config: Path to a TOML configuration file.
  • --help: Show the help message and exit.

Example:

vllmocr image my_image.jpg --provider anthropic --model haiku

2. Process a PDF:

vllmocr pdf <pdf_path> [options]
  • <pdf_path>: The path to the PDF file.

Options: (Same as image subcommand)

Example:

vllmocr pdf my_document.pdf --provider openai --model gpt-4o
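
If you have many files, the single-file commands above can be driven from a shell loop. This is a sketch, not a documented vllmocr feature — the tool itself has no batch mode in this documentation, and the `scans/` directory name is just an example:

```shell
# Run OCR on every JPEG in scans/ and write each result alongside its
# source file. Assumes vllmocr is installed and an Anthropic API key
# is configured.
for img in scans/*.jpg; do
  [ -e "$img" ] || continue                 # skip if the glob matched nothing
  vllmocr image "$img" --provider anthropic --model haiku > "${img%.jpg}.txt"
done
```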

Configuration

vllmocr can be configured using a TOML file or environment variables. The configuration file is searched for in the following locations (in order of precedence):

  1. A path specified with the --config command-line option.
  2. ./config.toml (current working directory)
  3. ~/.config/vllmocr/config.toml (user's home directory)
  4. /etc/vllmocr/config.toml (system-wide)

config.toml (Example):

[llm]
provider = "anthropic"  # Default provider
model = "haiku"        # Default model for the provider

[image_processing]
rotation = 0           # Image rotation in degrees (optional)

[api_keys]
openai = "YOUR_OPENAI_API_KEY"
anthropic = "YOUR_ANTHROPIC_API_KEY"
google = "YOUR_GOOGLE_API_KEY"
# Ollama doesn't require an API key

Environment Variables:

You can also set API keys using environment variables:

  • VLLM_OCR_OPENAI_API_KEY
  • VLLM_OCR_ANTHROPIC_API_KEY
  • VLLM_OCR_GOOGLE_API_KEY

Environment variables override settings in the configuration file. They are the recommended way to supply API keys, since they keep secrets out of files that might be committed or shared.
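
For example, keys can be exported in a shell session or profile before running vllmocr (the values below are placeholders, not real keys):

```shell
# Placeholder values -- replace with real keys. Environment variables
# take precedence over the [api_keys] section of config.toml.
export VLLM_OCR_OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
export VLLM_OCR_ANTHROPIC_API_KEY="YOUR_ANTHROPIC_API_KEY"
```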

Development

To set up a development environment:

  1. Clone the repository:

    git clone https://github.com/<your-username>/vllmocr.git
    cd vllmocr
    
  2. Create and activate a virtual environment (using uv):

    uv venv
    source .venv/bin/activate
    uv pip install -e ".[dev]"
    

    This installs the package in editable mode (-e) along with development dependencies (like pytest and pytest-mock).

  3. Run tests:

    pytest

    pytest and pytest-mock were installed with the dev dependencies in step 2, so no separate install step is needed.
    

License

This project is licensed under the MIT License (see pyproject.toml for details).
