A multiprotocol (REST and MCP) server for automatic license plate recognition
Omni-LPR is a self-hostable server that provides automatic license plate recognition (ALPR) capabilities over REST and Model Context Protocol (MCP) APIs. It can be used either as a standalone ALPR microservice or as an ALPR toolbox for AI agents.
Why Omni-LPR?
Instead of integrating complex machine learning (ML) libraries directly into your application, Omni-LPR provides a ready-to-deploy server that offers:
- Decoupling: Run your recognition model as a separate service. Your main application (in any language) doesn't need Python or ML dependencies.
- Multiple Interfaces: Consume the service via a standard REST API, or the MCP for AI agent integration.
- Ready-to-Deploy: Easy to deploy with Docker and Gunicorn, with support for multiple hardware backends out of the box.
- Scalability: Scale your recognition service independently of your main application.
Features
- Multiple API Interfaces: REST and MCP support.
- High-Performance Recognition: Fast and accurate license plate detection and recognition using state-of-the-art computer vision models.
- Hardware Acceleration: Support for CPU (ONNX), Intel CPU/VPU (OpenVINO), and NVIDIA GPU (CUDA).
- Easy Deployment: Installable as a Python library or runnable as a container via pre-built Docker images.
- Asynchronous Core: Built on Starlette for high-performance, non-blocking I/O.
Getting Started
You can run Omni-LPR either by installing it as a Python library or by using a pre-built Docker image.
Method 1: Install as a Python Library
You can install Omni-LPR via pip or any other Python package manager.
pip install omni-lpr
By default, the server will use the CPU-enabled ONNX models for both detection and OCR. To use hardware-accelerated models, you need to install the extra dependencies:
- OpenVINO (Intel CPUs):
pip install omni-lpr[openvino]
- CUDA (NVIDIA GPUs):
pip install omni-lpr[cuda]
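If you are unsure which optional backend packages ended up in your environment, a quick check like the one below can help. This is only an environment probe, not Omni-LPR's own backend-selection logic, and the importable package names (`openvino` for the OpenVINO extra, `onnxruntime` for the default CPU backend) are assumptions based on the extras above.

```python
from importlib.util import find_spec


def available_backend() -> str:
    """Report which inference backend packages are importable.

    Note: onnxruntime-gpu (the CUDA extra) also imports as `onnxruntime`,
    so this probe cannot distinguish the CPU and CUDA ONNX builds.
    """
    if find_spec("openvino") is not None:
        return "openvino"
    if find_spec("onnxruntime") is not None:
        return "cpu"  # default ONNX backend
    return "none installed"


print(available_backend())
```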
Starting the Server
To start the server with the REST and MCP APIs, set the TRANSPORT environment variable to sse and run the
omni-lpr command:
TRANSPORT=sse omni-lpr --host 0.0.0.0 --port 8000
Method 2: Use a Docker Image
Pre-built Docker images are available from the GitHub Container Registry (ghcr.io).
You can build the images locally or pull them from the registry.
Building the Docker Images
You can build the Docker images for different backends using the provided Makefile:
- CPU (default):
make docker-build-cpu
- CUDA:
make docker-build-cuda
- OpenVINO:
make docker-build-openvino
Running the Container
Once you have built or pulled the images, you can run them using the following commands:
- CPU Image (ONNX):
make docker-run-cpu # or manually: docker run --rm -it -p 8000:8000 ghcr.io/habedi/omni-lpr:cpu
- CPU Image (OpenVINO):
make docker-run-openvino # or manually: docker run --rm -it -p 8000:8000 ghcr.io/habedi/omni-lpr:openvino
- GPU Image (CUDA):
make docker-run-cuda # or manually: docker run --rm -it --gpus all -p 8000:8000 ghcr.io/habedi/omni-lpr:cuda
API Documentation
Omni-LPR provides two distinct APIs to access its functionality.
[!NOTE] This project does not provide interactive API documentation (e.g., Swagger UI or ReDoc). This README and the GET /api/tools endpoint are the primary sources of API documentation.
1. REST API
The REST API provides simple endpoints for recognition tasks. All tool endpoints are available under the /api/ prefix.
Discovering Tools
To get a list of available tools and their input schemas, send a GET request to the /api/tools endpoint.
curl http://localhost:8000/api/tools
This will return a JSON array of tool objects, each with a name, description, and input_schema.
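A response in that shape can be consumed from Python as sketched below. The sample payload here is illustrative (the real tool list and descriptions come from your running server); only the field names follow the description above.

```python
import json

# Illustrative /api/tools response; a live server returns the full list.
sample_response = json.dumps([
    {"name": "recognize_plate", "description": "...", "input_schema": {"type": "object"}},
    {"name": "list_models", "description": "...", "input_schema": {"type": "object"}},
])


def tool_names(response_text: str) -> list[str]:
    """Extract tool names from the JSON array returned by GET /api/tools."""
    return [tool["name"] for tool in json.loads(response_text)]


print(tool_names(sample_response))  # ['recognize_plate', 'list_models']
```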
Calling a Tool
To call a specific tool, send a POST request to the corresponding endpoint (e.g., /api/detect_and_recognize_plate).
The request body must be a JSON object matching the tool's input_schema.
Example
To recognize a plate, POST a JSON payload with a Base64-encoded image to the /api/detect_and_recognize_plate
endpoint.
# Encode your image to Base64
# On macOS: base64 -i /path/to/your/image.jpg | pbcopy
# On Linux: base64 /path/to/your/image.jpg | xsel -ib
curl -X POST \
-H "Content-Type: application/json" \
-d '{"image_base64": "PASTE_YOUR_BASE64_STRING_HERE"}' \
http://localhost:8000/api/detect_and_recognize_plate
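The same call can be made from Python with only the standard library. This is a minimal sketch: it assumes the server is running locally with TRANSPORT=sse, and the response structure depends on the tool's schema reported by /api/tools.

```python
import base64
import json
import urllib.request

ENDPOINT = "http://localhost:8000/api/detect_and_recognize_plate"


def build_payload(image_path: str) -> bytes:
    """Read an image file and wrap it in the JSON body the endpoint expects."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({"image_base64": encoded}).encode("utf-8")


def detect_and_recognize(image_path: str) -> dict:
    """POST the image to a locally running server and return the parsed reply."""
    request = urllib.request.Request(
        ENDPOINT,
        data=build_payload(image_path),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```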
2. MCP API (for AI Agents)
The server also exposes its capabilities as tools over the Model Context Protocol (MCP). The MCP endpoint is available at http://127.0.0.1:8000/mcp/sse.
Implemented Tools
Currently, the following tools are implemented:
- recognize_plate: Recognizes text from a pre-cropped image of a license plate.
- recognize_plate_from_path: Recognizes text from a pre-cropped license plate image located at a given URL or local file path.
- detect_and_recognize_plate: Detects and recognizes all license plates in a full image.
- detect_and_recognize_plate_from_path: Detects and recognizes license plates from an image at a given URL or local file path.
- list_models: Lists the available detector and OCR models.
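An AI agent framework will usually discover these tools for you, but they can also be listed by hand. The sketch below assumes the official `mcp` Python SDK (installable as the `mcp` package); the SDK imports are done lazily inside the function so the file parses even without it installed.

```python
import asyncio

SERVER_URL = "http://127.0.0.1:8000/mcp/sse"  # SSE endpoint from the section above


async def list_mcp_tools(url: str = SERVER_URL) -> list[str]:
    """Connect to the MCP SSE endpoint and return the advertised tool names."""
    from mcp import ClientSession
    from mcp.client.sse import sse_client

    async with sse_client(url) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            result = await session.list_tools()
            return [tool.name for tool in result.tools]


if __name__ == "__main__":
    # Requires a server started with TRANSPORT=sse.
    print(asyncio.run(list_mcp_tools()))
```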
Configuration
The server is configured using environment variables that can be loaded from a .env file if present.
| Argument | Env Var | Description |
|---|---|---|
| --port | PORT | Server port (default: 8000) |
| --host | HOST | Server host (default: 127.0.0.1) |
| --transport | TRANSPORT | Transport protocol (default: stdio). Valid values are stdio and sse |
| --log-level | LOG_LEVEL | Logging level (default: INFO). Valid values are DEBUG, INFO, WARN, and ERROR |
| --default-ocr-model | DEFAULT_OCR_MODEL | Default OCR model to use (default: cct-xs-v1-global-model). Valid values are cct-xs-v1-global-model and cct-s-v1-global-model |
[!NOTE] The REST API is only available when TRANSPORT is set to sse. The MCP API is available for both stdio (in-process) and sse (HTTP) transports.
Contributing
See CONTRIBUTING.md for details on how to make a contribution.
License
Omni-LPR is licensed under the MIT License (see LICENSE).
Acknowledgements
- This project uses the awesome fast-plate-ocr and fast-alpr Python libraries.
- The project logo is from SVG Repo.
File details
Details for the file omni_lpr-0.1.0.tar.gz.
File metadata
- Download URL: omni_lpr-0.1.0.tar.gz
- Upload date:
- Size: 14.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.1.4 CPython/3.10.18 Linux/6.11.0-1018-azure
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5a2d6746b3ebed2a2f41807799e6a507035e41f5fcf7123365e12128353d5592 |
| MD5 | b29a315252cf08cc6c87d9ac773be8f9 |
| BLAKE2b-256 | 7aa5091d1beaca9c3cf7b1df684804726d752500a622518580c87a64c328d194 |
File details
Details for the file omni_lpr-0.1.0-py3-none-any.whl.
File metadata
- Download URL: omni_lpr-0.1.0-py3-none-any.whl
- Upload date:
- Size: 13.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.1.4 CPython/3.10.18 Linux/6.11.0-1018-azure
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ffb949bcc2106a7f53cdf1ee0a8cc7ac5e598284a0973f13159f22ce59263d6d |
| MD5 | 016d2aeb6ca617f22185004f11663087 |
| BLAKE2b-256 | 56195e58cca5e2a40669613d5ea5ed1d6d1b6a5efdf7fc30e1a7dda2a287d047 |