

Project description

Omni-LPR Logo

Omni-LPR


A multi-interface (REST and MCP) server for automatic license plate recognition


Omni-LPR is a self-hostable server that provides automatic license plate recognition (ALPR) capabilities via a REST API and the Model Context Protocol (MCP). It can be used both as a standalone ALPR microservice and as an ALPR toolbox for AI agents and large language models (LLMs).

Why Omni-LPR?

Using Omni-LPR offers the following benefits:

  • Decoupling. Your main application can be in any programming language. It doesn't need to be tangled up with Python or specific ML dependencies because the server handles all of that.

  • Multiple Interfaces. You aren't locked into one way of communicating. You can use a standard REST API from any app, or you can use MCP, which is designed for AI agent integration.

  • Ready-to-Deploy. You don't have to build it from scratch. There are pre-built Docker images that are easy to deploy and start using immediately.

  • Hardware Acceleration. The server is optimized for the hardware you have. It supports generic CPUs (ONNX), Intel CPUs (OpenVINO), and NVIDIA GPUs (CUDA).

  • Asynchronous I/O. It's built on Starlette, which means it has high-performance, non-blocking I/O. It can handle many concurrent requests without getting bogged down.

  • Scalability. Because it's a separate service, it can be scaled independently of your main application. If you suddenly need more ALPR power, you can scale Omni-LPR up without touching anything else.

[!IMPORTANT] Omni-LPR is in early development, so bugs and breaking API changes are expected. Please use the issues page to report bugs or request features.

Quickstart

You can get started with Omni-LPR in a few minutes by following the steps described below.

1. Install the Server

You can install Omni-LPR using pip:

pip install omni-lpr

2. Start the Server

Once installed, start the server with a single command:

omni-lpr

By default, the server will be listening on http://127.0.0.1:8000. You can confirm it's running by accessing the health check endpoint:

curl http://127.0.0.1:8000/api/health
# Expected output: {"status": "ok", "version": "0.2.0"}

3. Recognize a License Plate

Now you can make a request to recognize a license plate from an image. The example below uses a publicly available image URL.

curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"path": "https://www.olavsplates.com/foto_n/n_cx11111.jpg"}' \
  http://127.0.0.1:8000/api/v1/tools/detect_and_recognize_plate_from_path/invoke

You should receive a JSON response with the detected license plate information.
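For reference, the same call can be made from Python using only the standard library. This is a minimal sketch assuming the default bind address and the endpoint shown above; the response shape is whatever the server returns, so the example simply decodes it as JSON.

```python
import json
from urllib import request

SERVER = "http://127.0.0.1:8000"  # default bind address from the quickstart


def build_request(image_url: str) -> request.Request:
    """Build the POST request for the detect_and_recognize_plate_from_path tool."""
    body = json.dumps({"path": image_url}).encode()
    return request.Request(
        f"{SERVER}/api/v1/tools/detect_and_recognize_plate_from_path/invoke",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def recognize(image_url: str) -> dict:
    """Send the request and decode the JSON response (requires a running server)."""
    with request.urlopen(build_request(image_url)) as resp:
        return json.load(resp)
```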

Usage

Omni-LPR exposes its capabilities as "tools" that can be called via a REST API or over the MCP.

Core Tools

  • list_models: Lists the available license plate detector and OCR models.
  • recognize_plate: Recognizes text from a pre-cropped image of a license plate.
  • detect_and_recognize_plate: Detects and recognizes all license plates in a full image.

The server can accept an image in three ways: as a Base64-encoded string, as a local file path or URL, or as a direct file upload. For more details on the different tool variations, please see the API Documentation.
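As an illustration of the Base64 option, the sketch below encodes a local image file into a JSON payload. Note that the `image` field name here is a hypothetical placeholder; consult the API Documentation for the exact parameter names each tool variant expects.

```python
import base64
import json


def build_base64_payload(image_path: str) -> str:
    """Read a local image and wrap it in a JSON body as a Base64 string.

    NOTE: the "image" field name is a hypothetical placeholder; check the
    API Documentation for the actual parameter name.
    """
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({"image": encoded})
```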

REST API

The REST API provides a standard way to interact with the server. All tool endpoints are available under the /api/v1 prefix. Once the server is running, you can access interactive API documentation in the Swagger UI at http://127.0.0.1:8000/apidoc/swagger.

MCP Interface

The server also exposes its tools over the MCP for integration with AI agents and LLMs. The MCP endpoint is available at http://127.0.0.1:8000/mcp/sse.

You can use a tool like MCP Inspector to explore the available MCP tools.

MCP Inspector Screenshot

Integration

You can connect any client that supports MCP to the server. The following examples show how to use the server with LMStudio.

LMStudio Configuration

{
    "mcpServers": {
        "omni-lpr-local": {
            "url": "http://localhost:8000/mcp/sse"
        }
    }
}

Tool Usage Examples

The screenshot below shows using the list_models tool in LMStudio to list the available ALPR models.

LMStudio Screenshot 1

The screenshot below shows using the detect_and_recognize_plate_from_path tool in LMStudio to detect and recognize the license plate from an image available on the web.

LMStudio Screenshot 2

Documentation

Omni-LPR's documentation is available here.

Examples

Check out the examples directory for usage examples.

Feature Roadmap

  • Core ALPR Capabilities

    • License plate detection.
    • License plate recognition.
    • Optimized models for CPU, OpenVINO, and CUDA backends.
  • Interfaces and Developer Experience

    • MCP interface for AI agent integration.
    • REST API for all core functions/tools.
    • Standardized JSON error responses.
    • Interactive API documentation (Swagger UI and ReDoc).
    • Support for direct image uploads (multipart/form-data).
  • Performance

    • Asynchronous I/O for concurrent requests.
    • Prometheus metrics endpoint (/api/metrics).
    • Request batching for model inference.
  • Integrations

    • Standalone microservice architecture.
    • MCP and REST API usage examples.
    • A Python client library to simplify interaction with the REST API.
  • Deployment

    • Pre-built Docker images for each hardware backend.
    • Configuration via environment variables and CLI arguments.
    • A Helm chart for Kubernetes deployment.
  • Benchmarks

    • Performance benchmarks for different hardware and request types.

Contributing

Contributions are always welcome! Please see CONTRIBUTING.md for details on how to get started.

License

Omni-LPR is licensed under the MIT License (see LICENSE).

Acknowledgements

Download files

Download the file for your platform.

Source Distribution

omni_lpr-0.2.0.tar.gz (19.2 kB)

Uploaded Source

Built Distribution


omni_lpr-0.2.0-py3-none-any.whl (18.6 kB)

Uploaded Python 3

File details

Details for the file omni_lpr-0.2.0.tar.gz.

File metadata

  • Download URL: omni_lpr-0.2.0.tar.gz
  • Upload date:
  • Size: 19.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.4 CPython/3.10.18 Linux/6.11.0-1018-azure

File hashes

Hashes for omni_lpr-0.2.0.tar.gz:

  • SHA256: 3ef66b3ef15145196df3dcd4e050ce52da225a47a38fe47bdcfb72e0cace0203
  • MD5: d868f3ba65278c9683a7560f129a0335
  • BLAKE2b-256: 7119e183c857b8573e5d2e85d0fe606133cc08eac706985b86a39de71bee37fe


File details

Details for the file omni_lpr-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: omni_lpr-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 18.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.4 CPython/3.10.18 Linux/6.11.0-1018-azure

File hashes

Hashes for omni_lpr-0.2.0-py3-none-any.whl:

  • SHA256: 69362c849f1ab9044f67e79af6fd1e2a55f19211df64124578154ac599f0cb23
  • MD5: fa964d806667baa9ae4474333241b81d
  • BLAKE2b-256: 81661174d8befad67425b33b2589520ed3f4df0b81b69ef434298da17163ae54

