Fast PaddleOCR MCP server - Extract text from images using PaddleOCR with optimized performance

PaddleOCR-MCP

A PaddleOCR MCP (Model Context Protocol) server and CLI tool that extracts text from images and writes the results as markdown. Optimized for fast inference with automatic GPU detection.

Installation

Using uvx (Recommended - No Installation Needed)

Run directly using uvx:

# Run MCP server
uvx --from fast-paddleocr-mcp paddleocr-mcp

# Run CLI tool
uvx --from fast-paddleocr-mcp paddleocr-md <image_path> [-o output.md]

Or Install from PyPI

pip install fast-paddleocr-mcp
paddleocr-mcp  # MCP server
paddleocr-md <image_path>  # CLI tool

MCP Server Configuration

MCP Tool: ocr_image

The server provides a single tool called ocr_image that:

  • Input: image_path (string) - Path to the input image file
  • Output: Returns the path to the generated markdown file containing OCR results

Integration with MCP Clients

To use this server with an MCP client (like Cursor, Claude Desktop, etc.), configure it in your MCP settings:

Using uvx from PyPI (recommended):

{
  "mcpServers": {
    "paddleocr": {
      "command": "uvx",
      "args": ["--from", "fast-paddleocr-mcp", "paddleocr-mcp"]
    }
  }
}

MCP Request/Response Example

Request:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ocr_image",
    "arguments": {
      "image_path": "test_image.png"
    }
  }
}

Response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "test_image.png.md"
      }
    ]
  }
}
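On the wire this is plain JSON-RPC 2.0, so constructing a request and reading the response is straightforward. The following sketch builds the `tools/call` request shown above and pulls the markdown path out of the response; the helper names are ours for illustration, not part of the package:

```python
import json

def build_ocr_request(image_path, request_id=1):
    """Serialize a JSON-RPC 2.0 tools/call request for the ocr_image tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "ocr_image",
            "arguments": {"image_path": image_path},
        },
    })

def extract_output_path(response):
    """Return the markdown file path from the tool's text content."""
    return response["result"]["content"][0]["text"]

# Round-trip the example request and response from above.
req = json.loads(build_ocr_request("test_image.png"))
resp = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "test_image.png.md"}]},
}
```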

Usage

Basic Usage

By default the CLI runs with the speed-oriented defaults described under Default Optimization Settings below: fast mode (preprocessing and textline orientation classification disabled), PP-OCRv4 mobile models, a 640px image size limit, and automatic GPU detection with CPU fallback.

# Output will be saved as <image_name>.png.md
# Uses: fast mode + PP-OCRv4 + 640px + auto GPU detection
uvx --from fast-paddleocr-mcp paddleocr-md image.png

# Specify custom output path
uvx --from fast-paddleocr-mcp paddleocr-md image.png -o result.md

# Force CPU mode
uvx --from fast-paddleocr-mcp paddleocr-md image.png --cpu

# Disable fast mode for better accuracy on rotated text
uvx --from fast-paddleocr-mcp paddleocr-md image.png --no-fast

# Use PP-OCRv5 for better accuracy (slower)
uvx --from fast-paddleocr-mcp paddleocr-md image.png --ocr-version PP-OCRv5

Default Optimization Settings

The tool is optimized for speed by default with these settings:

  • Fast mode enabled: Disables textline orientation classification (skips one model)
  • PP-OCRv4: Uses faster mobile models (PP-OCRv4_mobile_det, PP-OCRv4_mobile_rec)
  • 640px image size limit: Faster processing (vs default 960px)
  • Auto GPU detection: Automatically uses GPU if available, falls back to CPU
  • Document preprocessing disabled: Skips unnecessary preprocessing steps
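The GPU auto-detection step can be sketched roughly as follows. This is a simplified illustration, not the package's actual code; it assumes the check goes through PaddlePaddle's `paddle.device.is_compiled_with_cuda()`:

```python
def pick_device():
    """Return "gpu" when a CUDA-enabled PaddlePaddle build is importable,
    otherwise fall back to "cpu"."""
    try:
        import paddle  # CPU-only or missing builds fall through to "cpu"
        if paddle.device.is_compiled_with_cuda():
            return "gpu"
    except ImportError:
        pass
    return "cpu"
```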

Customization Options

  1. **--no-fast**: Disable fast mode for better accuracy
  • Enables textline orientation classification
  • Improves accuracy on rotated text, at the cost of speed
  2. **--cpu**: Force CPU mode
  • Overrides auto GPU detection and runs explicitly on the CPU
  3. **--gpu**: Force GPU mode
  • Fails if no GPU is available
  • Use when you want to guarantee GPU execution
  4. **--ocr-version PP-OCRv5**: Use a higher-accuracy model version
  • PP-OCRv5 is more accurate but slower than the default PP-OCRv4
  • Uses server models
  5. **--max-size <pixels>**: Adjust the image processing size limit
  • Default: 640px
  • Larger values (e.g., 960, 1280) improve accuracy but are slower
  • Smaller values (e.g., 480) are faster but may reduce accuracy
  6. **--hpi**: High-Performance Inference
  • Automatically selects the best inference backend (Paddle Inference, OpenVINO, ONNX Runtime, TensorRT)
  • Requires HPI dependencies: paddleocr install_hpi_deps cpu/gpu
  • Best performance, but requires additional setup
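To make the relationship between the flags concrete, here is a hypothetical helper that maps them onto PaddleOCR-style constructor keyword arguments. The kwarg names (`use_textline_orientation`, `text_det_limit_side_len`, etc.) follow the PaddleOCR 3.x Python API as we understand it and are assumptions, not verified against this package's source:

```python
def build_ocr_kwargs(fast=True, ocr_version="PP-OCRv4", max_size=640, device="cpu"):
    """Translate the CLI flags into keyword arguments for a PaddleOCR-style
    constructor. Kwarg names follow PaddleOCR 3.x and are illustrative only."""
    return {
        "ocr_version": ocr_version,             # PP-OCRv4 (fast) or PP-OCRv5 (accurate)
        "device": device,                       # "gpu" or "cpu" (--gpu / --cpu)
        "text_det_limit_side_len": max_size,    # --max-size, default 640
        "use_textline_orientation": not fast,   # --no-fast re-enables the orientation model
        "use_doc_orientation_classify": False,  # document preprocessing stays disabled
        "use_doc_unwarping": False,
    }
```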

Examples

# Basic usage (uses all optimizations by default: fast + PP-OCRv4 + 640px + auto GPU)
uvx --from fast-paddleocr-mcp paddleocr-md photo.jpg

# Process with custom output
uvx --from fast-paddleocr-mcp paddleocr-md document.png -o extracted_text.md

# Better accuracy (slower) - disable fast mode and use PP-OCRv5
uvx --from fast-paddleocr-mcp paddleocr-md image.png --no-fast --ocr-version PP-OCRv5 --max-size 960

# Force CPU mode
uvx --from fast-paddleocr-mcp paddleocr-md image.png --cpu

# Use High-Performance Inference (requires HPI dependencies)
uvx --from fast-paddleocr-mcp paddleocr-md image.png --hpi

Output Format

The tool generates a markdown file containing:

  • Source image path
  • List of detected text (one per line)

Example output (test_image.png.md):

# OCR Result

**Source Image:** `test_image.png`

---

- HelloPaddleOcR
- 10000C
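If you post-process the result programmatically, this layout is easy to parse. A minimal sketch, assuming the output always follows the template shown above (the function name is ours):

```python
def parse_ocr_markdown(text):
    """Extract the source image path and the detected text lines
    from the tool's markdown output."""
    source = None
    detected = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("**Source Image:**"):
            source = line.split("`")[1]   # path is wrapped in backticks
        elif line.startswith("- "):
            detected.append(line[2:])     # one detected string per bullet
    return source, detected

# The example output from above.
sample = """# OCR Result

**Source Image:** `test_image.png`

---

- HelloPaddleOcR
- 10000C
"""
```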

Requirements

  • Python >= 3.8
  • PaddleOCR
  • Pillow

License

MIT


Download files

Download the file for your platform.

Source Distribution

fast_paddleocr_mcp-0.1.3.tar.gz (10.9 kB)


Built Distribution


fast_paddleocr_mcp-0.1.3-py3-none-any.whl (9.2 kB)


File details

Details for the file fast_paddleocr_mcp-0.1.3.tar.gz.

File metadata

  • Download URL: fast_paddleocr_mcp-0.1.3.tar.gz
  • Upload date:
  • Size: 10.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.9

File hashes

Hashes for fast_paddleocr_mcp-0.1.3.tar.gz

  • SHA256: 1c8eab1b5475263c300b20201e49d46299d52368900ebedca6deaa7781f5420d
  • MD5: 9f0af9ea17ea7bc39ffcbf672f427c57
  • BLAKE2b-256: c47fe3bf70cd2f8fef4ee9b5f62cba85b635041d6245734b516fa7366dcbbb8f


File details

Details for the file fast_paddleocr_mcp-0.1.3-py3-none-any.whl.

File metadata

File hashes

Hashes for fast_paddleocr_mcp-0.1.3-py3-none-any.whl

  • SHA256: ccb0123401531b46ebe3efcf11ddde6d587378e2667b3d21d11c19e227bfbc98
  • MD5: 89a1fcb35ddae920f60450e2ee76ba7a
  • BLAKE2b-256: d1a2e1f1062a942b462825572ec6ccaf80664d5ce5880b426417ec3af6d33c42

