
Media Engine

AI-powered video metadata extraction for small TV stations and content creators. Media Engine provides a "file in → JSON out" API that extracts metadata, transcripts, faces, scenes, objects, CLIP embeddings, OCR text, and camera motion from video files.

Installation

# Apple Silicon Mac
pip install media-engine[mlx]
pip install https://github.com/harperreed/mlx_clip/archive/refs/heads/main.zip

# NVIDIA GPU
pip install media-engine[cuda]

# CPU only
pip install media-engine[cpu]

# Speaker diarization (optional, run after platform install)
pip install pyannote-audio
pip install --upgrade torch torchaudio torchvision

# Start the server
meng-server

Requirements: Python 3.12+, ffmpeg

Note: Speaker diarization requires a HuggingFace token with access to the pyannote models. Set hf_token in ~/.config/polybos/config.json. pyannote-audio pins an older torch version, so the two-step install above prevents a torch downgrade.

Features

  • Metadata extraction - Duration, resolution, codec, GPS, device info (modular per-manufacturer)
  • Transcription - Whisper with speaker diarization
  • Face detection - DeepFace with embedding clustering
  • Scene detection - PySceneDetect content-aware boundaries
  • Object detection - YOLO or Qwen VLM
  • CLIP embeddings - Per-scene similarity search
  • OCR - PaddleOCR text extraction
  • Motion analysis - Camera pan/tilt/zoom detection
  • Shot type - Aerial, interview, b-roll classification
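The motion-analysis feature above boils down to estimating a dominant frame-to-frame translation: a large horizontal shift reads as a pan, a vertical one as a tilt. A minimal, self-contained sketch of that idea using numpy phase correlation (the project itself uses optical flow; this simplified stand-in and the `estimate_pan` name are illustrative only):

```python
import numpy as np

def estimate_pan(prev: np.ndarray, curr: np.ndarray) -> tuple[int, int]:
    """Estimate the global (dx, dy) translation between two grayscale
    frames via phase correlation."""
    f_prev = np.fft.fft2(prev)
    f_curr = np.fft.fft2(curr)
    cross = f_prev * np.conj(f_curr)
    cross /= np.abs(cross) + 1e-9          # normalised cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    if dy > h // 2:                        # wrap negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return -dx, -dy                        # shift that maps prev onto curr
```

A positive `dx` over several consecutive frames would be classified as a rightward pan; zoom detection needs a radial model and is not captured by pure translation.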

Quick Start

Docker (Recommended)

# Clone and run
git clone https://github.com/thetrainroom/media-engine.git
cd media-engine

# Start the server
docker compose up -d

# Test
curl http://localhost:8001/health

Mount your media folder:

MEDIA_PATH=/path/to/videos docker compose up -d

For NVIDIA GPU support (uses Dockerfile.cuda):

docker compose --profile gpu up -d

Or build manually:

docker build -f Dockerfile.cuda -t media-engine-gpu .
docker run -p 8001:8001 --gpus all -v /path/to/media:/media media-engine-gpu

Apple Silicon (Recommended: Native)

Docker on macOS runs in a Linux VM without Metal/MPS access. For GPU acceleration on Apple Silicon, run natively:

pip install media-engine[mlx]
pip install https://github.com/harperreed/mlx_clip/archive/refs/heads/main.zip
meng-server

A Dockerfile.mlx is provided for consistency, but it will fall back to CPU inside Docker:

docker compose --profile mlx up -d

Development Installation

# Mac Apple Silicon
make install-mlx

# NVIDIA GPU
make install-cuda

# CPU only
make install-cpu

# Speaker diarization (optional)
make install-diarization

# Run server with hot reload
uvicorn media_engine.main:app --reload --port 8001

Or without Make:

pip install -e ".[mlx]"          # or cuda, cpu
pip install https://github.com/harperreed/mlx_clip/archive/refs/heads/main.zip  # mlx only
pip install pyannote-audio       # optional: speaker diarization
pip install --upgrade torch torchaudio torchvision  # restore torch version

API Usage

Extract metadata from a video

curl -X POST http://localhost:8001/extract \
  -H "Content-Type: application/json" \
  -d '{
    "file": "/media/video.mp4",
    "enable_metadata": true,
    "enable_transcript": true,
    "enable_faces": false,
    "enable_scenes": true,
    "enable_objects": false,
    "enable_clip": false,
    "enable_ocr": false,
    "enable_motion": false
  }'
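The same request can be made from Python with only the standard library. The endpoint and field names mirror the curl call above; the base URL and the assumption that omitted `enable_*` flags may be left out are carried over from this page, not verified against the API schema:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8001"  # assumed default port, as above

payload = {
    "file": "/media/video.mp4",
    "enable_metadata": True,
    "enable_transcript": True,
    "enable_scenes": True,
}

def extract(payload: dict, base_url: str = BASE_URL) -> dict:
    # POST the JSON payload to /extract and decode the JSON response.
    req = urllib.request.Request(
        f"{base_url}/extract",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# result = extract(payload)  # requires a running meng-server
```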

Endpoints

Endpoint      Method   Description
/health       GET      Health check
/extract      POST     Extract features from video
/extractors   GET      List available extractors

Supported Devices

The metadata extractor automatically detects camera/device type:

Manufacturer   Models                                   Features
DJI            Mavic, Air, Mini, Pocket, Osmo, Action   GPS from SRT, color profiles
Sony           PXW, FX, Alpha, ZV series                XML sidecar, S-Log, GPS
Canon          Cinema EOS, EOS R                        XML sidecar
Apple          iPhone, iPad                             QuickTime metadata, GPS
Blackmagic     Pocket, URSA, BRAW                       ProApps metadata, BRAW detection
RED            DSMC2, V-RAPTOR, KOMODO                  R3D native support
ARRI           ALEXA, ALEXA Mini, AMIRA                 ARRIRAW detection
Insta360       X3, X4, ONE RS, GO 3                     360 video detection
FFmpeg         OBS, Handbrake, etc.                     Encoder detection

Adding a new manufacturer is straightforward: create a module in media_engine/extractors/metadata/ (see Contributing below).

Architecture

media_engine/
├── main.py              # FastAPI app
├── config.py            # Settings and platform detection
├── schemas.py           # Pydantic models
└── extractors/
    ├── metadata/        # Modular per-manufacturer
    │   ├── dji.py
    │   ├── sony.py
    │   ├── apple.py
    │   └── ...
    ├── transcribe.py    # Whisper (MLX/CUDA/CPU)
    ├── faces.py         # DeepFace + embeddings
    ├── scenes.py        # PySceneDetect
    ├── objects.py       # YOLO
    ├── objects_qwen.py  # Qwen VLM
    ├── clip.py          # CLIP embeddings
    ├── ocr.py           # PaddleOCR
    └── motion.py        # Optical flow analysis

Configuration

Settings are stored in ~/.config/polybos/config.json:

{
  "whisper_model": "large-v3",
  "fallback_language": "en",
  "hf_token": null,
  "face_sample_fps": 1.0,
  "object_sample_fps": 2.0,
  "ocr_languages": ["en", "no", "de", "fr", "es"]
}
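An application reading these settings can merge the file over built-in defaults. A minimal sketch: the keys and defaults are the values shown above, but the `load_config` name and merge behaviour are illustrative, not the project's actual loader:

```python
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".config" / "polybos" / "config.json"

DEFAULTS = {
    "whisper_model": "large-v3",
    "fallback_language": "en",
    "hf_token": None,
    "face_sample_fps": 1.0,
    "object_sample_fps": 2.0,
}

def load_config(path=CONFIG_PATH) -> dict:
    # Settings from disk override the defaults; a missing file
    # simply yields the defaults.
    cfg = dict(DEFAULTS)
    p = Path(path)
    if p.exists():
        cfg.update(json.loads(p.read_text()))
    return cfg
```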

Set hf_token to enable speaker diarization (you must first accept the model license on the pyannote Hugging Face pages).

Development

# Install dev dependencies
pip install -e ".[dev]"

# Lint
ruff check media_engine/

# Type check
pyright media_engine/

# Test
export TEST_VIDEO_PATH=/path/to/test.mp4
pytest tests/

Contributing

Contributions welcome! To add support for a new camera manufacturer:

  1. Create media_engine/extractors/metadata/yourmanufacturer.py
  2. Implement detect() and extract() methods
  3. Register with register_extractor("name", YourExtractor())
  4. Import in metadata/__init__.py

See existing modules for examples.
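The steps above can be sketched as a hypothetical module (say, media_engine/extractors/metadata/acme.py, for an imaginary "Acme" camera line). The detect()/extract() method names and the register_extractor() call come from steps 2-3; their exact signatures and the probe dict are assumptions:

```python
class AcmeExtractor:
    """Hypothetical metadata extractor for an 'Acme' camera line."""

    def detect(self, probe: dict) -> bool:
        # Claim the file when the container metadata names the manufacturer.
        return "acme" in probe.get("make", "").lower()

    def extract(self, probe: dict) -> dict:
        # Return whatever manufacturer-specific fields we can recover.
        return {"manufacturer": "Acme", "model": probe.get("model")}

# Step 3, in the real module (register_extractor comes from the package):
# register_extractor("acme", AcmeExtractor())
```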

License

MIT
