
Project description

WhisperX-NeMo Pipeline

A production-ready transcription and diarization pipeline with parallel processing.

Features

  • Parallel Processing: Runs Whisper transcription and NeMo diarization simultaneously
  • Multiple Backends: Supports both faster-whisper and WhisperX
  • Speaker Diarization: Uses NeMo MSDD models for accurate speaker identification
  • Audio Source Separation: Optional vocal extraction using Demucs
  • Punctuation Restoration: Automatic punctuation using deep learning models
  • Memory Efficient: Proper GPU memory management and cleanup
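The parallel design described above can be sketched with Python's standard `concurrent.futures`. The worker functions below (`run_transcription`, `run_diarization`) are illustrative stand-ins for the two heavy stages, not the package's actual internals:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins for the two heavy stages; in the real pipeline
# these would be Whisper transcription and NeMo MSDD diarization.
def run_transcription(audio_path):
    return {"segments": [{"start": 0.0, "end": 1.5, "text": "hello"}]}

def run_diarization(audio_path):
    return [{"start": 0.0, "end": 1.5, "speaker": "SPEAKER_00"}]

def process_in_parallel(audio_path):
    # Launch both stages at once and block until both finish,
    # instead of running them one after the other.
    with ThreadPoolExecutor(max_workers=2) as pool:
        t = pool.submit(run_transcription, audio_path)
        d = pool.submit(run_diarization, audio_path)
        return t.result(), d.result()

segments, turns = process_in_parallel("audio.wav")
```

Running the stages concurrently means total wall time approaches the slower of the two stages rather than their sum.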

Installation

pip install whisperx-nemo-pipeline

With constraints (recommended for production):

pip install whisperx-nemo-pipeline -c constraints.txt

Quick Start

from whisperx_nemo_pipeline import create_transcription_pipeline

# Create pipeline
pipeline = create_transcription_pipeline(
    audio_path="path/to/your/audio.wav",
    model_name="large-v2",
    device="cuda",  # or "cpu"
    stemming=True,  # Enable source separation
    backend="faster_whisper"  # or "whisperx"
)

# Process audio
transcript_path, srt_path, timing_info = pipeline.process()

print(f"Transcript saved to: {transcript_path}")
print(f"Subtitles saved to: {srt_path}")
print(f"Processing took: {timing_info['total_time']:.2f}s")
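The pipeline writes subtitles to an `.srt` file. As a hedged illustration of the SRT timestamp format (not the package's own code), segment times in seconds map to SRT's `HH:MM:SS,mmm` form like this:

```python
def to_srt_timestamp(seconds: float) -> str:
    # SRT timestamps use HH:MM:SS,mmm with a comma before milliseconds.
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

print(to_srt_timestamp(3661.5))  # 01:01:01,500
```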

Advanced Usage

from whisperx_nemo_pipeline import TranscriptionPipeline, TranscriptionConfig

# Custom configuration
config = TranscriptionConfig(
    audio_path="path/to/audio.wav",
    model_name="large-v2",
    device="cuda",
    batch_size=8,
    language="en",  # or None for auto-detection
    stemming=True,
    suppress_numerals=False,
    backend="faster_whisper"
)

# Create pipeline with custom config
pipeline = TranscriptionPipeline(config)

# Process
transcript_path, srt_path, timing_info = pipeline.process()

Configuration Options

  • audio_path: Path to input audio file
  • model_name: Whisper model size ("tiny", "base", "small", "medium", "large-v2")
  • device: Computing device ("cuda" or "cpu")
  • batch_size: Batch size for inference (default: 4)
  • language: Language code or None for auto-detection
  • stemming: Enable audio source separation (default: True)
  • suppress_numerals: Suppress numerical tokens (default: False)
  • backend: "faster_whisper" or "whisperx"
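The options and defaults listed above can be mirrored in a plain dataclass for local validation before constructing the pipeline. This is a sketch that follows the documented fields, not the package's actual `TranscriptionConfig` definition:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PipelineOptions:
    # Mirrors the documented configuration options and defaults.
    audio_path: str
    model_name: str = "large-v2"
    device: str = "cuda"
    batch_size: int = 4
    language: Optional[str] = None   # None enables auto-detection
    stemming: bool = True
    suppress_numerals: bool = False
    backend: str = "faster_whisper"  # or "whisperx"

opts = PipelineOptions(audio_path="audio.wav", device="cpu")
```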

Requirements

  • Python 3.8+
  • CUDA-capable GPU (recommended)
  • See requirements.txt for full dependency list

License

MIT License

Download files

Download the file for your platform.

Source Distribution

whisperx_nemo_pipeline-1.0.5.tar.gz (104.9 kB)

File details

Details for the file whisperx_nemo_pipeline-1.0.5.tar.gz.

File metadata

  • Download URL: whisperx_nemo_pipeline-1.0.5.tar.gz
  • Size: 104.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes

Hashes for whisperx_nemo_pipeline-1.0.5.tar.gz:

  • SHA256: 260b11461cbd89f2a267fe15e0c810ccb9b9a53a6f8e08b4727405c5ed883543
  • MD5: 0929c9bc8cbc5bc8aaea21fdfc6dbadb
  • BLAKE2b-256: ea0989d2152c43e1e6fe5a1dabf23d8ccb624b49e69dc70bc933e36a62d80c80
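To verify a downloaded sdist against the SHA256 digest above, the standard-library `hashlib` module is enough. The file path below is a placeholder for wherever pip or your browser saved the archive:

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file in chunks so large archives aren't read into memory at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "260b11461cbd89f2a267fe15e0c810ccb9b9a53a6f8e08b4727405c5ed883543"
# Compare against the published digest before installing:
# print(sha256_of("whisperx_nemo_pipeline-1.0.5.tar.gz") == expected)
```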

