
Project description

WhisperX-NeMo Pipeline

A production-ready transcription and diarization pipeline with parallel processing.

Features

  • Parallel Processing: Runs Whisper transcription and NeMo diarization simultaneously
  • Multiple Backends: Supports both faster-whisper and WhisperX
  • Speaker Diarization: Uses NeMo MSDD models for accurate speaker identification
  • Audio Source Separation: Optional vocal extraction using Demucs
  • Punctuation Restoration: Automatic punctuation using deep learning models
  • Memory Efficient: Proper GPU memory management and cleanup
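
Conceptually, the parallel-processing feature pairs the two stages roughly like this (an illustrative sketch of the pattern, not the package's internals; `run_whisper` and `run_nemo` are placeholders for the real transcription and diarization calls):

```python
from concurrent.futures import ThreadPoolExecutor

def run_whisper(audio_path: str) -> list:
    # Placeholder for the Whisper transcription stage.
    return [{"start": 0.0, "end": 1.2, "text": "hello"}]

def run_nemo(audio_path: str) -> list:
    # Placeholder for the NeMo MSDD diarization stage.
    return [{"start": 0.0, "end": 1.2, "speaker": "speaker_0"}]

def process_parallel(audio_path: str):
    # Both stages read the same audio independently, so they can run
    # concurrently and have their results joined afterwards.
    with ThreadPoolExecutor(max_workers=2) as pool:
        transcript_future = pool.submit(run_whisper, audio_path)
        diarization_future = pool.submit(run_nemo, audio_path)
        return transcript_future.result(), diarization_future.result()
```

After both futures resolve, word timestamps from transcription can be matched against speaker segments from diarization to produce speaker-labeled output.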

Installation

pip install whisperx-nemo-pipeline

With constraints (recommended for production):

pip install whisperx-nemo-pipeline -c constraints.txt

Quick Start

from whisperx_nemo_pipeline import create_transcription_pipeline

# Create pipeline
pipeline = create_transcription_pipeline(
    audio_path="path/to/your/audio.wav",
    model_name="large-v2",
    device="cuda",  # or "cpu"
    stemming=True,  # Enable source separation
    backend="faster_whisper"  # or "whisperx"
)

# Process audio
transcript_path, srt_path, timing_info = pipeline.process()

print(f"Transcript saved to: {transcript_path}")
print(f"Subtitles saved to: {srt_path}")
print(f"Processing took: {timing_info['total_time']:.2f}s")

Advanced Usage

from whisperx_nemo_pipeline import TranscriptionPipeline, TranscriptionConfig

# Custom configuration
config = TranscriptionConfig(
    audio_path="path/to/audio.wav",
    model_name="large-v2",
    device="cuda",
    batch_size=8,
    language="en",  # or None for auto-detection
    stemming=True,
    suppress_numerals=False,
    backend="faster_whisper"
)

# Create pipeline with custom config
pipeline = TranscriptionPipeline(config)

# Process
transcript_path, srt_path, timing_info = pipeline.process()

Configuration Options

  • audio_path: Path to input audio file
  • model_name: Whisper model size ("tiny", "base", "small", "medium", "large-v2")
  • device: Computing device ("cuda" or "cpu")
  • batch_size: Batch size for inference (default: 4)
  • language: Language code or None for auto-detection
  • stemming: Enable audio source separation (default: True)
  • suppress_numerals: Suppress numerical tokens (default: False)
  • backend: "faster_whisper" or "whisperx"
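
For example, the device and batch size can be chosen automatically before building the config (an illustrative heuristic, not part of the package's API; a `torch.cuda.is_available()` check is more reliable when PyTorch is importable):

```python
import shutil

def pick_device() -> str:
    # Heuristic: treat the presence of the NVIDIA driver CLI as a sign
    # that CUDA is usable; fall back to CPU otherwise.
    return "cuda" if shutil.which("nvidia-smi") else "cpu"

def pick_batch_size(device: str) -> int:
    # Smaller batches keep CPU inference from exhausting memory.
    return 8 if device == "cuda" else 2
```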

Requirements

  • Python 3.8+
  • CUDA-capable GPU (recommended)
  • See requirements.txt for full dependency list

License

MIT License

Download files

Download the file for your platform.

Source Distribution

whisperx_nemo_pipeline-1.0.4.tar.gz (104.9 kB)

File details

Details for the file whisperx_nemo_pipeline-1.0.4.tar.gz.

File metadata

  • Download URL: whisperx_nemo_pipeline-1.0.4.tar.gz
  • Size: 104.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes

Hashes for whisperx_nemo_pipeline-1.0.4.tar.gz:

  • SHA256: 23b88f2ebe24fd8d343c46451e9325e7e85960e687d0e8691ccfd81c5c2ef354
  • MD5: 3acc9485b755e48b377d03ef25cfd990
  • BLAKE2b-256: 0263d73700323c1fb8383ef21ac655a9f71e9a11868980322e61275a2c04a66d
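
To verify a downloaded sdist against the SHA256 digest above, a minimal stdlib sketch (pip can also enforce hashes automatically via `--require-hashes` with a pinned requirements file):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file in chunks so large archives don't load into memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "23b88f2ebe24fd8d343c46451e9325e7e85960e687d0e8691ccfd81c5c2ef354"
# Example check against a local download:
# assert sha256_of("whisperx_nemo_pipeline-1.0.4.tar.gz") == EXPECTED
```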

