
Ichigo Whisper is a compact (22M-parameter), open-source speech tokenizer for the Whisper-medium model, designed to enhance performance on multilingual speech with minimal impact on its original English capabilities. Unlike models that output continuous embeddings, Ichigo Whisper compresses speech into discrete tokens, making it more compatible with large language models (LLMs) for immediate speech understanding.

Project description

🍰 Ichigo-Whisper.

About | Demo | Model Summary | Training

Homebrew ASR quantizer model

About

Ichigo-Whisper is a compact (22M-parameter), open-source speech tokenizer designed to enhance the performance of the Whisper-medium model, particularly for multilingual speech, while maintaining strong English language capabilities.

Unlike models that output continuous embeddings, Ichigo-Whisper compresses speech into discrete tokens. This approach makes it more compatible with large language models (LLMs) for immediate speech understanding and downstream tasks.

Evaluation of Ichigo Whisper's performance

Key Features

  • Only 22M parameters, enabling deployment in resource-constrained environments.
  • Specifically trained to improve performance on languages with limited data.
  • Outputs discrete tokens, facilitating integration with LLMs.
  • Trained on ~400 hours of English and ~1000 hours of Vietnamese data, demonstrating strong performance in both languages.
  • Part of a larger family of models for multilingual speech processing.

Model Summary

Architecture

Ichigo-Whisper's architecture is inspired by the WhisperVQ model from WhisperSpeech. It is a quantizer built on top of the Whisper-medium model, transforming continuous audio embeddings into discrete codebook entries. This quantization process allows for more efficient integration with LLMs, enabling direct speech understanding without the need for intermediate text representation.
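
As a rough illustration of the idea (not Ichigo-Whisper's actual code; names, shapes, and values below are made up for the sketch), a vector-quantization bottleneck replaces each continuous embedding with its nearest codebook entry, and that entry's index becomes the discrete speech token an LLM can consume:

```python
import numpy as np

def quantize(embeddings, codebook):
    """Map continuous embeddings (T, D) to their nearest codebook entries.

    codebook: (K, D). Returns discrete token ids (T,) and quantized vectors (T, D).
    """
    # Squared L2 distance from every frame to every code: shape (T, K)
    dists = ((embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    ids = dists.argmin(axis=-1)   # discrete token ids, shape (T,)
    return ids, codebook[ids]     # quantized vectors, shape (T, D)

rng = np.random.default_rng(0)
emb = rng.standard_normal((100, 64))        # 100 audio frames, 64-dim (toy)
codebook = rng.standard_normal((2561, 64))  # 2561-entry merged codebook (toy)
ids, quantized = quantize(emb, codebook)
print(ids.shape, quantized.shape)  # (100,) (100, 64)
```

The `ids` sequence is what gets handed to the LLM in place of text tokens; the quantized vectors are what the decoder side sees during training.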

Codebook Initialization

We introduce a method for initializing the codebook weights in the VQ model. Instead of random initialization, we leverage the pre-trained weights from the WhisperVQ 7-language model. We then duplicate these codebooks and introduce small random noise to each copy. After training, we merge the original WhisperVQ 7-language codebooks back into the model.

Codebook initialization of Ichigo Whisper

Codebook Expansion Workflow:

# 1. Initial State
Codebook 512:  [512 codes + 1 mask token]
[C1 C2 C3 ... C512 M]

Codebook 2048: [2048 codes + 1 mask token]
[D1 D2 D3 ... D2048 M]

# 2. Remove Mask Token from 512
Codebook 512 (without mask):
[C1 C2 C3 ... C512]  # 512 codes

Codebook 2048 (keeps mask):
[D1 D2 D3 ... D2048 M]  # 2049 codes

# 3. Create New Empty Codebook
New Size = 512 + 2049 = 2561 codes
[_ _ _ ... _ _ _]  # 2561 empty slots

# 4. Merge Process
Step 1: Copy the 2048 codes + mask first
[D1 D2 D3 ... D2048 M | _ _ _ ... _ _ _ _ ]
 |----2049 codes----| |-----512 slots-----|

Step 2: Copy the 512 codes after
[D1 D2 D3 ... D2048 M | C1 C2 C3 ... C512 ]
 |----2049 codes----| |-----512 codes-----|
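
The expansion and merge above can be sketched in a few lines of NumPy (the dimension and noise scale are illustrative, not the values Ichigo-Whisper actually uses):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64  # toy embedding dimension

# Pre-trained WhisperVQ 7-language codebook: 512 codes + 1 mask token
cb512 = rng.standard_normal((513, dim))

# Expand: duplicate the 512 codes 4x, adding small noise to each copy
codes = cb512[:512]                         # drop the mask token
cb2048 = np.concatenate(
    [codes + 0.01 * rng.standard_normal(codes.shape) for _ in range(4)]
)
mask = cb512[512:513]                       # keep a single mask token
cb2048m = np.concatenate([cb2048, mask])    # 2048 codes + mask = 2049 rows

# After training, merge the original 512 codes back in: 2049 + 512 = 2561
merged = np.concatenate([cb2048m, codes])
print(merged.shape)  # (2561, 64)
```

This matches the "merge-medium-vi-2d-2560c" naming: 2048 trained codes, one mask token, and the 512 original codes appended at the end.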

For further details on ablation studies related to codebook initialization, please refer to this GitHub issue.

Two-Phase Training Methodology

We employ a two-phase training strategy to optimize Ichigo-Whisper's performance:

  • Phase 1: We train the model using a KL divergence loss against the output of the Whisper-medium model. This phase establishes a strong foundation and aligns the quantizer with the original model's representations.
  • Phase 2: Because relying solely on Whisper-medium's outputs can limit performance, we continue training the model beyond the distillation objective in this phase.
  • Data Mixing: We mix Vietnamese and English data in a ratio of approximately 7:3 during training. This helps maintain English capabilities while significantly enhancing Vietnamese performance.
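
A minimal sketch of the Phase 1 objective, assuming a plain KL divergence between the frozen teacher's logits and the quantized path's logits (NumPy with toy shapes; the actual training code lives in the repository's scripts):

```python
import numpy as np

def log_softmax(x):
    """Numerically stable log-softmax over the last axis."""
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def kl_loss(teacher_logits, student_logits):
    """KL(teacher || student) over the vocab axis, averaged over frames."""
    t_logp = log_softmax(teacher_logits)
    s_logp = log_softmax(student_logits)
    return (np.exp(t_logp) * (t_logp - s_logp)).sum(axis=-1).mean()

rng = np.random.default_rng(0)
teacher = rng.standard_normal((50, 1000))   # frozen Whisper-medium logits (toy vocab)
student = teacher + 0.1 * rng.standard_normal((50, 1000))  # quantized-path logits
print(kl_loss(teacher, student) >= 0.0)            # True: KL is non-negative
print(np.isclose(kl_loss(teacher, teacher), 0.0))  # True: zero when paths agree
```

Minimizing this loss pulls the quantized path's predictions toward the teacher's, which is what aligns the quantizer with the original model's representations.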

How to Get Started

PyPI

  1. Install the Python package
pip install ichigo-whisper
  2. Run inference on your audio
import torch, torchaudio
from ichigo_whisper.demo.utils import load_model

# Load Ichigo Whisper
ichigo_model = load_model(
    ref="homebrewltd/ichigo-whisper:merge-medium-vi-2d-2560c-dim64.pth",
    size="merge-medium-vi-2d-2560c-dim64",
)
device = "cuda" if torch.cuda.is_available() else "cpu"
ichigo_model.ensure_whisper(device)
ichigo_model.to(device)

# Inference
wav, sr = torchaudio.load("path/to/your/audio.wav")
if sr != 16000:
    wav = torchaudio.functional.resample(wav, sr, 16000)
transcription = ichigo_model.inference(wav.to(device))
print(transcription[0].text)

Installation from source

  1. Create virtual environment

    # venv
    python -m venv ichigo-whisper
    source ichigo-whisper/bin/activate
    
    # conda
    conda create -n ichigo-whisper python=3.11
    conda activate ichigo-whisper
    
  2. Clone the repository and install the required packages

    git clone https://github.com/janhq/WhisperSpeech.git
    cd WhisperSpeech/ichigo-whisper
    pip install -r requirements.txt
    cd src/ichigo-whisper
    
  3. Log in to the Hugging Face CLI and WandB (optional, needed for training)

    huggingface-cli login
    wandb login
    

Training

Modify the config, then run the training script:

sh scripts/train_multi.sh

Testing

After training, modify the inference config, then run the test script:

sh scripts/test.sh

Inference

python demo/inference.py -i path/to/your/audio.wav 

# Example 
# python demo/inference.py -i demo/samples/test.wav

Demo

python demo/app.py

Join Us

🍰 Ichigo Whisper is an open research project. We're looking for collaborators, and will likely move towards crowdsourcing speech datasets in the future.

Acknowledgement

  • WhisperSpeech: Text-to-speech model for synthetic audio generation
  • Gradio: A user-friendly library for building the Ichigo-Whisper demo

You can try the demo directly here.

Citation

@article{IchigoWhisper-2024,
  title={Ichigo Whisper},
  author={Homebrew Research},
  year={2024},
  month={December},
  url={https://huggingface.co/homebrewltd/Ichigo-whisper-v0.1}
}

Download files

Download the file for your platform.

Source Distribution

ichigo_whisper-2.3.0.tar.gz (5.0 MB)

Uploaded Source

Built Distribution


ichigo_whisper-2.3.0-py3-none-any.whl (5.0 MB)

Uploaded Python 3

File details

Details for the file ichigo_whisper-2.3.0.tar.gz.

File metadata

  • Download URL: ichigo_whisper-2.3.0.tar.gz
  • Size: 5.0 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.10.16

File hashes

Hashes for ichigo_whisper-2.3.0.tar.gz:

  • SHA256: ebd1d10e1a57a4268c6394f1b422a8cd7934c73bd258b87515ec526043f340d2
  • MD5: c3bc648ce8384f051d9ece1718d05155
  • BLAKE2b-256: 5c91481bf7772fa1bf8a7236acd1ee02d7c31e4bd68d36d6e68c3070a25f625e


File details

Details for the file ichigo_whisper-2.3.0-py3-none-any.whl.

File metadata

  • Download URL: ichigo_whisper-2.3.0-py3-none-any.whl
  • Size: 5.0 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.10.16

File hashes

Hashes for ichigo_whisper-2.3.0-py3-none-any.whl:

  • SHA256: dc15725bd2f8179a90898d8d71afdb986760449a44334173e3515e2bf8efb5b4
  • MD5: 92200950ce365da8085e662a689c6fa4
  • BLAKE2b-256: 913863ca6eeea0383919b707f5c8ab972d78a37ed92320c7ee582a6d20f936d3

