ONNX Wrapper for ESPnet

espnet_onnx

ESPnet without PyTorch!

A utility library to easily export, quantize, and optimize ESPnet models to the ONNX format. There is no need to install PyTorch or ESPnet on your machine if you already have the exported files!

espnet_onnx demo in Colab

Demonstration notebooks are now available on Google Colab!

  • Simple ASR demo: Open In Colab
  • Simple TTS demo: Open In Colab

Install

  1. espnet_onnx can be installed with pip:
pip install espnet_onnx
  2. If you want to export a pretrained model, you additionally need to install torch>=1.11.0, espnet, espnet_model_zoo, and onnx. onnx==1.12.0 may cause some errors; if you get an error during inference or export, please consider downgrading your onnx version.

Install guide for developers

  1. Clone this repository.
git clone git@github.com:espnet/espnet_onnx.git
  2. Create a virtual environment.
cd tools
make venv export
  3. Activate the virtual environment and install torch if required.
. tools/venv/bin/activate

# Please reference official installation guide of PyTorch.
pip install torch
  4. Clone the s3prl repository and install it with pip.
cd tools
git clone https://github.com/s3prl/s3prl
cd s3prl
pip install .
  5. Install warp_transducer to develop transducer models.
cd tools
git clone --single-branch --branch espnet_v1.1 https://github.com/b-flo/warp-transducer.git
cd warp-transducer
mkdir build
# Please set WITH_OMP to ON or OFF.
cd build && cmake -DWITH_OMP="ON" .. && make
cd pytorch_binding && python3 -m pip install -e .
  6. If you want to work on optimization, you also need to build onnxruntime. Please clone the onnxruntime repository.

  7. Since espnet==202308 (the latest at the v0.2.0 release) requires protobuf<=3.20.1 while the latest onnx requires protobuf>=3.20.2, installation may fail. In this case, install espnet==202308 first, update protobuf to 3.20.3, and then install the other libraries.

Usage

Export models

  1. espnet_onnx can export pretrained models published on espnet_model_zoo. By default, exported files will be stored in ${HOME}/.cache/espnet_onnx/<tag_name>.
from espnet2.bin.asr_inference import Speech2Text
from espnet_onnx.export import ASRModelExport

m = ASRModelExport()

# download with espnet_model_zoo and export from pretrained model
m.export_from_pretrained('<tag name>', quantize=True)

# export from trained model
speech2text = Speech2Text(args)
m.export(speech2text, '<tag name>', quantize=True)
  2. You can export a pretrained model from a zipped file. The zip file must contain meta.yaml.
from espnet_onnx.export import ASRModelExport

m = ASRModelExport()
m.export_from_zip(
  'path/to/the/zipfile',
  tag_name='tag_name_for_zipped_model',
  quantize=True
)
  3. You can set some configurations for export. The available configurations are shown in the details for each model.
from espnet_onnx.export import ASRModelExport

m = ASRModelExport()
# Set maximum sequence length to 3000
m.set_export_config(max_seq_len=3000)
m.export_from_zip(
  'path/to/the/zipfile',
  tag_name='tag_name_for_zipped_model',
)
  4. You can easily optimize your model with the optimize option. If you want to fully optimize your model, you need to install the custom version of onnxruntime from here. Please read this document for more detail.
from espnet_onnx.export import ASRModelExport

m = ASRModelExport()
m.export_from_zip(
  'path/to/the/zipfile',
  tag_name='tag_name_for_zipped_model',
  optimize=True,
  quantize=True
)
  5. You can also export a model from the command line.
python -m espnet_onnx.export \
  --model_type asr \
  --input ${path_to_zip} \
  --tag transformer_lm \
  --apply_optimize \
  --apply_quantize

Inference

  1. For inference, tag_name or model_dir is used to load the onnx files. tag_name has to be defined in tag_config.yaml.
import librosa
from espnet_onnx import Speech2Text

speech2text = Speech2Text(tag_name='<tag name>')
# speech2text = Speech2Text(model_dir='path to the onnx directory')

y, sr = librosa.load('sample.wav', sr=16000)
nbest = speech2text(y)
  2. For streaming ASR, you can use the StreamingSpeech2Text class. The speech length should be the same as StreamingSpeech2Text.hop_size.
from espnet_onnx import StreamingSpeech2Text

stream_asr = StreamingSpeech2Text(tag_name)

# start streaming asr
stream_asr.start()
while streaming:
  wav = <some code to get wav>
  assert len(wav) == stream_asr.hop_size
  stream_text = stream_asr(wav)[0][0]

# You can get non-streaming asr result with end function
nbest = stream_asr.end()
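The loop above assumes every call receives exactly hop_size samples. A minimal sketch of splitting (and zero-padding) a recorded waveform into such fixed-size chunks, using plain Python lists (chunk_waveform and the hop size value are illustrative helpers, not part of the espnet_onnx API):

```python
def chunk_waveform(wav, hop_size):
    """Split a waveform into hop_size chunks, zero-padding the last one."""
    chunks = []
    for start in range(0, len(wav), hop_size):
        chunk = list(wav[start:start + hop_size])
        if len(chunk) < hop_size:
            chunk += [0.0] * (hop_size - len(chunk))  # pad the final chunk
        chunks.append(chunk)
    return chunks

# Every chunk now satisfies the len(wav) == hop_size assertion above.
chunks = chunk_waveform([0.1] * 10, hop_size=4)
```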

You can also simulate the streaming model on your wav file with the simulate function. Passing True as the second argument prints the intermediate streaming results, as in the following code.

import librosa
from espnet_onnx import StreamingSpeech2Text

stream_asr = StreamingSpeech2Text(tag_name)
y, sr = librosa.load('path/to/wav', sr=16000)
nbest = stream_asr.simulate(y, True)
# Processing audio with 6 processes.
# Result at position 0 :
# Result at position 1 :
# Result at position 2 : this
# Result at position 3 : this is
# Result at position 4 : this is a
# Result at position 5 : this is a
print(nbest[0][0])
# 'this is a pen'
  3. If you installed the custom version of onnxruntime, you can run the optimized model for inference. You don't have to change any code from the above; if the model was optimized, espnet_onnx will automatically load the optimized version.

  4. You can use only the HuBERT model as your frontend.

import numpy as np

from espnet_onnx.export import ASRModelExport

# export your model
tag_name = 'ESPnet pretrained model with hubert'
m = ASRModelExport()
m.export_from_pretrained(tag_name, optimize=True)

# load only the frontend model
from espnet_onnx.asr.frontend import Frontend
frontend = Frontend.get_frontend(tag_name)

# use the model in your application
import librosa
y, sr = librosa.load('wav file')
# y: (B, T)
# y_len: (B,)
feats = frontend(y[None,:], np.array([len(y)]))
  5. If you have torch installed in your environment, you can use the frontend in your training.
import torch

from espnet_onnx.asr.frontend import TorchFrontend
frontend = TorchFrontend.get_frontend(tag_name)  # load the pretrained frontend model

# use the model while training
import librosa
y, sr = librosa.load('wav file')

# You need to place your data on GPU,
# and specify the output shape as a tuple
y = torch.Tensor(y).unsqueeze(0).to('cuda')  # (1, wav_length)
output_shape = (batch_size, feat_length, feats_dims)  # placeholders for your shapes
feats = frontend(y, y.size(1), output_shape)

Text2Speech inference

  1. You can export TTS models in the same way as ASR models.
from espnet2.bin.tts_inference import Text2Speech
from espnet_onnx.export import TTSModelExport

m = TTSModelExport()

# download with espnet_model_zoo and export from pretrained model
m.export_from_pretrained('<tag name>', quantize=True)

# export from trained model
text2speech = Text2Speech(args)
m.export(text2speech, '<tag name>', quantize=True)
  2. You can generate wav files simply by using the Text2Speech class.
from espnet_onnx import Text2Speech

tag_name = 'kan-bayashi/ljspeech_vits'
text2speech = Text2Speech(tag_name, use_quantized=True)

text = 'Hello world!'
output_dict = text2speech(text) # inference with onnx model.
wav = output_dict['wav']
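The returned wav array can be written to disk with the standard-library wave module. A minimal sketch, assuming a mono float waveform in [-1, 1] and a 22050 Hz sampling rate (both are assumptions; check your model's configuration for the actual rate):

```python
import struct
import wave

def save_wav(samples, path, sample_rate=22050):
    """Write a float waveform in [-1, 1] as 16-bit mono PCM."""
    with wave.open(path, 'wb') as f:
        f.setnchannels(1)
        f.setsampwidth(2)           # 16-bit samples
        f.setframerate(sample_rate)
        pcm = b''.join(
            struct.pack('<h', int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        f.writeframes(pcm)

save_wav([0.0, 0.5, -0.5], 'sample.wav')
```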

How to use GPU with espnet_onnx

Install dependency.

First, you need the onnxruntime-gpu library instead of onnxruntime. Please follow this article to select and install the correct version of onnxruntime-gpu for your CUDA version.

Inference on GPU

Now you can speed up inference with a GPU. All you need to do is select the correct providers and pass them to the Speech2Text or StreamingSpeech2Text instance. See this article for more information about providers.

import librosa
from espnet_onnx import Speech2Text

PROVIDERS = ['CUDAExecutionProvider']
tag_name = 'some_tag_name'

speech2text = Speech2Text(
  tag_name,
  providers=PROVIDERS
)
y, sr = librosa.load('path/to/wav', sr=16000)
nbest = speech2text(y) # runs on GPU.

Note that some quantized models are not supported for GPU computation. If you get an error with a quantized model, please try the non-quantized model.
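One way to handle this gracefully is to keep a CPU fallback in the providers list, so inference still runs when CUDA (or a quantized kernel) is unavailable. A minimal sketch (choose_providers is an illustrative helper; the provider name strings are the ones onnxruntime uses, and in practice `available` would come from onnxruntime.get_available_providers()):

```python
def choose_providers(available, preferred=('CUDAExecutionProvider',)):
    """Pick preferred execution providers, always keeping a CPU fallback."""
    providers = [p for p in preferred if p in available]
    providers.append('CPUExecutionProvider')  # CPU fallback is always valid
    return providers

# On a CPU-only machine the preferred GPU provider is simply skipped.
providers = choose_providers(['CPUExecutionProvider'])
```

The resulting list can then be passed as the providers argument shown above.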

Changes from ESPnet

To avoid the cache problem, I modified some scripts from the original ESPnet implementation.

  1. Added <blank> before <sos>.

  2. Passed some torch.zeros() arrays to the model.

  3. Removed the first token in post-processing (removes the blank).

  4. Replaced make_pad_mask with a new implementation that can be converted to the ONNX format.

  5. Removed extend_pe() from the positional encoding module. The length of pe is 512 by default.
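To illustrate point 4: an ONNX-exportable padding mask can be built from a comparison between a position range and the sequence lengths, instead of in-place tensor indexing. A pure-Python sketch of the idea (this make_pad_mask is a simplified stand-in, not the actual espnet_onnx implementation):

```python
def make_pad_mask(lengths, max_len=None):
    """Return mask[i][t] == True where position t is padding for sequence i."""
    if max_len is None:
        max_len = max(lengths)
    # A range/length comparison like this maps directly onto ONNX ops
    # (Range, GreaterOrEqual), unlike mask tensors filled in-place.
    return [[t >= length for t in range(max_len)] for length in lengths]

mask = make_pad_mask([2, 4])
```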

Supported Archs

ASR: Supported architecture for ASR

TTS: Supported architecture for TTS

Developer's Guide

ASR: Developer's Guide

References

COPYRIGHT

Copyright (c) 2022 Masao Someki

Released under the MIT license

Author

Masao Someki

contact: masao.someki@gmail.com

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

espnet_onnx-0.2.1.tar.gz (97.8 kB)

Uploaded Source

Built Distribution

espnet_onnx-0.2.1-py3-none-any.whl (144.4 kB)

Uploaded Python 3

File details

Details for the file espnet_onnx-0.2.1.tar.gz.

File metadata

  • Download URL: espnet_onnx-0.2.1.tar.gz
  • Upload date:
  • Size: 97.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.10.12

File hashes

Hashes for espnet_onnx-0.2.1.tar.gz

  • SHA256: 32e4b375f119f10ce4b3bf088bdc9e6ba7cbf3a99b5bd577c3ddf22dba0210c5
  • MD5: 06c8d15c40f16e244bb1d9d005e51fbb
  • BLAKE2b-256: b1215ce746419dabf2bdeac8bd6dc75d74915b35ea7e45d7b645f1dae1f96d72

See more details on using hashes here.

File details

Details for the file espnet_onnx-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: espnet_onnx-0.2.1-py3-none-any.whl
  • Upload date:
  • Size: 144.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.10.12

File hashes

Hashes for espnet_onnx-0.2.1-py3-none-any.whl

  • SHA256: 325a68236b2a95d11d779edd790bf0372cdcc7a52ece5e9e2a1005953c6e3338
  • MD5: a00b6ca9f09baa2ec14a2a081169bf75
  • BLAKE2b-256: 5f93ed3983c38ae7d16e37a73a1cd66c8ab384c682d3314c76b6acf640850406

See more details on using hashes here.
