
FunASR: A Fundamental End-to-End Speech Recognition Toolkit


ONNXRuntime-python

Install funasr-onnx

install from pip

pip install -U funasr-onnx
# For users in China, you can install via the mirror:
# pip install -U funasr-onnx -i https://mirror.sjtu.edu.cn/pypi/web/simple

# To export .onnx files yourself, also install modelscope and funasr:
pip install -U modelscope funasr
# For users in China:
# pip install -U modelscope funasr -i https://mirror.sjtu.edu.cn/pypi/web/simple

or install from source code

git clone https://github.com/alibaba/FunASR.git && cd FunASR
cd funasr/runtime/python/onnxruntime
pip install -e ./
# For users in China, you can install via the mirror:
# pip install -e ./ -i https://mirror.sjtu.edu.cn/pypi/web/simple
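
A quick sanity check for either install path (this simply verifies that the package imports):

python -c "from funasr_onnx import Paraformer; print('funasr-onnx is installed')"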

Inference with runtime

Speech Recognition

Paraformer

from funasr_onnx import Paraformer
from pathlib import Path

model_dir = "damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
model = Paraformer(model_dir, batch_size=1, quantize=True)

wav_path = ['{}/.cache/modelscope/hub/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/example/asr_example.wav'.format(Path.home())]

result = model(wav_path)
print(result)
  • model_dir: the model name on ModelScope, or a local path downloaded from ModelScope. If a local path is given, it should contain model.onnx, config.yaml, and am.mvn
  • batch_size: 1 (default), the batch size used during inference
  • device_id: -1 (default), run inference on CPU. To run on GPU, set it to the GPU id (make sure onnxruntime-gpu is installed); see the sketch after this list
  • quantize: False (default), load model.onnx from model_dir; if set to True, load model_quant.onnx instead
  • intra_op_num_threads: 4 (default), the number of threads used for intra-op parallelism on CPU
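
For example, a minimal sketch of the non-default options (assuming onnxruntime-gpu is installed and model_quant.onnx is present in the model directory):

# Quantized model on GPU 0, with 8 intra-op CPU threads
model = Paraformer(model_dir, batch_size=1, quantize=True,
                   device_id=0, intra_op_num_threads=8)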

Input: wav file; supported input types: str (file path), np.ndarray (waveform), List[str] (batch of file paths)

Output: List[str], the recognition results
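
Since np.ndarray input is supported, you can also pass a waveform you have already loaded into memory; a minimal sketch, assuming 16 kHz mono audio as the model expects:

import soundfile

speech, sample_rate = soundfile.read(wav_path[0])  # load the 16 kHz waveform
result = model(speech)  # same call, with an np.ndarray instead of a path
print(result)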

Paraformer-online

Voice Activity Detection

FSMN-VAD

from funasr_onnx import Fsmn_vad
from pathlib import Path

model_dir = "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch"
wav_path = '{}/.cache/modelscope/hub/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/example/vad_example.wav'.format(Path.home())

model = Fsmn_vad(model_dir)

result = model(wav_path)
print(result)
  • model_dir: the model name on ModelScope, or a local path downloaded from ModelScope. If a local path is given, it should contain model.onnx, config.yaml, and am.mvn
  • batch_size: 1 (default), the batch size used during inference
  • device_id: -1 (default), run inference on CPU. To run on GPU, set it to the GPU id (make sure onnxruntime-gpu is installed)
  • quantize: False (default), load model.onnx from model_dir; if set to True, load model_quant.onnx instead
  • intra_op_num_threads: 4 (default), the number of threads used for intra-op parallelism on CPU

Input: wav file; supported input types: str (file path), np.ndarray (waveform), List[str] (batch of file paths)

Output: the detected speech segments, each as a [start_ms, end_ms] pair in milliseconds
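
To act on the detected segments, a hypothetical post-processing sketch (assuming result[0] holds the [start_ms, end_ms] pairs for the first input file):

import soundfile

speech, sample_rate = soundfile.read(wav_path)
for start_ms, end_ms in result[0]:
    # Convert milliseconds to sample indices and slice out the voiced chunk
    begin = int(start_ms / 1000 * sample_rate)
    end = int(end_ms / 1000 * sample_rate)
    print(start_ms, end_ms, speech[begin:end].shape)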

FSMN-VAD-online

from funasr_onnx import Fsmn_vad_online
import soundfile
from pathlib import Path

model_dir = "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch"
wav_path = '{}/.cache/modelscope/hub/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/example/vad_example.wav'.format(Path.home())

model = Fsmn_vad_online(model_dir)


# Streaming (online) VAD: feed the audio to the model in fixed-size chunks
speech, sample_rate = soundfile.read(wav_path)
speech_length = speech.shape[0]

step = 1600  # 1600 samples = 100 ms at 16 kHz
param_dict = {'in_cache': []}
for sample_offset in range(0, speech_length, step):
    # Mark the last chunk so the model can finalize its internal state
    param_dict['is_final'] = sample_offset + step >= speech_length
    segments_result = model(audio_in=speech[sample_offset: sample_offset + step],
                            param_dict=param_dict)
    if segments_result:
        print(segments_result)
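
Note that the same param_dict must be reused across chunks: the model keeps its streaming state in param_dict['in_cache'], and setting is_final on the last chunk lets it emit any pending segment.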
  • model_dir: the model name on ModelScope, or a local path downloaded from ModelScope. If a local path is given, it should contain model.onnx, config.yaml, and am.mvn
  • batch_size: 1 (default), the batch size used during inference
  • device_id: -1 (default), run inference on CPU. To run on GPU, set it to the GPU id (make sure onnxruntime-gpu is installed)
  • quantize: False (default), load model.onnx from model_dir; if set to True, load model_quant.onnx instead
  • intra_op_num_threads: 4 (default), the number of threads used for intra-op parallelism on CPU

Input: wav file; supported input types: str (file path), np.ndarray (waveform), List[str] (batch of file paths)

Output: the detected speech segments (in milliseconds), emitted incrementally as chunks are processed

Punctuation Restoration

CT-Transformer

from funasr_onnx import CT_Transformer

model_dir = "damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch"
model = CT_Transformer(model_dir)

text_in="跨境河流是养育沿岸人民的生命之源长期以来为帮助下游地区防灾减灾中方技术人员在上游地区极为恶劣的自然条件下克服巨大困难甚至冒着生命危险向印方提供汛期水文资料处理紧急事件中方重视印方在跨境河流问题上的关切愿意进一步完善双方联合工作机制凡是中方能做的我们都会去做而且会做得更好我请印度朋友们放心中国在上游的任何开发利用都会经过科学规划和论证兼顾上下游的利益"
result = model(text_in)
print(result[0])
  • model_dir: the model name on ModelScope, or a local path downloaded from ModelScope. If a local path is given, it should contain model.onnx, config.yaml, and am.mvn
  • device_id: -1 (default), run inference on CPU. To run on GPU, set it to the GPU id (make sure onnxruntime-gpu is installed)
  • quantize: False (default), load model.onnx from model_dir; if set to True, load model_quant.onnx instead
  • intra_op_num_threads: 4 (default), the number of threads used for intra-op parallelism on CPU

Input: str, the raw (unpunctuated) text of an ASR result

Output: the punctuation-restored text; result[0] in the example above is the restored string

CT-Transformer-online

from funasr_onnx import CT_Transformer_VadRealtime

model_dir = "damo/punc_ct-transformer_zh-cn-common-vad_realtime-vocab272727"
model = CT_Transformer_VadRealtime(model_dir)

text_in = "跨境河流是养育沿岸|人民的生命之源长期以来为帮助下游地区防灾减灾中方技术人员|在上游地区极为恶劣的自然条件下克服巨大困难甚至冒着生命危险|向印方提供汛期水文资料处理紧急事件中方重视印方在跨境河流问题上的关切|愿意进一步完善双方联合工作机制|凡是|中方能做的我们|都会去做而且会做得更好我请印度朋友们放心中国在上游的|任何开发利用都会经过科学|规划和论证兼顾上下游的利益"

vads = text_in.split("|")
rec_result_all = ""
param_dict = {"cache": []}
for vad in vads:
    # The shared cache carries punctuation context across VAD segments
    result = model(vad, param_dict=param_dict)
    rec_result_all += result[0]

print(rec_result_all)
  • model_dir: the model name on ModelScope, or a local path downloaded from ModelScope. If a local path is given, it should contain model.onnx, config.yaml, and am.mvn
  • device_id: -1 (default), run inference on CPU. To run on GPU, set it to the GPU id (make sure onnxruntime-gpu is installed)
  • quantize: False (default), load model.onnx from model_dir; if set to True, load model_quant.onnx instead
  • intra_op_num_threads: 4 (default), the number of threads used for intra-op parallelism on CPU

Input: str, the raw (unpunctuated) text of an ASR result

Output: the punctuation-restored text; result[0] in the example above is the restored string for each segment
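
Putting the pieces together, a minimal end-to-end sketch (assuming, per the Output notes above, that the ASR call returns a list of transcript strings and that both models have been downloaded):

from funasr_onnx import Paraformer, CT_Transformer

asr = Paraformer("damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch")
punc = CT_Transformer("damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch")

raw_text = asr(wav_path)[0]   # unpunctuated transcript (wav_path as in the ASR example)
print(punc(raw_text)[0])      # transcript with punctuation restored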

Performance benchmark

Please refer to the benchmark.

Acknowledgements

  1. This project is maintained by the FunASR community.
  2. The onnxruntime implementation partially references work by SWHL (Paraformer model only).
