
FunASR: A Fundamental End-to-End Speech Recognition Toolkit


ONNXRuntime-python

Install funasr-onnx

Install from pip

pip install -U funasr-onnx
# For users in China, you can install from a mirror:
# pip install -U funasr-onnx -i https://mirror.sjtu.edu.cn/pypi/web/simple
# To export .onnx files, install modelscope and funasr as well:
pip install -U modelscope funasr
# For users in China:
# pip install -U modelscope funasr -i https://mirror.sjtu.edu.cn/pypi/web/simple

Or install from source code

git clone https://github.com/alibaba/FunASR.git && cd FunASR
cd funasr/runtime/python/onnxruntime
pip install -e ./
# For users in China, you can install from a mirror:
# pip install -e ./ -i https://mirror.sjtu.edu.cn/pypi/web/simple

Inference with runtime

Speech Recognition

Paraformer

from funasr_onnx import Paraformer
from pathlib import Path

model_dir = "damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
model = Paraformer(model_dir, batch_size=1, quantize=True)

wav_path = ['{}/.cache/modelscope/hub/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/example/asr_example.wav'.format(Path.home())]

result = model(wav_path)
print(result)
  • model_dir: model name on ModelScope, or a local path downloaded from ModelScope. A local path must contain model.onnx, config.yaml, and am.mvn
  • batch_size: 1 (default), batch size used during inference
  • device_id: -1 (default), run inference on CPU. To run on GPU, set it to the GPU id (make sure onnxruntime-gpu is installed)
  • quantize: False (default), load model.onnx from model_dir; if True, load model_quant.onnx instead
  • intra_op_num_threads: 4 (default), number of threads used for intra-op parallelism on CPU

Input: wav file; supported types: str (file path), np.ndarray (waveform), List[str] (batch of file paths)

Output: List[str]: recognition results
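The np.ndarray input type is handy when the audio is already in memory. Below is a minimal sketch, independent of FunASR, of reading a 16 kHz 16-bit mono wav into a float32 array with the stdlib `wave` module; the resulting array is what you would pass as `model(audio)` (the demo file written here is synthetic silence, purely for illustration):

```python
import wave
import numpy as np

def read_wav_as_float32(path: str) -> np.ndarray:
    """Read a 16-bit mono wav file into a float32 array scaled to [-1, 1]."""
    with wave.open(path, "rb") as f:
        assert f.getsampwidth() == 2 and f.getnchannels() == 1
        pcm = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)
    return pcm.astype(np.float32) / 32768.0

# Demo: write one second of silence at 16 kHz, then read it back.
with wave.open("demo.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(16000)
    f.writeframes(np.zeros(16000, dtype=np.int16).tobytes())

audio = read_wav_as_float32("demo.wav")
print(audio.dtype, audio.shape)  # float32 (16000,)
```

The same array (or a list of file paths) can then be handed to the Paraformer instance directly.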

Paraformer-online

Voice Activity Detection

FSMN-VAD

from funasr_onnx import Fsmn_vad
from pathlib import Path

model_dir = "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch"
wav_path = '{}/.cache/modelscope/hub/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/example/vad_example.wav'.format(Path.home())

model = Fsmn_vad(model_dir)

result = model(wav_path)
print(result)
  • model_dir: model name on ModelScope, or a local path downloaded from ModelScope. A local path must contain model.onnx, config.yaml, and am.mvn
  • batch_size: 1 (default), batch size used during inference
  • device_id: -1 (default), run inference on CPU. To run on GPU, set it to the GPU id (make sure onnxruntime-gpu is installed)
  • quantize: False (default), load model.onnx from model_dir; if True, load model_quant.onnx instead
  • intra_op_num_threads: 4 (default), number of threads used for intra-op parallelism on CPU

Input: wav file; supported types: str (file path), np.ndarray (waveform), List[str] (batch of file paths)

Output: detected voice activity segments, given as [start_ms, end_ms] pairs
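The segment timestamps can be mapped back onto the waveform to cut out the voiced regions. A sketch assuming millisecond timestamps and 16 kHz audio (the segment list below is made up for illustration; real values come from `model(wav_path)`):

```python
import numpy as np

SAMPLE_RATE = 16000

def cut_segments(speech: np.ndarray, segments_ms):
    """Slice the waveform at [start_ms, end_ms] boundaries."""
    clips = []
    for start_ms, end_ms in segments_ms:
        start = int(start_ms * SAMPLE_RATE / 1000)
        end = int(end_ms * SAMPLE_RATE / 1000)
        clips.append(speech[start:end])
    return clips

speech = np.zeros(SAMPLE_RATE * 5, dtype=np.float32)  # 5 s of audio
segments = [[120, 1800], [2400, 4750]]                # hypothetical VAD output
clips = cut_segments(speech, segments)
print([len(c) for c in clips])  # → [26880, 37600]
```

Each clip can then be fed to the recognizer separately, which is the usual VAD → ASR pipeline.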

FSMN-VAD-online

from funasr_onnx import Fsmn_vad_online
import soundfile
from pathlib import Path

model_dir = "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch"
wav_path = '{}/.cache/modelscope/hub/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/example/vad_example.wav'.format(Path.home())

model = Fsmn_vad_online(model_dir)


# Streaming (online) VAD: feed the audio in fixed-size chunks
speech, sample_rate = soundfile.read(wav_path)
speech_length = speech.shape[0]

step = 1600  # samples per chunk (100 ms at 16 kHz)
param_dict = {'in_cache': []}
for sample_offset in range(0, speech_length, step):
    # Mark the last chunk so the model can flush its internal state
    param_dict['is_final'] = sample_offset + step >= speech_length
    segments_result = model(audio_in=speech[sample_offset: sample_offset + step],
                            param_dict=param_dict)
    if segments_result:
        print(segments_result)
  • model_dir: model name on ModelScope, or a local path downloaded from ModelScope. A local path must contain model.onnx, config.yaml, and am.mvn
  • batch_size: 1 (default), batch size used during inference
  • device_id: -1 (default), run inference on CPU. To run on GPU, set it to the GPU id (make sure onnxruntime-gpu is installed)
  • quantize: False (default), load model.onnx from model_dir; if True, load model_quant.onnx instead
  • intra_op_num_threads: 4 (default), number of threads used for intra-op parallelism on CPU

Input: audio chunks as np.ndarray (or a wav file path for offline use)

Output: detected voice activity segments, given as [start_ms, end_ms] pairs
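The chunking bookkeeping in the streaming loop above can be factored into a small generator. The sketch below (no model involved, synthetic signal only) demonstrates the offset and is_final logic on its own:

```python
import numpy as np

def iter_chunks(speech: np.ndarray, step: int = 1600):
    """Yield (chunk, is_final) pairs of at most `step` samples each."""
    n = len(speech)
    for offset in range(0, n, step):
        yield speech[offset: offset + step], offset + step >= n

speech = np.zeros(4000, dtype=np.float32)  # 0.25 s at 16 kHz
chunks = list(iter_chunks(speech))
print([(len(c), final) for c, final in chunks])
# → [(1600, False), (1600, False), (800, True)]
```

Note the final chunk may be shorter than `step`; slicing past the end of the array simply clips, so no explicit tail handling is needed.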

Punctuation Restoration

CT-Transformer

from funasr_onnx import CT_Transformer

model_dir = "damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch"
model = CT_Transformer(model_dir)

text_in="跨境河流是养育沿岸人民的生命之源长期以来为帮助下游地区防灾减灾中方技术人员在上游地区极为恶劣的自然条件下克服巨大困难甚至冒着生命危险向印方提供汛期水文资料处理紧急事件中方重视印方在跨境河流问题上的关切愿意进一步完善双方联合工作机制凡是中方能做的我们都会去做而且会做得更好我请印度朋友们放心中国在上游的任何开发利用都会经过科学规划和论证兼顾上下游的利益"
result = model(text_in)
print(result[0])
  • model_dir: model name on ModelScope, or a local path downloaded from ModelScope. A local path must contain model.onnx, config.yaml, and am.mvn
  • device_id: -1 (default), run inference on CPU. To run on GPU, set it to the GPU id (make sure onnxruntime-gpu is installed)
  • quantize: False (default), load model.onnx from model_dir; if True, load model_quant.onnx instead
  • intra_op_num_threads: 4 (default), number of threads used for intra-op parallelism on CPU

Input: str, raw (unpunctuated) ASR output text

Output: the punctuated text (result[0])

CT-Transformer-online

from funasr_onnx import CT_Transformer_VadRealtime

model_dir = "damo/punc_ct-transformer_zh-cn-common-vad_realtime-vocab272727"
model = CT_Transformer_VadRealtime(model_dir)

text_in = "跨境河流是养育沿岸|人民的生命之源长期以来为帮助下游地区防灾减灾中方技术人员|在上游地区极为恶劣的自然条件下克服巨大困难甚至冒着生命危险|向印方提供汛期水文资料处理紧急事件中方重视印方在跨境河流问题上的关切|愿意进一步完善双方联合工作机制|凡是|中方能做的我们|都会去做而且会做得更好我请印度朋友们放心中国在上游的|任何开发利用都会经过科学|规划和论证兼顾上下游的利益"

vads = text_in.split("|")
rec_result_all = ""
param_dict = {"cache": []}
for vad in vads:
    result = model(vad, param_dict=param_dict)
    rec_result_all += result[0]

print(rec_result_all)
  • model_dir: model name on ModelScope, or a local path downloaded from ModelScope. A local path must contain model.onnx, config.yaml, and am.mvn
  • device_id: -1 (default), run inference on CPU. To run on GPU, set it to the GPU id (make sure onnxruntime-gpu is installed)
  • quantize: False (default), load model.onnx from model_dir; if True, load model_quant.onnx instead
  • intra_op_num_threads: 4 (default), number of threads used for intra-op parallelism on CPU

Input: str, raw (unpunctuated) ASR output text, one VAD segment per call

Output: the punctuated text for that segment (result[0])
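The per-segment loop above keeps one shared `param_dict` cache across calls so the model can carry context over VAD boundaries. The accumulation pattern itself can be sketched with a stand-in punctuator (`fake_punc` below is a placeholder for illustration, not the real CT_Transformer_VadRealtime call):

```python
def fake_punc(text, param_dict):
    """Placeholder punctuator: appends a comma and records the call
    in the shared cache, mimicking the (text, param_dict) interface."""
    param_dict["cache"].append(text)
    return [text + "，"]

segments = "第一段|第二段|第三段".split("|")  # "|" marks VAD boundaries
param_dict = {"cache": []}                    # shared across all calls
rec_result_all = ""
for seg in segments:
    rec_result_all += fake_punc(seg, param_dict)[0]
print(rec_result_all)  # → 第一段，第二段，第三段，
```

The key point is that `param_dict` is created once and reused; creating a fresh cache per segment would discard the cross-segment context.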

Performance benchmark

Please refer to the benchmark.

Acknowledgements

  1. This project is maintained by the FunASR community.
  2. The ONNX Runtime code partially refers to SWHL (Paraformer model only).

