FunASR: A Fundamental End-to-End Speech Recognition Toolkit
ONNXRuntime-python
Install funasr_onnx
Install from pip:

```shell
pip install -U funasr_onnx
# For users in China, you can install with:
# pip install -U funasr_onnx -i https://mirror.sjtu.edu.cn/pypi/web/simple
```
Or install from source code:

```shell
git clone https://github.com/alibaba/FunASR.git && cd FunASR
cd funasr/runtime/python/onnxruntime
pip install -e ./
# For users in China, you can install with:
# pip install -e ./ -i https://mirror.sjtu.edu.cn/pypi/web/simple
```
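A quick way to confirm the installation succeeded is to import the package (a simple sanity check, not part of the official docs):

```python
# Verify that funasr_onnx is importable and show where it was installed.
import funasr_onnx
print(funasr_onnx.__file__)
```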
Inference with runtime
Speech Recognition
Paraformer
```python
from funasr_onnx import Paraformer
from pathlib import Path

model_dir = "damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch"
model = Paraformer(model_dir, batch_size=1, quantize=True)

wav_path = ['{}/.cache/modelscope/hub/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/example/asr_example.wav'.format(Path.home())]

result = model(wav_path)
print(result)
```
- `model_dir`: model name in ModelScope, or a local path downloaded from ModelScope. If a local path is given, it should contain `model.onnx`, `config.yaml`, and `am.mvn`.
- `batch_size`: `1` (default), the batch size used during inference.
- `device_id`: `-1` (default), infer on CPU. If you want to infer on GPU, set it to the GPU id (please make sure you have installed `onnxruntime-gpu`).
- `quantize`: `False` (default), load `model.onnx` in `model_dir`. If set to `True`, load `model_quant.onnx` in `model_dir`.
- `intra_op_num_threads`: `4` (default), the number of threads used for intra-op parallelism on CPU.

Input: wav file(s); supported types: `str`, `np.ndarray`, `List[str]`

Output: `List[str]`, the recognition results
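For example, the documented parameters can be combined to run the quantized model on a GPU with a larger batch. This is a minimal sketch; the model directory and wav files below are hypothetical placeholders, so adjust them to your environment:

```python
from funasr_onnx import Paraformer

# Hypothetical local model directory downloaded from ModelScope;
# with quantize=True it must contain model_quant.onnx, config.yaml, and am.mvn.
model_dir = "./models/paraformer-large"

# quantize=True loads model_quant.onnx; device_id=0 selects GPU 0
# (requires onnxruntime-gpu to be installed).
model = Paraformer(model_dir, batch_size=4, quantize=True, device_id=0)

wav_paths = ["audio1.wav", "audio2.wav"]  # hypothetical input files
for text in model(wav_paths):
    print(text)
```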
Paraformer-online
Voice Activity Detection
FSMN-VAD
```python
from funasr_onnx import Fsmn_vad
from pathlib import Path

model_dir = "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch"
wav_path = '{}/.cache/modelscope/hub/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/example/vad_example.wav'.format(Path.home())

model = Fsmn_vad(model_dir)
result = model(wav_path)
print(result)
```
- `model_dir`: model name in ModelScope, or a local path downloaded from ModelScope. If a local path is given, it should contain `model.onnx`, `config.yaml`, and `am.mvn`.
- `batch_size`: `1` (default), the batch size used during inference.
- `device_id`: `-1` (default), infer on CPU. If you want to infer on GPU, set it to the GPU id (please make sure you have installed `onnxruntime-gpu`).
- `quantize`: `False` (default), load `model.onnx` in `model_dir`. If set to `True`, load `model_quant.onnx` in `model_dir`.
- `intra_op_num_threads`: `4` (default), the number of threads used for intra-op parallelism on CPU.

Input: wav file(s); supported types: `str`, `np.ndarray`, `List[str]`

Output: the detected speech segments, e.g. `[[start_ms, end_ms], ...]` (timestamps in milliseconds)
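The segment timestamps can be used to cut the detected speech out of the waveform. A minimal sketch, assuming the `[[start_ms, end_ms], ...]` output format described above (the wav path is a hypothetical placeholder):

```python
import soundfile
from funasr_onnx import Fsmn_vad

model = Fsmn_vad("damo/speech_fsmn_vad_zh-cn-16k-common-pytorch")
wav_path = "vad_example.wav"  # hypothetical local file

segments = model(wav_path)[0]  # assumed format: [[start_ms, end_ms], ...]
speech, sample_rate = soundfile.read(wav_path)
for start_ms, end_ms in segments:
    # Convert millisecond timestamps to sample indices.
    chunk = speech[int(start_ms * sample_rate / 1000): int(end_ms * sample_rate / 1000)]
    print(f"speech segment {start_ms}-{end_ms} ms: {len(chunk)} samples")
```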
FSMN-VAD-online
```python
from funasr_onnx import Fsmn_vad_online
import soundfile
from pathlib import Path

model_dir = "damo/speech_fsmn_vad_zh-cn-16k-common-pytorch"
wav_path = '{}/.cache/modelscope/hub/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/example/vad_example.wav'.format(Path.home())
model = Fsmn_vad_online(model_dir)

# Online VAD: feed the waveform to the model chunk by chunk.
speech, sample_rate = soundfile.read(wav_path)
speech_length = speech.shape[0]

sample_offset = 0
step = 1600  # 100 ms per chunk at 16 kHz
param_dict = {'in_cache': []}
for sample_offset in range(0, speech_length, min(step, speech_length - sample_offset)):
    if sample_offset + step >= speech_length - 1:
        # Last chunk: shrink the step and flag the end of the stream.
        step = speech_length - sample_offset
        is_final = True
    else:
        is_final = False
    param_dict['is_final'] = is_final
    segments_result = model(audio_in=speech[sample_offset: sample_offset + step],
                            param_dict=param_dict)
    if segments_result:
        print(segments_result)
```
- `model_dir`: model name in ModelScope, or a local path downloaded from ModelScope. If a local path is given, it should contain `model.onnx`, `config.yaml`, and `am.mvn`.
- `batch_size`: `1` (default), the batch size used during inference.
- `device_id`: `-1` (default), infer on CPU. If you want to infer on GPU, set it to the GPU id (please make sure you have installed `onnxruntime-gpu`).
- `quantize`: `False` (default), load `model.onnx` in `model_dir`. If set to `True`, load `model_quant.onnx` in `model_dir`.
- `intra_op_num_threads`: `4` (default), the number of threads used for intra-op parallelism on CPU.

Input: wav file(s); supported types: `str`, `np.ndarray`, `List[str]`

Output: the detected speech segments, e.g. `[[start_ms, end_ms], ...]` (timestamps in milliseconds)
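In the streaming example, the chunk size of 1600 samples corresponds to 100 ms of audio at 16 kHz (16000 samples/s × 0.1 s). A small helper to derive the chunk size from a target latency (a hypothetical convenience function, not part of the funasr_onnx API):

```python
def chunk_samples(latency_ms: int, sample_rate: int = 16000) -> int:
    """Number of samples in one streaming chunk for a target latency."""
    return sample_rate * latency_ms // 1000

step = chunk_samples(100)  # 1600 samples, as in the example above
```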
Punctuation Restoration
CT-Transformer
```python
from funasr_onnx import CT_Transformer

model_dir = "damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch"
model = CT_Transformer(model_dir)

text_in = "跨境河流是养育沿岸人民的生命之源长期以来为帮助下游地区防灾减灾中方技术人员在上游地区极为恶劣的自然条件下克服巨大困难甚至冒着生命危险向印方提供汛期水文资料处理紧急事件中方重视印方在跨境河流问题上的关切愿意进一步完善双方联合工作机制凡是中方能做的我们都会去做而且会做得更好我请印度朋友们放心中国在上游的任何开发利用都会经过科学规划和论证兼顾上下游的利益"

result = model(text_in)
print(result[0])
```
- `model_dir`: model name in ModelScope, or a local path downloaded from ModelScope. If a local path is given, it should contain `model.onnx`, `config.yaml`, and `am.mvn`.
- `device_id`: `-1` (default), infer on CPU. If you want to infer on GPU, set it to the GPU id (please make sure you have installed `onnxruntime-gpu`).
- `quantize`: `False` (default), load `model.onnx` in `model_dir`. If set to `True`, load `model_quant.onnx` in `model_dir`.
- `intra_op_num_threads`: `4` (default), the number of threads used for intra-op parallelism on CPU.

Input: `str`, the raw text of the ASR result

Output: `List[str]`, the text with punctuation restored
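A typical use is to pipe raw ASR output straight into the punctuation model. A minimal sketch, assuming the input and output formats documented above (the wav file is a hypothetical placeholder):

```python
from funasr_onnx import Paraformer, CT_Transformer

asr = Paraformer("damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch")
punc = CT_Transformer("damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch")

raw_text = asr(["asr_example.wav"])[0]  # hypothetical wav file
print(punc(raw_text)[0])  # the same text with punctuation restored
```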
CT-Transformer-online
```python
from funasr_onnx import CT_Transformer_VadRealtime

model_dir = "damo/punc_ct-transformer_zh-cn-common-vad_realtime-vocab272727"
model = CT_Transformer_VadRealtime(model_dir)

text_in = "跨境河流是养育沿岸|人民的生命之源长期以来为帮助下游地区防灾减灾中方技术人员|在上游地区极为恶劣的自然条件下克服巨大困难甚至冒着生命危险|向印方提供汛期水文资料处理紧急事件中方重视印方在跨境河流问题上的关切|愿意进一步完善双方联合工作机制|凡是|中方能做的我们|都会去做而且会做得更好我请印度朋友们放心中国在上游的|任何开发利用都会经过科学|规划和论证兼顾上下游的利益"

vads = text_in.split("|")
rec_result_all = ""
param_dict = {"cache": []}
for vad in vads:
    result = model(vad, param_dict=param_dict)
    rec_result_all += result[0]
print(rec_result_all)
```
- `model_dir`: model name in ModelScope, or a local path downloaded from ModelScope. If a local path is given, it should contain `model.onnx`, `config.yaml`, and `am.mvn`.
- `device_id`: `-1` (default), infer on CPU. If you want to infer on GPU, set it to the GPU id (please make sure you have installed `onnxruntime-gpu`).
- `quantize`: `False` (default), load `model.onnx` in `model_dir`. If set to `True`, load `model_quant.onnx` in `model_dir`.
- `intra_op_num_threads`: `4` (default), the number of threads used for intra-op parallelism on CPU.

Input: `str`, the raw text of the ASR result (with `|` marking VAD segment boundaries)

Output: `List[str]`, the text with punctuation restored
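In the streaming example, `param_dict["cache"]` is reused across calls, which appears to be how the model keeps punctuation consistent between segments; each new utterance should therefore start with a fresh cache. A usage sketch based on the example above:

```python
def punctuate_stream(model, segments):
    """Punctuate a sequence of VAD text segments, threading the cache through."""
    param_dict = {"cache": []}  # fresh state for each new utterance
    text = ""
    for segment in segments:
        text += model(segment, param_dict=param_dict)[0]
    return text
```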
Performance benchmark
Please refer to the benchmark.
Acknowledgements
- This project is maintained by the FunASR community.
- We partially refer to SWHL for the ONNX Runtime implementation (only for the Paraformer model).