PyTorch implementation of the neural homomorphic vocoder
Project description
neural-homomorphic-vocoder
A neural vocoder based on the source-filter model, called the neural homomorphic vocoder
Install
pip install neural-homomorphic-vocoder
Usage
Usage of the NeuralHomomorphicVocoder class
- Input
- x: mel-filterbank
- cf0: continuous F0
- uv: u/v (voiced/unvoiced) symbol
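The continuous F0 (`cf0`) and u/v symbol inputs are typically derived from a frame-level F0 contour in which unvoiced frames are zero. The helper below is a hypothetical sketch of that conversion (linear interpolation through unvoiced regions), not the package's own preprocessing; the function name `f0_to_cf0_uv` is an assumption for illustration:

```python
import numpy as np

def f0_to_cf0_uv(f0):
    """Convert a frame-level F0 contour (0 in unvoiced frames) into a
    continuous F0 contour and a binary u/v symbol sequence."""
    f0 = np.asarray(f0, dtype=np.float64)
    uv = (f0 > 0).astype(np.float64)  # 1 = voiced, 0 = unvoiced
    voiced_idx = np.nonzero(f0 > 0)[0]
    if len(voiced_idx) == 0:
        return np.zeros_like(f0), uv
    # linearly interpolate F0 through unvoiced regions,
    # clamping to the nearest voiced value at the edges
    cf0 = np.interp(np.arange(len(f0)), voiced_idx, f0[voiced_idx])
    return cf0, uv
```

For example, `f0_to_cf0_uv([0, 100, 0, 200, 0])` yields a continuous contour `[100, 100, 150, 200, 200]` with u/v symbols `[0, 1, 0, 1, 0]`.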
import torch
from nhv import NeuralHomomorphicVocoder
net = NeuralHomomorphicVocoder(
fs=24000, # sampling frequency
fft_size=1024, # FFT size for the impulse response of the LTV filter
hop_size=256, # hop size in each mel-filterbank frame
in_channels=80, # input channels (i.e., dimension of mel-filterbank)
conv_channels=256, # channel size of LTV filter
ccep_size=222, # output ccep (complex cepstrum) size of LTV filter
out_channels=1, # output size of network
kernel_size=3, # kernel size of LTV filter
dilation_size=1, # dilation size of LTV filter
group_size=8, # group size of LTV filter
fmin=80, # min freq. for melspc
fmax=7600, # max freq. for melspc (recommend to use full-band)
roll_size=24, # frame size to get median to estimate logspc from melspc
n_ltv_layers=3, # number of layers in the LTV ccep generator
n_postfilter_layers=4, # number of layers in the output postfilter
n_ltv_postfilter_layers=1, # number of layers in the LTV postfilter (if ddsconv)
use_causal=False, # use causal conv LTV filter
use_reference_mag=False, # use reference logspc calculated from melspc
use_tanh=False, # apply tanh to output else linear
use_uvmask=False, # apply uv-based mask to harmonic
use_weight_norm=True, # apply weight norm to conv1d layer
conv_type="original", # LTV generator network type ["original", "ddsconv"]
postfilter_type=None, # postfilter network type ["None", "normal", "ddsconv"]
ltv_postfilter_type="conv", # LTV postfilter network type
# ["None", "normal", "ddsconv"]
ltv_postfilter_kernel_size=1024, # kernel size for LTV postfilter
scaler_file=None, # path to .pkl for internal scaling of melspc
# (dict["mlfb"] = sklearn.preprocessing.StandardScaler)
)
in_channels, hop_size = 80, 256  # must match the values passed above
B, T, D = 3, 100, in_channels  # batch_size, frame_size, n_mels
z = torch.randn(B, 1, T * hop_size)  # excitation noise
x = torch.randn(B, T, D)  # mel-filterbank
cf0 = torch.randn(B, T, 1)  # continuous F0
uv = torch.randn(B, T, 1)  # u/v symbol
y = net(z, torch.cat([x, cf0, uv], dim=-1))  # z: (B, 1, T * hop_size), c: (B, T, D + 2)
y = net._forward(z, cf0, uv)
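The `scaler_file` option expects a pickled dict mapping `"mlfb"` to a fitted `sklearn.preprocessing.StandardScaler`, as noted in the constructor comment. A minimal sketch of producing such a file follows; the random array stands in for real training mel-filterbank frames, and the file name `scaler.pkl` is arbitrary:

```python
import pickle

import numpy as np
from sklearn.preprocessing import StandardScaler

# Fit a StandardScaler on (n_frames, n_mels) mel-filterbank features.
# Random data stands in for features extracted from a training corpus.
mlfb_frames = np.random.randn(1000, 80)
scaler = {"mlfb": StandardScaler().fit(mlfb_frames)}

# Save in the dict["mlfb"] layout the constructor comment describes
with open("scaler.pkl", "wb") as f:
    pickle.dump(scaler, f)
```

The resulting path can then be passed as `scaler_file="scaler.pkl"` so the network normalizes mel-filterbank inputs internally.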
Features
- (2021/05/21): Train using kan-bayashi/ParallelWaveGAN with continuous F0 and u/v symbols
- (2021/05/24): Final FIR filter is implemented as a 1D causal convolution
- (2021/06/17): Implement depthwise separable convolution
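A depthwise separable convolution factorizes a standard convolution into a per-channel (depthwise) convolution followed by a 1x1 pointwise convolution, cutting parameters and compute. The module below is a generic PyTorch sketch of the idea, not this package's exact implementation:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise separable 1D convolution: a per-channel convolution
    (groups == in_channels) followed by a 1x1 pointwise convolution."""

    def __init__(self, in_channels, out_channels, kernel_size, dilation=1):
        super().__init__()
        padding = (kernel_size - 1) // 2 * dilation  # keep time length
        self.depthwise = nn.Conv1d(
            in_channels, in_channels, kernel_size,
            padding=padding, dilation=dilation, groups=in_channels)
        self.pointwise = nn.Conv1d(in_channels, out_channels, 1)

    def forward(self, x):  # x: (B, C, T)
        return self.pointwise(self.depthwise(x))

x = torch.randn(3, 256, 100)
y = DepthwiseSeparableConv1d(256, 256, kernel_size=3)(x)  # (3, 256, 100)
```

For 256 channels and kernel size 3, this needs 256*3 + 256*256 weights instead of 256*256*3 for a standard convolution.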
References
@inproceedings{liu20,
  title={Neural Homomorphic Vocoder},
  author={Z.~Liu and K.~Chen and K.~Yu},
  booktitle={Proc. Interspeech 2020},
  pages={240--244},
  year={2020}
}
Hashes for neural-homomorphic-vocoder-0.0.10.tar.gz (source distribution)

Algorithm | Hash digest
---|---
SHA256 | afad2f40a36f928d7f1f778b698845c173d77e3adf29afed4d8e1c4773a2d4cf
MD5 | fb852f730f6c0bc05ce8bc6e1244a9e4
BLAKE2b-256 | cc554ef7b23af976ab9a03ad6003a88f61b1db4cbb2e83ca0342036562853959
Hashes for neural_homomorphic_vocoder-0.0.10-py3-none-any.whl (built distribution)

Algorithm | Hash digest
---|---
SHA256 | 32e27a52dcdfbfc9b56eaf0edc5d36efaef0cc50781deaa4762e78dc15b79c42
MD5 | eb1b9fd3337c87592d640347e06271e9
BLAKE2b-256 | ac965fd9b5718d181342d58d2a32eff2831729ee16769e3f55cb75301f08e980