The official PyTorch implementation of Fast Context-based Pitch Estimation (FCPE)
Project description
TorchFCPE
Usage
from torchfcpe import spawn_bundled_infer_model
import torch
import librosa

# configure device and target hop size
device = 'cpu'
sr = 16000
hop_size = 160

# load audio and force mono
audio, sr = librosa.load('test.wav', sr=sr)
audio = librosa.to_mono(audio)
audio_length = len(audio)
f0_target_length = (audio_length // hop_size) + 1
audio = torch.from_numpy(audio).float().unsqueeze(0).unsqueeze(-1).to(device)

# load the bundled pretrained model
model = spawn_bundled_infer_model(device=device)

# infer F0
f0 = model.infer(
    audio,
    sr=sr,
    decoder_mode='local_argmax',  # decoder mode
    threshold=0.006,              # threshold for V/UV decision
    f0_min=80,                    # minimum pitch in Hz
    f0_max=880,                   # maximum pitch in Hz
    interp_uv=False,              # whether to interpolate over unvoiced frames
    output_interp_target_length=f0_target_length,  # interpolate output to target length
)
print(f0)
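
The call returns the F0 curve as a torch tensor of per-frame pitch values in Hz. A minimal post-processing sketch, assuming the single-clip call above yields a tensor whose extra batch and channel dimensions can be squeezed away (the names below are illustrative, not part of the library API):

# collapse batch/channel dimensions to a 1-D numpy array of per-frame F0 values (Hz)
f0_np = f0.squeeze().cpu().numpy()
print(f0_np.shape)  # expected to match f0_target_length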
Download files
Download the file for your platform.
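In practice, pip selects the appropriate file automatically: pip install torchfcpe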
Source Distributions
No source distribution files are available for this release.
Built Distribution
torchfcpe-0.0.2-py3-none-any.whl (40.2 MB)
Hashes for torchfcpe-0.0.2-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | d188dec17423cf483c4eb435ff721dc7573b82c959d6af6b552cd36332d5a875
MD5 | cfcf9dc920f8a30c5ce28c669871e2a8
BLAKE2b-256 | 776689ecba202e243d13650f5a2f4ce1d7f15f071b51a740719d1d8e6972e339
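
To verify a downloaded wheel against the SHA256 digest above, a minimal sketch using Python's standard hashlib module (the local file path is an assumption about where the wheel was saved):

import hashlib

# SHA256 digest published in the table above
expected = "d188dec17423cf483c4eb435ff721dc7573b82c959d6af6b552cd36332d5a875"

# assumed download location; adjust to wherever the file actually is
with open("torchfcpe-0.0.2-py3-none-any.whl", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

print("OK" if actual == expected else "SHA256 mismatch")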