🧠 NeuroVoc: From Spikes to Speech 🔊
A biologically plausible vocoder for auditory perception modeling and cochlear implant simulation.
🔍 Overview
NeuroVoc is a flexible, biologically inspired vocoder that reconstructs audio signals from simulated auditory nerve activity. It is designed to support both normal hearing (NH) and electrical hearing (EH) models, allowing for a seamless comparison of auditory perception under different hearing conditions.
🧭 Diagram Explanation
The diagram above illustrates the NeuroVoc processing pipeline:
- Sound — An input waveform (e.g., speech) is passed to an auditory model.
- Hearing Model — This model (e.g., normal hearing or cochlear implant simulation) transforms the sound into a neural representation.
- Neurogram — The output is a time–frequency matrix of spike counts, simulating auditory nerve activity.
- Decoder — The neurogram is then converted back into an acoustic waveform using an inverse short-time Fourier transform (STFT)-based decoder.
This modular flow enables the flexible substitution of different models or model parameters while maintaining a consistent reconstruction backend.
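The decoder step can be pictured with a minimal sketch: treat the neurogram as an STFT-like magnitude matrix and invert it with an inverse STFT. This is an illustration of the idea only, not NeuroVoc's actual decoder; the dimensions, sample rate, and random phase below are assumptions for the toy example.

```python
import numpy as np
from scipy.signal import istft

# Toy neurogram: rows are frequency bins, columns are time frames.
# With nperseg=256, an STFT has 256 // 2 + 1 = 129 frequency bins.
rng = np.random.default_rng(0)
neurogram = rng.random((129, 50))  # stand-in for normalized spike counts

# Treat the neurogram as an STFT magnitude, attach a (here random) phase,
# and invert back to a time-domain waveform.
phase = np.exp(2j * np.pi * rng.random(neurogram.shape))
_, waveform = istft(neurogram * phase, fs=16000, nperseg=256)
```

In practice the hearing model determines how spike counts map onto frequency bins, and the decoder handles phase recovery more carefully than the random phase used here.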
📁 Repository Structure
neurovoc/
├── neurovoc/ # Core vocoder framework (Python package)
├── experiments/ # Scripts for generating the figures from the paper
├── data/ # DIN (Digits-in-Noise) test data and paper data
├── tests/ # Unit tests
The main package can be found in the neurovoc folder. The experiments folder holds the notebooks that were used to generate the plots in the paper. The online Digits-in-Noise test platform can be found in this repository.
🚀 Getting Started
📦 Installation Options
🔧 Option 1: Install from Source (Development Version)
If you want to modify or contribute to the codebase:
git clone https://github.com/jacobdenobel/neurovoc.git
cd neurovoc
pip install .
🧪 Option 2: Install from PyPI (Stable Release)
If you just want to use the package:
pip install neurovoc
💻 Command Line Interface (CLI)
NeuroVoc provides a flexible CLI for simulation and vocoding. Once installed, you can use the neurovoc command. If you want to know more about a command, or see which options are available, add the --help flag. For example:
neurovoc generate bruce --help
🔧 Generate a Neurogram
These commands take an audio waveform and convert it into a neurogram (neural spike representation):
neurovoc generate bruce input.wav output.pkl
neurovoc generate specres input.wav output.pkl
neurovoc generate ace input.wav output.pkl
Each model supports its own optional flags, like --n-fibers-per-bin, --n-mels, or --version for ACE.
🎧 Reconstruct Audio from Neurogram
This command converts a saved neurogram back into an audio waveform using the inverse STFT-based decoder. Use options such as --n-hop, --n-fft, or --target-sr to control the reconstruction parameters.
neurovoc reconstruct output.pkl reconstructed.wav
🧪 Full Vocoder Run (Simulate + Reconstruct)
These commands run a full simulation + reconstruction cycle in one go:
neurovoc vocode bruce input.wav output.wav
neurovoc vocode specres input.wav output.wav
neurovoc vocode ace input.wav output.wav
Add --plot to visualize the original versus the reconstructed signal.
🧩 Processing a Custom Neurogram
If you want to apply the NeuroVoc reconstruction logic to a neurogram generated by a method not included in this repository, you can wrap your custom matrix in a Neurogram object:
from neurovoc import Neurogram
neurogram = Neurogram(
    dt=...,           # float: time resolution of the neurogram (in seconds)
    frequencies=...,  # np.array (m, 1): frequency bins corresponding to rows
    data=...,         # np.array (m, t): matrix of normalized spike counts or neural activity
    source=...,       # str: label describing the source/method
)
📌 Note: Currently, only mel-scale frequency bins are supported for decoding.
Once constructed, save it to disk with:
neurogram.save("my_custom_ng.pkl")
Then reconstruct it using the CLI:
neurovoc reconstruct my_custom_ng.pkl reconstructed.wav
This makes it easy to plug external auditory models into the NeuroVoc decoding pipeline.
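Since only mel-scale frequency bins are currently supported for decoding, a custom neurogram's frequencies argument should be mel-spaced. The sketch below computes mel-spaced center frequencies with the standard HTK mel formula; whether NeuroVoc uses exactly these bins (or this mel variant) is an assumption, so check the package source for the authoritative values.

```python
import numpy as np

def mel_frequencies(n_mels, fmin=0.0, fmax=8000.0):
    """Center frequencies spaced evenly on the (HTK) mel scale."""
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    mels = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels)
    return mel_to_hz(mels)

# 64 mel-spaced bins between 0 Hz and 8 kHz, e.g. for a (64, t) neurogram
freqs = mel_frequencies(64)
```

These frequencies could then be passed as the frequencies argument of the Neurogram object shown above.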
🧠 Citation
If you use NeuroVoc in your work, please cite the following:
@misc{denobel2025spikesspeechneurovoc,
  title={From Spikes to Speech: NeuroVoc -- A Biologically Plausible Vocoder Framework for Auditory Perception and Cochlear Implant Simulation},
  author={Jacob de Nobel and Jeroen J. Briaire and Thomas H. W. Baeck and Anna V. Kononova and Johan H. M. Frijns},
  year={2025},
  eprint={2506.03959},
  archivePrefix={arXiv},
  primaryClass={cs.SD},
  url={https://arxiv.org/abs/2506.03959},
}
📫 Contact
For questions or feedback, contact nobeljpde1@liacs.leidenuniv.nl
Or open an issue in this repository.
🛠 License
This project is licensed under the MIT License. See the LICENSE file for details.