Saarbrücken Voice Database Downloader and Reader
Project description
This Python module provides the capability to download and organize the Saarbrücker Stimmdatenbank (Saarbrücken Voice Database, https://stimmdb.coli.uni-saarland.de/) using SQLAlchemy (sqlalchemy.org).
Features
Auto-download the database file at https://stimmdb.coli.uni-saarland.de
Auto-download the associated datasets from Zenodo: https://zenodo.org/records/16874898
Supports incremental, on-demand downloads per pathology
Stores database information as a local SQLite3 file
Database and datasets are accessed via SQLAlchemy ORM (Object Relational Mapper) classes for ease of use
Acoustic and EGG signals can be retrieved as NumPy arrays directly
Supports filters to specify study conditions on pathologies, speaker’s gender and age, recording types, etc.
Fixes known errors in the dataset (e.g., corrupted files and swapped acoustic/EGG channels)
Install
pip install sbvoicedb
If you prefer to download the full dataset from Zenodo manually (data.zip, 17.9 GB), download the file first and unzip its contents to a directory, making sure the zip file's internal structure is preserved. If you place the extracted data in a my_svd folder, its directory structure should look like this:
.../my_svd/
└── data/
    ├── 1/
    │   ├── sentences/
    │   │   ├── 1-phrase.nsp
    │   │   └── 1-phrase-egg.egg
    │   └── vowels/
    │       ├── 1-a_h.nsp
    │       ├── 1-a_h-egg.egg
    │       ⋮
    │       └── 1-u_n-egg.egg
    ├── 2/
    │   ├── sentences/
    │   │   ├── 2-phrase.nsp
    │   │   └── 2-phrase-egg.egg
    │   └── vowels/
    │       ├── 2-a_h.nsp
    │       ├── 2-a_h-egg.egg
    │       ⋮
    │       └── 2-u_n-egg.egg
    ⋮
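If you script the manual extraction, Python's standard zipfile module preserves the archive's internal layout. The snippet below is a self-contained sketch: it builds a tiny stand-in data.zip (the file content is a placeholder, not real SVD data) and extracts it the same way you would the real 17.9 GB archive.

```python
import tempfile
import zipfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    tmp = Path(tmp)

    # Build a tiny stand-in archive; with the real data.zip, skip this step.
    zip_path = tmp / "data.zip"
    with zipfile.ZipFile(zip_path, "w") as zf:
        zf.writestr("data/1/vowels/1-a_h.nsp", b"placeholder")

    # extractall() keeps the data/<session>/... structure intact.
    target = tmp / "my_svd"
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(target)

    extracted = (target / "data" / "1" / "vowels" / "1-a_h.nsp").exists()
    print(extracted)  # True
```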
Examples
from sbvoicedb import SbVoiceDb
dbpath = '<path to the root directory of the extracted database>'
# to create a database instance
db = SbVoiceDb(dbpath)
# - if no downloaded database data is found, the database tables are downloaded automatically (but not the recording files)
This creates a database instance. If dbpath does not contain the SQLite database file, sbvoice.db, it is created and populated from the downloaded CSV file. If any portion of the dataset is already present in the data subdirectory, the recordings table is populated as well. These population steps are visualized with progress bars in the console.
By default, no dataset files are downloaded at this point. You can check how much of the dataset is available with:
print(f"{db.number_of_sessions_downloaded}/{db.number_of_all_sessions}")
The db.number_of_all_sessions property should always return 2043.
The SQLite database has four tables: pathologies, speakers, recording_sessions, and recordings. The contents of these tables can be accessed with:
db.get_pathology_count()
db.get_speaker_count()
db.get_session_count()
db.get_recording_count()
db.iter_pathologies()
db.iter_speakers()
db.iter_sessions()
db.iter_recordings()
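As a sketch of how the iter_* methods can be consumed (the stub generator below is a hypothetical stand-in for db.iter_sessions(); the real method yields SQLAlchemy ORM rows, not dicts), you can preview a handful of rows without materializing a whole table:

```python
from itertools import islice

# Hypothetical stand-in for db.iter_sessions(); real rows are ORM objects.
def iter_sessions_stub():
    for i in range(2043):
        yield {"id": i + 1, "speaker_age": 30 + (i % 40)}

# islice pulls only the first three rows from the generator.
preview = list(islice(iter_sessions_stub(), 3))
ids = [row["id"] for row in preview]
print(ids)  # [1, 2, 3]
```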
Your study may not require all the recordings. In such a case, you can set filters on each table when creating the database object. For example, the following creates a subset of the database comprising only recordings of sustained /a/ or /i/ at normal pitch, uttered by women aged 50 to 70 with either a normal voice or a diagnosis of Laryngitis:
from sbvoicedb import SbVoiceDb, Pathology, Speaker, RecordingSession, Recording

db_laryngitis = SbVoiceDb(
    dbpath,
    pathology_filter=Pathology.name == "Laryngitis",
    include_healthy=True,
    speaker_filter=Speaker.gender == "w",
    session_filter=RecordingSession.speaker_age.between(50, 70),
    recording_filter=Recording.utterance.in_(("a_n", "i_n")),
)
print(f"number of pathologies found: {db_laryngitis.get_pathology_count()}")
print(f"number of recording sessions found: {db_laryngitis.get_session_count()}")
print(f"number of unique speakers: {db_laryngitis.get_speaker_count()}")
print(f"number of recordings: {db_laryngitis.get_recording_count()}")
number of pathologies found: 1
number of recording sessions found: 45
number of unique speakers: 44
number of recordings: 90
You can iterate over the rows of any of the tables:
# iterate over included pathologies
for patho in db_laryngitis.iter_pathologies():
    print(f'{patho.id}: {patho.name} ({patho.downloaded})')

# iterate over included speakers
for speaker in db_laryngitis.iter_speakers():
    print(f'{speaker.id}: {speaker.gender}')

# iterate over included recording sessions
for session in db_laryngitis.iter_sessions():
    print(f'{session.id}: speaker_id={session.speaker_id}, speaker_age={session.speaker_age}, speaker_health={session.type}')

# iterate over included recordings
for rec in db_laryngitis.iter_recordings():
    print(f'{rec.id}: session_id={rec.session_id}, utterance={rec.utterance}, nspfile={rec.nspfile}, eggfile={rec.eggfile}')
To retrieve the acoustic and EGG data, use Recording.nspdata and Recording.eggdata:
import numpy as np
from matplotlib import pyplot as plt
rec = next(db_laryngitis.iter_recordings())
t = np.arange(rec.length)/rec.rate
fig, axes = plt.subplots(2, 1, sharex=True)
axes[0].plot(t,rec.nspdata)
axes[0].set_ylabel('acoustic data')
axes[1].plot(t,rec.eggdata)
axes[1].set_ylabel('EGG data')
axes[1].set_xlabel('time (s)')
plt.tight_layout()
plt.show()
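Because nspdata and eggdata are plain NumPy arrays, standard array tools apply directly. The sketch below computes short-time RMS energy; a synthetic 100 Hz tone stands in for rec.nspdata, and the 50 kHz rate matches the SVD's sampling rate.

```python
import numpy as np

# Stand-in for rec.nspdata / rec.rate: one second of a 100 Hz tone at 50 kHz.
rate = 50000
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 100 * t)

# Short-time RMS energy over non-overlapping 25 ms frames.
frame = int(0.025 * rate)                      # 1250 samples per frame
n_frames = len(signal) // frame                # 40 frames
frames = signal[: n_frames * frame].reshape(n_frames, frame)
rms = np.sqrt(np.mean(frames ** 2, axis=1))    # one RMS value per frame
print(rms.shape)  # (40,)
```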
File details
Details for the file sbvoicedb-0.4.0.tar.gz.

File metadata
- Download URL: sbvoicedb-0.4.0.tar.gz
- Size: 41.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 1b2baf1765c8011c5ef3749556a7b8da674060fb2242270d3ab2fa80863be4d4 |
| MD5 | 617d0d5b984ed9b7b2926cdd1e74b4d0 |
| BLAKE2b-256 | fca3139865a599d2705014713a57df99251894330a443480bbe9c8d88f6a3215 |
Provenance
The following attestation bundles were made for sbvoicedb-0.4.0.tar.gz:

Publisher: pub.yml on tikuma-lsuhsc/python-sbvoicedb

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: sbvoicedb-0.4.0.tar.gz
- Subject digest: 1b2baf1765c8011c5ef3749556a7b8da674060fb2242270d3ab2fa80863be4d4
- Sigstore transparency entry: 738125683
- Permalink: tikuma-lsuhsc/python-sbvoicedb@17eec743bfe4a74eed5476b36d26de09f9b1818b
- Branch / Tag: refs/tags/v0.4.0
- Owner: https://github.com/tikuma-lsuhsc
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: pub.yml@17eec743bfe4a74eed5476b36d26de09f9b1818b
- Trigger Event: push
File details
Details for the file sbvoicedb-0.4.0-py3-none-any.whl.

File metadata
- Download URL: sbvoicedb-0.4.0-py3-none-any.whl
- Size: 41.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | af7e7ae5a46c79ddaae8ef0588e4f9d8e910eae464eb90e43b40841a6be879c9 |
| MD5 | 898e48ae9188089ee7996cfb42e3ca6c |
| BLAKE2b-256 | 33b27d9b520b12832fd87d416a378c285dcc2f7f3fc9625617e7b22bc77ea6f7 |
Provenance
The following attestation bundles were made for sbvoicedb-0.4.0-py3-none-any.whl:

Publisher: pub.yml on tikuma-lsuhsc/python-sbvoicedb

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: sbvoicedb-0.4.0-py3-none-any.whl
- Subject digest: af7e7ae5a46c79ddaae8ef0588e4f9d8e910eae464eb90e43b40841a6be879c9
- Sigstore transparency entry: 738125687
- Permalink: tikuma-lsuhsc/python-sbvoicedb@17eec743bfe4a74eed5476b36d26de09f9b1818b
- Branch / Tag: refs/tags/v0.4.0
- Owner: https://github.com/tikuma-lsuhsc
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: pub.yml@17eec743bfe4a74eed5476b36d26de09f9b1818b
- Trigger Event: push