Toolkit to work with disordered voice databases

Project description

DiVR (Disordered Voice Recognition) - Benchmark

This repository provides tooling for working with various disordered voice databases, using the divr-diagnosis label standardization toolkit.

Installation

pip install divr-benchmark

How to use

While you can generate your own tasks, we provide a battery of tasks that we have used across a wide range of experiments. You can read more about them in Tasks.

Generating tasks

You can generate new tasks from the databases (AVFAD, MEEI, SVD, Torgo, UASpeech, UncommonVoice, VOICED). Of these, SVD, Torgo, and VOICED are publicly accessible, and the scripts can download the data automatically, provided the databases are still available at the expected URLs.

from divr_diagnosis import diagnosis_maps
# DatabaseFunc and Dataset are used below; import path assumed to be divr_benchmark
from divr_benchmark import Benchmark, DatabaseFunc, Dataset

benchmark = Benchmark(
    storage_path="/home/user/divr_benchmark/storage",
    version="v1",
    sample_rate=16000,
)
diag_map = diagnosis_maps.CaRLab_2025()


async def filter_func(database_func: DatabaseFunc):
    # You can filter the data by min_tasks, so that every speaker has at least N audios;
    # this is called 'task' because in most datasets the audios represent different vocal tasks
    db = await database_func(name="svd", min_tasks=None)
    diag_level = diag_map.max_diag_level

    def filter_unclassified(tasks):  # example of filtering tasks by label
        # You can also get task.speaker_id which can be used to count
        # number of diag/speaker and restrict which diags are used for the dataset
        return [task for task in tasks if not task.label.incompletely_classified]

    return Dataset(
        train=filter_unclassified(db.all_train(level=diag_level)),
        val=filter_unclassified(db.all_val(level=diag_level)),
        test=filter_unclassified(db.all_test(level=diag_level)),
    )

benchmark.generate_task(
    filter_func=filter_func,
    task_path="/home/user/divr_benchmark/tasks/all",
    diagnosis_map=diag_map,
    allow_incomplete_classification=False,
)
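The speaker_id hint in the comments above can be turned into a concrete filter. Below is a minimal, self-contained sketch of counting distinct speakers per diagnosis and keeping only diagnoses with enough speakers. It uses a mock Task dataclass with hypothetical speaker_id and label fields, standing in for the real divr_benchmark task objects, whose exact class is not shown here:

```python
from dataclasses import dataclass


# Hypothetical stand-in for the task objects returned by db.all_train();
# only the two fields used by the filter are modeled.
@dataclass
class Task:
    speaker_id: str
    label: str


def keep_common_diagnoses(tasks, min_speakers=2):
    """Keep only tasks whose diagnosis has at least `min_speakers` distinct speakers."""
    speakers_per_diag = {}
    for task in tasks:
        speakers_per_diag.setdefault(task.label, set()).add(task.speaker_id)
    common = {
        diag
        for diag, speakers in speakers_per_diag.items()
        if len(speakers) >= min_speakers
    }
    return [task for task in tasks if task.label in common]


tasks = [
    Task("s1", "healthy"),
    Task("s2", "healthy"),
    Task("s3", "laryngitis"),  # only one speaker -> dropped
]
filtered = keep_common_diagnoses(tasks, min_speakers=2)
print([t.label for t in filtered])  # ['healthy', 'healthy']
```

A function like this can be applied inside filter_func alongside filter_unclassified before building the Dataset.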

Using existing tasks

Almost all functions of the library accept a level parameter, which decides at which level of diagnosis the operation is performed. If left as None, these parameters default to the maximum diagnostic level, i.e. the narrowest diagnosis, furthest from binary detection.

from divr_diagnosis import diagnosis_maps
from divr_benchmark import Benchmark, Diagnosis

benchmark = Benchmark(
    storage_path="/home/user/divr_benchmark/storage",
    version="v1",
    sample_rate=16000,
)
# The diagnosis map here can differ from the one used to generate the tasks:
# the library automatically maps diagnoses that can be mapped to the new map,
# and unmapped items are left as unclassified
diag_map = diagnosis_maps.CaRLab_2025()

task = benchmark.load_task(
    task_path="/home/user/divr_benchmark/tasks/all",
    diag_level=None,
    diagnosis_map=diag_map,
    load_audios=True,
)

# Training at default level of diagnosis
for train_point in task.train:
    point_id = train_point.id
    audio = train_point.audio
    label = task.diag_to_index(
        diag=train_point.label,
        level=None,
    )

# Training at root/0th level of diagnosis. Equivalent to binary detection
for train_point in task.train:
    point_id = train_point.id
    audio = train_point.audio
    label = task.diag_to_index(
        diag=train_point.label,
        level=0,
    )

# Validating
for val_point in task.val:
    point_id = val_point.id
    audio = val_point.audio
    label = task.diag_to_index(
        diag=val_point.label,
        level=None,
    )

# Testing
for test_point in task.test:
    point_id = test_point.id
    audio = test_point.audio
    label = task.diag_to_index(
        diag=test_point.label,
        level=None,
    )

# Class weights for cross entropy loss (requires PyTorch)
import torch
from torch import nn

class_weights = task.train_class_weights(level=None)  # level defaults to the max diagnosis level
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor(class_weights))

# Convert a predicted index (e.g. your model's argmax output) back to a diagnosis
diagnosis = task.index_to_diag(
    index=index,
    level=None,
)
print(diagnosis.name)

# Get all unique diagnoses in the data
diagnosis_names = task.unique_diagnosis(level=None)
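The exact weighting scheme behind task.train_class_weights is not documented above. A common choice that it may resemble (an assumption, not the library's confirmed formula) is inverse class frequency, weight_c = N / (num_classes * count_c), which up-weights rare diagnoses:

```python
from collections import Counter


def inverse_frequency_weights(labels):
    """Inverse-frequency class weights: weight_c = N / (num_classes * count_c)."""
    counts = Counter(labels)
    total = len(labels)
    num_classes = len(counts)
    return {label: total / (num_classes * count) for label, count in counts.items()}


labels = ["healthy"] * 6 + ["pathological"] * 2
weights = inverse_frequency_weights(labels)
print(weights)  # {'healthy': 0.666..., 'pathological': 2.0}
```

Whatever scheme the library uses, the resulting list plugs directly into nn.CrossEntropyLoss as shown above.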

How to cite

Coming soon

