Toolkit to work with disordered voice databases

Project description

DiVR (Disordered Voice Recognition) - Benchmark

This repository provides tooling for working with various disordered voice databases, using the divr-diagnosis label standardization toolkit.

Installation

pip install divr-benchmark

How to use

While you can generate your own tasks, we provide a battery of tasks that we have used across a wide range of experiments. You can read more about them in Tasks.

Generating tasks

You can generate new tasks from the databases (AVFAD, MEEI, SVD, Torgo, UASpeech, UncommonVoice, VOICED). Of these, SVD, Torgo, and VOICED are publicly accessible, and the scripts can download the data automatically, provided the databases are still available at the expected URLs.

from divr_diagnosis import diagnosis_maps
# Dataset and DatabaseFunc are used below; they are assumed to be
# importable from divr_benchmark alongside Benchmark
from divr_benchmark import Benchmark, Dataset, DatabaseFunc

benchmark = Benchmark(
    storage_path="/home/user/divr_benchmark/storage",
    version="v1",
    sample_rate=16000,
)
diag_map = diagnosis_maps.CaRLab_2025()


async def filter_func(database_func: DatabaseFunc):
    # You can filter the data by min_tasks, so that every speaker has at least N audios
    # this is called 'task' because in most datasets the audios represent different vocal tasks
    db = await database_func(name="svd", min_tasks=None)
    diag_level = diag_map.max_diag_level

    def filter_unclassified(tasks): # example of filtering tasks by label
        # You can also get task.speaker_id which can be used to count
        # number of diag/speaker and restrict which diags are used for the dataset
        return [task for task in tasks if not task.label.incompletely_classified]

    return Dataset(
        train=filter_unclassified(db.all_train(level=diag_level)),
        val=filter_unclassified(db.all_val(level=diag_level)),
        test=filter_unclassified(db.all_test(level=diag_level)),
    )

benchmark.generate_task(
    filter_func=filter_func,
    task_path="/home/user/divr_benchmark/tasks/all",
    diagnosis_map=diag_map,
    allow_incomplete_classification=False,
)
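The comments in the filter above mention counting diagnoses per speaker and restricting which diagnoses enter the dataset. As a further illustration of that idea, here is a small self-contained sketch (plain Python, independent of divr_benchmark; the `(speaker_id, diagnosis)` pairs and the `restrict_rare_diagnoses` helper are invented for this example) that drops diagnoses observed for fewer than N distinct speakers:

```python
def restrict_rare_diagnoses(tasks, min_speakers=2):
    """Keep only tasks whose diagnosis has at least `min_speakers`
    distinct speakers. `tasks` is a list of (speaker_id, diagnosis)
    pairs; a real filter would read task.speaker_id and task.label."""
    speakers_per_diag = {}
    for speaker_id, diag in tasks:
        speakers_per_diag.setdefault(diag, set()).add(speaker_id)
    keep = {d for d, s in speakers_per_diag.items() if len(s) >= min_speakers}
    return [t for t in tasks if t[1] in keep]

tasks = [
    ("s1", "dysphonia"), ("s2", "dysphonia"),
    ("s3", "laryngitis"),           # only one speaker -> dropped
    ("s4", "healthy"), ("s5", "healthy"),
]
filtered = restrict_rare_diagnoses(tasks, min_speakers=2)
# "laryngitis" is removed; the other four tasks remain
```

The same pattern plugs into filter_func above in place of (or alongside) filter_unclassified.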

Using existing tasks

Almost all functions in the library accept a level parameter, which decides the level of diagnosis at which the operation is performed. These parameters default to the maximum diagnostic level when left as None, i.e. the narrowest diagnosis, furthest from binary detection.
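To make the idea of diagnosis levels concrete, here is a toy sketch (not the divr_diagnosis data model; the hierarchy, names, and `at_level` helper are invented for illustration) where level 0 is the coarsest split, equivalent to binary detection, and deeper levels narrow the diagnosis:

```python
# A diagnosis can be viewed as a path from the root of a hierarchy
# down to a leaf. Level 0 is the coarsest split (healthy vs.
# pathological); higher levels are progressively narrower diagnoses.
diagnosis_path = ["pathological", "organic", "vocal_fold_polyp"]

def at_level(path, level=None):
    """Return the label at the requested level; None means the
    deepest (narrowest) level available, mirroring the library's
    default of the maximum diagnostic level."""
    if level is None:
        return path[-1]
    return path[min(level, len(path) - 1)]

at_level(diagnosis_path, level=0)     # "pathological" (binary detection)
at_level(diagnosis_path, level=None)  # "vocal_fold_polyp" (narrowest)
```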

import torch
from torch import nn
from divr_diagnosis import diagnosis_maps
from divr_benchmark import Benchmark

benchmark = Benchmark(
    storage_path="/home/user/divr_benchmark/storage",
    version="v1",
    sample_rate=16000,
)
# The diagnosis map here can be different from the one used for generating the tasks
# the library will automatically map diagnosis which can be mapped to the new map
# automatically, and unmapped items will be left as unclassified
diag_map = diagnosis_maps.CaRLab_2025()

task = benchmark.load_task(
    task_path="/home/user/divr_benchmark/tasks/all",
    diag_level=None,
    diagnosis_map=diag_map,
    load_audios=True,
)

# Training at default level of diagnosis
for train_point in task.train:
    point_id = train_point.id
    audio = train_point.audio
    label = task.diag_to_index(
        diag=train_point.label,
        level=None,
    )

# Training at root/0th level of diagnosis. Equivalent to binary detection
for train_point in task.train:
    point_id = train_point.id
    audio = train_point.audio
    label = task.diag_to_index(
        diag=train_point.label,
        level=0,
    )

# Validating
for val_point in task.val:
    point_id = val_point.id
    audio = val_point.audio
    label = task.diag_to_index(
        diag=val_point.label,
        level=None,
    )

# Testing
for test_point in task.test:
    point_id = test_point.id
    audio = test_point.audio
    label = task.diag_to_index(
        diag=test_point.label,
        level=None,
    )

# Class weights for cross entropy loss
class_weights = task.train_class_weights(level=None) # level defaults to max level of label
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor(class_weights))

# Convert a predicted index back to a diagnosis
# (index is the class index predicted by your model)
diagnosis = task.index_to_diag(
    index=index,
    level=None,
)
print(diagnosis.name)

# Get all unique diagnoses in the data
diagnosis_names = task.unique_diagnosis(level=None)
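For reference, class weights for an imbalanced label set are commonly taken as the inverse of each class's frequency. The sketch below (plain Python; it is not the library's actual implementation of train_class_weights, just one common scheme) computes inverse-frequency weights normalised so the mean weight is 1:

```python
def inverse_frequency_weights(labels):
    """Weight each class by inverse frequency, normalised so the
    mean weight is 1.0. `labels` is any iterable of class indices."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    total = sum(counts.values())
    # inverse frequency: rarer classes get larger weights
    raw = {c: total / k for c, k in counts.items()}
    mean = sum(raw.values()) / len(raw)
    return [raw[c] / mean for c in sorted(counts)]

weights = inverse_frequency_weights([0, 0, 0, 1])
# class 1 is three times rarer than class 0, so its weight is 3x larger
```

Weights like these are what you would pass to nn.CrossEntropyLoss(weight=...) as in the listing above.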

How to cite

Coming soon
