This package provides tools for evaluating audio generation models.

Project description

Audio Generation Evaluation

This toolbox aims to unify audio generation model evaluation for easier future comparison.

Quick Start

First, prepare the environment:

git clone git@github.com:haoheliu/audioldm_eval.git
cd audioldm_eval
pip install -e .

Second, generate the test dataset:

python3 gen_test_file.py

Finally, perform a test run. A reference result is attached for comparison.

python3 test.py # Evaluate and save the json file to disk (example/paired.json)

Evaluation metrics

We have the following metrics in this toolbox:

  • FD: Frechet distance, computed with embeddings from PANNs, a state-of-the-art audio classification model (a formula sketch follows this list)
  • FAD: Frechet audio distance
  • ISc: Inception score
  • KID: Kernel inception distance
  • KL: KL divergence (softmax over logits)
  • KL_Sigmoid: KL divergence (sigmoid over logits)
  • PSNR: Peak signal-to-noise ratio
  • SSIM: Structural similarity index measure
  • LSD: Log-spectral distance
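
For orientation, the Frechet-style metrics (FD and FAD) reduce to a closed-form distance between the Gaussian statistics of two embedding sets. The sketch below is only an illustration on hypothetical NumPy embedding arrays; it is not necessarily the toolbox's exact implementation (which builds on torch-fidelity and pretrained audio classifiers such as PANNs).

import numpy as np
from scipy import linalg

def frechet_distance(emb_a, emb_b):
    # emb_a, emb_b: (num_clips, embedding_dim) arrays of audio embeddings.
    # Illustrative sketch only; not the package's exact code.
    mu_a, mu_b = emb_a.mean(axis=0), emb_b.mean(axis=0)
    cov_a = np.cov(emb_a, rowvar=False)
    cov_b = np.cov(emb_b, rowvar=False)
    # FD = ||mu_a - mu_b||^2 + Tr(cov_a + cov_b - 2 * (cov_a cov_b)^{1/2})
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))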

The evaluation function accepts the paths of two folders as its main parameters.

  1. If the two folders contain the same number of files with matching filenames, the evaluation runs in paired mode.
  2. If the two folders contain different numbers of files, or files with different names, the evaluation runs in unpaired mode.

The following metrics are only calculated in paired mode: KL, KL_Sigmoid, PSNR, SSIM, LSD. In unpaired mode, these metrics return -1.
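
The file-matching rule can be pictured with the small sketch below; is_paired is a hypothetical helper written for illustration, not part of the package API.

import os

def is_paired(folder_a, folder_b):
    # Hypothetical helper: two folders are "paired" when they contain
    # the same number of files with identical filenames.
    files_a = sorted(os.listdir(folder_a))
    files_b = sorted(os.listdir(folder_b))
    return files_a == files_b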

Evaluation on AudioCaps and AudioSet

The AudioCaps test set consists of audio files with multiple text annotations. To evaluate the performance of AudioLDM, we randomly selected one annotation per audio file, which can be found in the accompanying json file.

Given that the AudioSet evaluation set contains approximately 20,000 audio files, evaluating on the entire set may be impractical for audio generative models. We therefore randomly selected 2,000 audio files for evaluation, with the corresponding annotations available in a json file.
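
If you want to build a comparable subset of your own data, the random selection can be reproduced along the lines below. The filenames and seed here are illustrative only; the subsets actually used are the ones shipped in the accompanying json files.

import json
import random

# Illustrative sketch: sample 2,000 entries from a hypothetical metadata file.
# The official AudioCaps/AudioSet subsets are provided as json files with the toolbox.
random.seed(0)
with open("audioset_eval_metadata.json") as f:
    entries = json.load(f)
subset = random.sample(entries, 2000)
with open("audioset_eval_subset.json", "w") as f:
    json.dump(subset, f, indent=2)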

For more information on our evaluation process, please refer to our paper.

Example

import torch
from audioldm_eval import EvaluationHelper

# GPU acceleration is preferred
device = torch.device(f"cuda:{0}")

generation_result_path = "example/paired"
target_audio_path = "example/reference"

# Initialize a helper instance
evaluator = EvaluationHelper(16000, device)

# Perform evaluation; the results will be printed and saved as json
metrics = evaluator.main(
    generation_result_path,
    target_audio_path,
    limit_num=None # If you only intend to evaluate X (int) pairs of data, set limit_num=X
)
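
After the run, the saved metrics can be read back from the json file. The path below matches the example in test.py; the exact keys depend on whether the run was paired or unpaired.

import json

# Path assumed from the example above (test.py writes example/paired.json).
with open("example/paired.json") as f:
    results = json.load(f)
for metric, value in results.items():
    print(f"{metric}: {value}")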

TODO

  • Add pretrained AudioLDM model.
  • Add CLAP score.

Cite this repo

If you find this tool useful, please consider citing:

@article{liu2023audioldm,
  title={AudioLDM: Text-to-Audio Generation with Latent Diffusion Models},
  author={Liu, Haohe and Chen, Zehua and Yuan, Yi and Mei, Xinhao and Liu, Xubo and Mandic, Danilo and Wang, Wenwu and Plumbley, Mark D},
  journal={arXiv preprint arXiv:2301.12503},
  year={2023}
}

Reference

https://github.com/toshas/torch-fidelity

https://github.com/v-iashin/SpecVQGAN
