audiobox-aesthetics
Unified automatic quality assessment for speech, music, and sound.
- Paper: arXiv / Meta AI
- Blogpost: ai.meta.com
Installation
- Install via pip
pip install audiobox_aesthetics
- Install directly from source
This repository requires Python 3.9+ and PyTorch 2.2 or later. To install from source, clone this repo and run:
pip install -e .
Pre-trained Models
| Model | S3 | HuggingFace |
|---|---|---|
| All axes | checkpoint.pt | HF Repo |
Usage
How to run prediction using CLI:
- Create a JSONL file with the following format:
{"path":"/path/to/a.wav"}
{"path":"/path/to/b.flac"}
...
{"path":"/path/to/z.wav"}
or, if you only want to predict aesthetic scores for a specific time range:
{"path":"/path/to/a.wav", "start_time":0, "end_time": 5}
{"path":"/path/to/b.flac", "start_time":3, "end_time": 10}
and save it as input.jsonl
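The input file can also be generated programmatically; a minimal sketch (the audio paths below are placeholders):

```python
import json

# Hypothetical list of audio files to score
paths = ["/path/to/a.wav", "/path/to/b.flac"]

# Write one JSON object per line (JSONL)
with open("input.jsonl", "w") as f:
    for p in paths:
        f.write(json.dumps({"path": p}) + "\n")
```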
- Run the following command:
audio-aes input.jsonl --batch-size 100 > output.jsonl
If you haven't downloaded the checkpoint, the script will try to download it automatically. Otherwise, you can provide the path via --ckpt /path/to/checkpoint.pt
If you have SLURM, run the following command
audio-aes input.jsonl --batch-size 100 --remote --array 5 --job-dir $HOME/slurm_logs/ --chunk 1000 > output.jsonl
Please adjust CPU & GPU settings using --slurm-cpu and --slurm-gpu depending on your nodes.
- The output file will contain the same number of rows as input.jsonl. Each row is a JSON-formatted dictionary containing predictions for 4 axes. Check the following table for more info:
| Axes name | Full name |
|---|---|
| CE | Content Enjoyment |
| CU | Content Usefulness |
| PC | Production Complexity |
| PQ | Production Quality |
Output line example:
{"CE": 5.146, "CU": 5.779, "PC": 2.148, "PQ": 7.220}
- (Extra) If you want to extract only one axis (e.g. CE), post-process the output file with the jq utility: jq '.CE' output.jsonl > output-aes_ce.txt
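If jq is not available, the same extraction can be sketched in a few lines of Python; the sample line below is copied from the output example above:

```python
import json

# One line of output.jsonl (taken from the example above)
line = '{"CE": 5.146, "CU": 5.779, "PC": 2.148, "PQ": 7.220}'

# Extract a single axis, e.g. Content Enjoyment (CE)
ce = json.loads(line)["CE"]
```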
How to run prediction from a Python script or interpreter:
- Infer from file path
from audiobox_aesthetics.infer import initialize_predictor
predictor = initialize_predictor()
predictor.forward([{"path":"/path/to/a.wav"}, {"path":"/path/to/b.flac"}])
- Infer from torch tensor
import torchaudio
from audiobox_aesthetics.infer import initialize_predictor
predictor = initialize_predictor()
wav, sr = torchaudio.load("/path/to/a.wav")
predictor.forward([{"path": wav, "sample_rate": sr}])
Evaluation dataset
We released our evaluation dataset, consisting of annotation scores for the 4 aesthetic axes.
Here, we show an example of how to read and re-map each annotation to the actual audio file.
{
"data_path": "/your_path/LibriTTS/train-clean-100/1363/139304/1363_139304_000011_000000.wav",
"Production_Quality": [8.0, 8.0, 8.0, 8.0, 8.0, 9.0, 8.0, 5.0, 8.0, 8.0],
"Production_Complexity": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
"Content_Enjoyment": [8.0, 6.0, 8.0, 5.0, 8.0, 8.0, 8.0, 6.0, 8.0, 6.0],
"Content_Usefulness": [8.0, 6.0, 8.0, 7.0, 8.0, 9.0, 8.0, 6.0, 10.0, 7.0]
}
- Recognize the dataset name from data_path. In this example, it is LibriTTS.
- Replace "/your_path/" with the directory of your downloaded LibriTTS copy.
- Each axis contains 10 scores annotated by 10 different human annotators.
| data_path | URL |
|---|---|
| LibriTTS | https://openslr.org/60/ |
| cv-corpus-13.0-2023-03-09 | https://commonvoice.mozilla.org/en/datasets |
| EARS | https://sp-uhh.github.io/ears_dataset/ |
| MUSDB18 | https://sigsep.github.io/datasets/musdb.html |
| musiccaps | https://www.kaggle.com/datasets/googleai/musiccaps |
| (audioset) unbalanced_train_segments | https://research.google.com/audioset/dataset/index.html |
| PAM | https://zenodo.org/records/10737388 |
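The remapping steps above can be sketched as follows; the record is copied from the example, and "/data/LibriTTS_root/" is an assumed local dataset location:

```python
import json
from statistics import mean

# Annotation record copied from the example above
record = json.loads('''{
  "data_path": "/your_path/LibriTTS/train-clean-100/1363/139304/1363_139304_000011_000000.wav",
  "Production_Quality": [8.0, 8.0, 8.0, 8.0, 8.0, 9.0, 8.0, 5.0, 8.0, 8.0],
  "Production_Complexity": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
  "Content_Enjoyment": [8.0, 6.0, 8.0, 5.0, 8.0, 8.0, 8.0, 6.0, 8.0, 6.0],
  "Content_Usefulness": [8.0, 6.0, 8.0, 7.0, 8.0, 9.0, 8.0, 6.0, 10.0, 7.0]
}''')

# Replace the "/your_path/" placeholder with a local dataset root (assumed path)
local_path = record["data_path"].replace("/your_path/", "/data/LibriTTS_root/")

# Each axis holds 10 annotator scores; aggregate them, e.g. by taking the mean
mean_scores = {axis: mean(scores)
               for axis, scores in record.items() if axis != "data_path"}
```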
License
The majority of audiobox-aesthetics is licensed under CC-BY 4.0, as found in the LICENSE file. However, portions of the project are available under separate license terms: https://github.com/microsoft/unilm is licensed under MIT license.
Citation
If you find this repository useful, please cite the following BibTeX entry:
@article{tjandra2025aes,
title={Meta Audiobox Aesthetics: Unified Automatic Quality Assessment for Speech, Music, and Sound},
author={Andros Tjandra and Yi-Chiao Wu and Baishan Guo and John Hoffman and Brian Ellis and Apoorv Vyas and Bowen Shi and Sanyuan Chen and Matt Le and Nick Zacharov and Carleigh Wood and Ann Lee and Wei-Ning Hsu},
year={2025},
url={https://arxiv.org/abs/2502.05139}
}
Acknowledgements
Part of the model code is copied from https://github.com/microsoft/unilm/tree/master/wavlm.
File details
Details for the file audiobox_aesthetics-0.0.3.tar.gz.
File metadata
- Download URL: audiobox_aesthetics-0.0.3.tar.gz
- Upload date:
- Size: 39.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 4314cd5a9dfab3772e5c0d6208cf622165ef4ae40d1440e1a06bc591ce79f1f2 |
| MD5 | 748a6cdd5b890e362c19997da47c5b83 |
| BLAKE2b-256 | 2bd7d6f8c2f5357d0c563dfb2ad61d59de6e90d861ca986b61b3cd0dd540d186 |
File details
Details for the file audiobox_aesthetics-0.0.3-py3-none-any.whl.
File metadata
- Download URL: audiobox_aesthetics-0.0.3-py3-none-any.whl
- Upload date:
- Size: 37.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c6b67f63394195eb5a1ee19e8a6e421dceb8c208e394f64a47cf47cf036b4f0e |
| MD5 | 81ea89991c31e208c3c31f1598d7e94c |
| BLAKE2b-256 | 774d3640e9e50a7f3a6b98ce40d9bbc9018b8b08973237e50493897946532b9d |