This package provides text-to-audio generation.
# Text-to-Audio Generation

Generate speech, sound effects, music, and beyond.
## Prepare the running environment

```shell
# Optional: create and activate a conda environment
conda create -n audioldm python=3.8; conda activate audioldm
# Install AudioLDM
pip3 install audioldm
```
## Text-to-audio generation

```shell
# Test run
audioldm -t "A hammer is hitting a wooden surface"
```

For more options (guidance scale, batch size, seed, etc.), please run:

```shell
audioldm -h
```
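If you prefer to drive the command-line tool from Python, one minimal sketch is to build the argv list and hand it to `subprocess`. Only the `-t` flag shown in the test run above is assumed here; any further flags (guidance scale, batch size, seed) should be taken from `audioldm -h` rather than guessed.

```python
import subprocess  # used only if you uncomment the run line below

def audioldm_command(prompt):
    # Build the argv list for an `audioldm` text-to-audio run.
    # Only the `-t` flag from the example above is assumed; consult
    # `audioldm -h` for the exact names of the other options.
    return ["audioldm", "-t", prompt]

cmd = audioldm_command("A hammer is hitting a wooden surface")
print(cmd)
# To actually generate audio (downloads model weights on first run):
# subprocess.run(cmd, check=True)
```

Building the argv as a list (rather than a shell string) keeps prompts with spaces or quotes safe without any manual escaping.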
For the evaluation of audio generative models, please refer to audioldm_eval.
## Web Demo

Integrated into Hugging Face Spaces 🤗 using Gradio. Try out the Web Demo.
## TODO

- Update the checkpoint with more training steps.
- Add the AudioCaps-finetuned AudioLDM-S model.
- Build a pip-installable package for command-line use.
- Add text-guided style transfer.
- Add audio super-resolution.
- Add audio inpainting.
## Cite this work

If you found this tool useful, please consider citing:

```bibtex
@article{liu2023audioldm,
  title={AudioLDM: Text-to-Audio Generation with Latent Diffusion Models},
  author={Liu, Haohe and Chen, Zehua and Yuan, Yi and Mei, Xinhao and Liu, Xubo and Mandic, Danilo and Wang, Wenwu and Plumbley, Mark D},
  journal={arXiv preprint arXiv:2301.12503},
  year={2023}
}
```
## Hardware requirements

- A GPU with at least 8 GB of dedicated VRAM.
- A 64-bit operating system (Windows 7, 8.1, or 10; Ubuntu 16.04 or later; or macOS 10.13 or later).
- 16 GB or more of system RAM.
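As a quick sanity check against the 16 GB RAM guideline above, the stdlib snippet below reads total physical memory via `os.sysconf`. Note this is an illustrative sketch, not part of the AudioLDM package, and the `SC_*` names are POSIX-specific (Linux/macOS); they are not available on Windows.

```python
import os

def total_ram_bytes():
    # Total physical memory in bytes (POSIX systems only).
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

ram_gb = total_ram_bytes() / (1024 ** 3)
status = "OK" if ram_gb >= 16 else "below the 16 GB guideline"
print(f"System RAM: {ram_gb:.1f} GB ({status})")
```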
## Reference

Part of the code is borrowed from the following repositories; we would like to thank their authors for their contributions.

We built the model with data from AudioSet, Freesound, and the BBC Sound Effects library. We share this demo under the UK copyright exception for data used in academic research.