Python module for generating audio with neural networks
mimikit
Do deep-learning on your own audio files like a pro with just a Google account.
mimikit is a music modelling kit that lets you mimic and transform your own audio files with generative neural networks.
It contains a collection of models in PyTorch and PyTorch Lightning, as well as powerful ways to:
- prepare & store your data for these models
- train the models online with free GPU providers
- store and track every experiment you make & every sound bit you generate on neptune.ai, also for free
Installation
mimikit is available as a pip package. Open a terminal and type:
$ pip install mimikit
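A quick way to check that the installation succeeded, using only the standard library (a sketch: it only verifies that Python can locate the package, not that all of its dependencies resolve):

```python
import importlib.util

def is_installed(name: str) -> bool:
    """Return True if Python can locate a package with this name."""
    return importlib.util.find_spec(name) is not None

# After `pip install mimikit`, this should print True in the same environment:
print(is_installed("mimikit"))
```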
Quickstart
If you have never done deep-learning before, we recommend you start with the quickest intro to practical deep-learning ever:
- Have a Google account and register with it on neptune.ai
- Put some audio files in your Google Drive or make a database on your computer
- Open the FreqNet starter notebook in Colab and follow the instructions
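If you work locally, the first step is gathering your audio files from a folder. The helper below is a hypothetical sketch (it is not part of mimikit's API) showing one way to collect files by extension before building a database or uploading them to Google Drive:

```python
from pathlib import Path

# Common audio extensions; adjust to match your collection.
AUDIO_EXTENSIONS = {".wav", ".mp3", ".flac", ".aif", ".aiff", ".ogg"}

def collect_audio_files(root: str) -> list[Path]:
    """Recursively list audio files under `root`, sorted by path."""
    return sorted(
        p for p in Path(root).rglob("*")
        if p.is_file() and p.suffix.lower() in AUDIO_EXTENSIONS
    )
```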
For more, check out the mimikit-notebooks, the mimikit docs, or the documentation for the freqnet package.
Usage
Check out the mimikit-notebooks for client code examples.
Documentation
TODO !
Contribute
mimikit welcomes all kinds of contributions! From bug fixes to cool new experimental models to improving test and doc coverage: get in touch and/or make a pull request.
License
mimikit is distributed under the terms of the GNU General Public License v3.0.