# Creative Applications of Deep Learning with TensorFlow
# Introduction
This package is part of the Kadenze Academy program [Creative Applications of Deep Learning w/ TensorFlow](https://www.kadenze.com/programs/creative-applications-of-deep-learning-with-tensorflow).
[COURSE 1: Creative Applications of Deep Learning with TensorFlow I](https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow-iv/info) (Free to Audit)
* Session 1: Introduction to TensorFlow
* Session 2: Training A Network W/ TensorFlow
* Session 3: Unsupervised And Supervised Learning
* Session 4: Visualizing And Hallucinating Representations
* Session 5: Generative Models
[COURSE 2: Creative Applications of Deep Learning with TensorFlow II](https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow-ii/info) (Program exclusive)
* Session 1: Cloud Computing, GPUs, Deploying
* Session 2: Mixture Density Networks
* Session 3: Modeling Attention with RNNs, DRAW
* Session 4: Image-to-Image Translation with GANs
[COURSE 3: Creative Applications of Deep Learning with TensorFlow III](https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow-iii-iii/info) (Program exclusive)
* Session 1: Modeling Music and Art: Google Brain’s Magenta Lab
* Session 2: Modeling Language: Natural Language Processing
* Session 3: Autoregressive Image Modeling w/ PixelCNN
* Session 4: Modeling Audio w/ Wavenet and NSynth
# Requirements
Python 3.5+
# Installation
`pip install cadl`
Then in Python, you can import any module like so:
`from cadl import vaegan`
Or, in an interactive console, see the list of available modules by typing:
`from cadl import ` and then pressing Tab.
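If tab completion is not available in your console, here is a minimal sketch for listing the bundled modules using only the standard library (nothing cadl-specific is assumed beyond the package being importable):

```python
import pkgutil

import cadl

# Print every top-level submodule shipped with the installed cadl package,
# e.g. vaegan, charrnn, nsynth, ...
for _finder, name, _ispkg in pkgutil.iter_modules(cadl.__path__):
    print(name)
```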
# Documentation
[cadl.readthedocs.io](http://cadl.readthedocs.io)
# Contents
This package contains various models, architectures, and building blocks covered in the Kadenze Academy program including:
* Autoencoders
* Character Level Recurrent Neural Network (CharRNN)
* Conditional Pixel CNN
* CycleGAN
* Deep Convolutional Generative Adversarial Networks (DCGAN)
* Deep Dream
* Deep Recurrent Attentive Writer (DRAW)
* Gated Convolution
* Generative Adversarial Networks (GAN)
* Global Vector Embeddings (GloVe)
* Illustration2Vec
* Inception
* Mixture Density Networks (MDN)
* PixelCNN
* NSynth
* Residual Networks
* Sequence2Sequence (Seq2Seq) w/ Attention (both bucketed and dynamic RNN variants available)
* Style Net
* Variational Autoencoders (VAE)
* Variational Autoencoding Generative Adversarial Networks (VAEGAN)
* Video Style Net
* VGG16
* WaveNet / Fast WaveNet Generation w/ Queues / WaveNet Autoencoder (NSynth)
* Word2Vec
and more. It also includes downloaders, preprocessing, batch generators, and input pipelines for datasets such as:
* CELEB
* CIFAR
* Cornell
* MNIST
* TedLium
* LibriSpeech
* VCTK
and plenty of utilities for working with images, GIFs, sound (wave) files, MIDI, video, text, TensorFlow, TensorBoard, and their graphs.
Examples of each module's use can be found in the tests folder.
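As a hedged sketch of what working with the bundled datasets and utilities can look like (the `datasets.MNIST()` loader, its `X` attribute, and `utils.montage()` are assumptions based on the course materials; check the documentation and the tests folder for the actual API):

```python
import matplotlib.pyplot as plt

from cadl import datasets, utils

# Load MNIST (assumed to download on first use); ds.X is assumed to hold
# the flattened images as rows -- verify against the real API.
ds = datasets.MNIST()

# Tile the first 100 digits into a single image and display the montage.
imgs = ds.X[:100].reshape((-1, 28, 28))
plt.imshow(utils.montage(imgs), cmap='gray')
plt.show()
```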
# Contributing
Contributions, such as other model architectures, bug fixes, and dataset handling, are welcome and should be filed on GitHub.
# Troubleshooting
## Error: alsa/asoundlib.h: No such file or directory
```
src/RtMidi.cpp:1101:28: fatal error: alsa/asoundlib.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1
```
This error comes from building `python-rtmidi`, a dependency of `magenta`, which requires `libasound`.
### Solution: Install ALSA
CentOS:
```
sudo yum install alsa-lib-devel alsa-utils
```
Ubuntu:
```
sudo apt-get install libasound2-dev
```
### More info:
https://python-rtmidi.readthedocs.io/en/latest/installation.html
https://github.com/tensorflow/magenta/issues/781
## Error: jack/jack.h: No such file or directory
```
src/RtMidi.cpp:2448:23: fatal error: jack/jack.h: No such file or directory
compilation terminated.
```
### Solution: Install Jack
Ubuntu:
```
sudo apt install libjack-dev
```
### More info:
https://python-rtmidi.readthedocs.io/en/latest/installation.html
https://github.com/tensorflow/magenta/issues/781
# 1.1.0
* Requirements now point to TensorFlow 1.5.0
# 1.0.9
* Fixed the residual block in CycleGAN, which was not using its first convolutional layer
# 1.0.8
* Batch loading support from magenta repo for FastGen config
# 1.0.7
* NSynth batch processing code from magenta repo
* `get_model` function in `nsynth` module now attempts to download and untar the model from the magenta website.
* `utils.download` functions default to local dir
* Separate encode functionality in nsynth module.
# 1.0.6
* MDN activation fn
# 1.0.5
* Fix gauss pdf
# 1.0.4
* Allow for batch=1 in DRAW code
# 1.0.3
* Add mdn to init
# 1.0.2
* Remove tanh activation from variational layer
* Add librispeech train code to fastwavenet module
* Add Mixture Density Network code from course in mdn module
# 1.0.1
Fixed model loading in the charrnn infer method: it no longer checks for the checkpoint name and will attempt to load regardless.
# 1.0.0
Initial release