A deep learning framework extension enabling more efficient backpropagation of gradients through branched computational graphs.
fast-deep-rnn
This is a course project for a university deep learning course.
Install
The library can be installed from PyPI via
pip install fast_deep_rnn
Structure
The core module contains the original core of the framework: the Tensor class implementation and a set of differentiable operations organized into Modules.
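To make the structure concrete, here is a minimal sketch of an autograd Tensor with a couple of differentiable operations. The class, its fields, and its operations are illustrative assumptions, not the actual fast_deep_rnn API; the naive recursive backward shown here is exactly the pattern that blows up on branched graphs:

```python
# Hypothetical sketch of a minimal autograd Tensor; names and API are
# illustrative, not the actual fast_deep_rnn implementation.
import numpy as np

class Tensor:
    def __init__(self, data, parents=(), backward_fns=()):
        self.data = np.asarray(data, dtype=float)
        self.parents = parents            # tensors this one was computed from
        self.backward_fns = backward_fns  # one gradient function per parent
        self.grad = None

    def __add__(self, other):
        return Tensor(self.data + other.data,
                      parents=(self, other),
                      backward_fns=(lambda g: g, lambda g: g))

    def __mul__(self, other):
        return Tensor(self.data * other.data,
                      parents=(self, other),
                      backward_fns=(lambda g: g * other.data,
                                    lambda g: g * self.data))

    def backward(self, grad=None):
        # Naive recursion: gradients are pushed to parents immediately,
        # so a node reachable along k distinct paths is visited k times.
        if grad is None:
            grad = np.ones_like(self.data)
        self.grad = grad if self.grad is None else self.grad + grad
        for parent, fn in zip(self.parents, self.backward_fns):
            parent.backward(fn(grad))
```

For example, for `z = x * x + x` with `x = 2`, calling `z.backward()` accumulates `dz/dx = 2x + 1 = 5` into `x.grad`, visiting `x` three times (once per path from `z`).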
The core_v2 module contains the proposed alternative implementation, which yields much faster gradient computation in RNNs.
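The README does not spell out the optimization here, but the standard cure for exponential recursion on branched graphs is to run the backward pass in reverse topological order, so every node propagates its fully accumulated gradient exactly once. A hedged sketch of that idea, using a toy Tensor stand-in rather than the library's actual classes:

```python
# Hypothetical sketch: backward pass in reverse topological order, so each
# node is processed exactly once regardless of how many paths reach it.
# The Tensor class here is a toy stand-in, not fast_deep_rnn's actual API.
import numpy as np

class Tensor:
    def __init__(self, data, parents=(), backward_fns=()):
        self.data = np.asarray(data, dtype=float)
        self.parents = parents
        self.backward_fns = backward_fns
        self.grad = None

    def __add__(self, other):
        return Tensor(self.data + other.data, (self, other),
                      (lambda g: g, lambda g: g))

    def __mul__(self, other):
        return Tensor(self.data * other.data, (self, other),
                      (lambda g: g * other.data, lambda g: g * self.data))

    def backward(self):
        # Linearize the graph once with a depth-first topological sort.
        order, visited = [], set()
        def visit(t):
            if id(t) not in visited:
                visited.add(id(t))
                for p in t.parents:
                    visit(p)
                order.append(t)
        visit(self)
        self.grad = np.ones_like(self.data)
        # Walk from output to inputs: a node's gradient is fully
        # accumulated before it is propagated to its parents.
        for t in reversed(order):
            for parent, fn in zip(t.parents, t.backward_fns):
                g = fn(t.grad)
                parent.grad = g if parent.grad is None else parent.grad + g
```

With this traversal the cost of a backward pass is linear in the number of graph nodes, which is what makes gradient computation through long, branched RNN unrollings tractable.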
The notebook nbs/02_minimal_training.ipynb contains a minimal example of a model whose gradient computation grows exponentially under the original implementation, along with a benchmarking function to measure this growth. The git tags baseline_benchmark_results and solution_benchmark_results contain the corresponding benchmark results inside the notebook.
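The benchmark itself lives in the notebook; as a toy stand-in for what it measures, the sketch below counts backward visits for a depth-n chain y = y + y, where every node is reachable along many paths. All names here are illustrative, not taken from the library:

```python
# Toy illustration (not library code) of why naive backprop is exponential
# on branched graphs while a visit-each-node-once traversal is linear.
class Node:
    def __init__(self, parents=()):
        self.parents = parents

def build_chain(n):
    # y_{k+1} = y_k + y_k : each step doubles the number of paths to the input.
    y = Node()
    for _ in range(n):
        y = Node(parents=(y, y))
    return y

def count_naive_visits(node):
    # Naive recursion revisits a node once per path leading to it.
    return 1 + sum(count_naive_visits(p) for p in node.parents)

def count_memoized_visits(node, seen=None):
    # Visiting each node at most once keeps the cost linear in graph size.
    if seen is None:
        seen = set()
    if id(node) in seen:
        return 0
    seen.add(id(node))
    return 1 + sum(count_memoized_visits(p, seen) for p in node.parents)

for n in (4, 8, 12):
    root = build_chain(n)
    print(n, count_naive_visits(root), count_memoized_visits(root))
    # naive visits grow as 2^(n+1) - 1; memoized visits as n + 1
```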
The notebook nbs/01_lstm_training.ipynb trains an LSTM on a number-sorting task, which became feasible only after the implemented optimization.