Keras implementation of Legendre Memory Units

Project description

KerasLMU: Recurrent neural networks using Legendre Memory Units

Paper

This is a Keras-based implementation of the Legendre Memory Unit (LMU). The LMU is a novel memory cell for recurrent neural networks that dynamically maintains information across long windows of time using relatively few resources. It has been shown to perform as well as standard LSTM or other RNN-based models on a variety of tasks, generally with fewer internal parameters (see the paper linked above for details). On the Permuted Sequential MNIST (psMNIST) task in particular, it outperformed the state-of-the-art results at the time of publication. See the project documentation for instructions on how to get access to this model.

The LMU is mathematically derived to orthogonalize its continuous-time history – doing so by solving d coupled ordinary differential equations (ODEs), whose phase space linearly maps onto sliding windows of time via the Legendre polynomials up to degree d − 1 (the example for d = 12 is shown below).
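
For reference, the continuous-time memory dynamics and the closed-form A and B matrices derived in the paper are (m is the memory state, x the input signal):

    \theta \, \dot{m}(t) = A \, m(t) + B \, x(t)

    A = [a]_{ij}, \quad a_{ij} = (2i + 1) \begin{cases} -1 & i < j \\ (-1)^{i - j + 1} & i \ge j \end{cases}

    B = [b]_i, \quad b_i = (2i + 1)(-1)^i, \qquad i, j \in [0, d - 1]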

[Figure: Legendre polynomials up to degree 11 (d = 12)]

A single LMU cell expresses the following computational graph, which takes in an input signal, x, and couples an optimal linear memory, m, with a nonlinear hidden state, h. By default, this coupling is trained via backpropagation, while the dynamics of the memory remain fixed.

[Figure: LMU computational graph]

The discretized A and B matrices are initialized according to the LMU’s mathematical derivation with respect to some chosen window length, θ. Backpropagation can be used to learn this time-scale, or fine-tune A and B, if necessary.

Both the kernels, W, and the encoders, e, are learned. Intuitively, the kernels learn to compute nonlinear functions across the memory, while the encoders learn to project the relevant information into the memory (see paper for details).
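
As a concrete illustration, here is a minimal sketch of a model built around the keras_lmu.LMU layer (parameter names follow the descriptions above and the release history below; the values are illustrative, not recommendations):

import tensorflow as tf
import keras_lmu

# Sequences of 100 timesteps with a single feature per step
inputs = tf.keras.Input(shape=(100, 1))

# memory_d: dimensionality of the signal encoded into the memory
# order:    d, the number of Legendre polynomials / coupled ODEs
# theta:    the sliding window length, in timesteps
# hidden_cell: the nonlinear hidden state h coupled to the memory m
lmu_out = keras_lmu.LMU(
    memory_d=1,
    order=12,
    theta=100,
    hidden_cell=tf.keras.layers.SimpleRNNCell(100),
)(inputs)

outputs = tf.keras.layers.Dense(10)(lmu_out)
model = tf.keras.Model(inputs, outputs)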

Citation

@inproceedings{voelker2019lmu,
  title={Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks},
  author={Aaron R. Voelker and Ivana Kaji\'c and Chris Eliasmith},
  booktitle={Advances in Neural Information Processing Systems},
  pages={15544--15553},
  year={2019}
}

Release history

0.7.0 (July 20, 2023)

Compatible with TensorFlow 2.4 - 2.13

Changed

  • Minimum supported Python version is now 3.8 (3.7 reached end of life in June 2023). (#54)

0.6.0 (May 5, 2023)

Compatible with TensorFlow 2.4 - 2.11

Changed

  • LMUFeedforward can now be used with unknown sequence lengths, and LMU will use LMUFeedforward for unknown sequence lengths (as long as the other conditions are met, as before). (#52)

  • Allow input_to_hidden=True with hidden_cell=None. This will act as a skip connection (see the sketch after this list). (#52)

  • Changed order of LMU states so that the LMU memory state always comes first, and any states from the hidden cell come afterwards. (#52)
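
A sketch of the first two changes above, combining an unknown sequence length with the hidden_cell=None skip connection (values are illustrative):

import tensorflow as tf
import keras_lmu

# Sequence length is None (unknown); as of this release the LMU can still
# swap in the LMUFeedforward implementation when the other feedforward
# conditions are met
inputs = tf.keras.Input(shape=(None, 4))

# hidden_cell=None with input_to_hidden=True acts as a skip connection:
# the layer output is the memory with the input concatenated onto it
lmu_out = keras_lmu.LMU(
    memory_d=4,
    order=16,
    theta=32,
    hidden_cell=None,
    input_to_hidden=True,
)(inputs)

model = tf.keras.Model(inputs, lmu_out)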

Fixed

  • Fixed errors when setting non-default dtype on LMU layers. (#52)

0.5.0 (January 26, 2023)

Compatible with TensorFlow 2.4 - 2.11

Added

  • Layers are registered with the Keras serialization system (no longer need to be passed as custom_objects). (#49)
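
For example, a model containing LMU layers can now be saved and reloaded without passing custom_objects (a minimal sketch; the save path is hypothetical):

import tensorflow as tf
import keras_lmu

inputs = tf.keras.Input(shape=(50, 2))
outputs = keras_lmu.LMU(
    memory_d=2,
    order=8,
    theta=50,
    hidden_cell=tf.keras.layers.SimpleRNNCell(32),
)(inputs)
model = tf.keras.Model(inputs, outputs)

model.save("lmu_model")  # hypothetical path
# No custom_objects argument needed, since the layers are registered with Keras
reloaded = tf.keras.models.load_model("lmu_model")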

0.4.2 (May 17, 2022)

Compatible with TensorFlow 2.1 - 2.9

Added

  • Added support for TensorFlow 2.9. (#48)

0.4.1 (February 10, 2022)

Compatible with TensorFlow 2.1 - 2.8

Added

  • Added support for TensorFlow 2.8. (#46)

  • Allow for optional bias on the memory component with the use_bias flag. (#44)

  • Added regularizer support for kernel, recurrent kernel, and bias. (#44)
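
A sketch of the bias and regularizer options above; the exact keyword names used here (kernel_regularizer, recurrent_regularizer, bias_regularizer) are assumptions, so check the API reference for the released names:

import tensorflow as tf
import keras_lmu

lmu_layer = keras_lmu.LMU(
    memory_d=1,
    order=8,
    theta=64,
    hidden_cell=tf.keras.layers.SimpleRNNCell(64),
    use_bias=True,  # optional bias on the memory component (#44)
    # regularizers for kernel, recurrent kernel, and bias (#44); names assumed
    kernel_regularizer=tf.keras.regularizers.l2(1e-4),
    recurrent_regularizer=tf.keras.regularizers.l2(1e-4),
    bias_regularizer=tf.keras.regularizers.l1(1e-5),
)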

0.4.0 (August 16, 2021)

Compatible with TensorFlow 2.1 - 2.7

Added

  • Setting kernel_initializer=None now removes the dense input kernel. (#40)

  • The keras_lmu.LMUFFT layer now supports memory_d > 1. keras_lmu.LMU now uses this implementation for all values of memory_d when feedforward conditions are satisfied (no hidden-to-memory or memory-to-memory connections, and the sequence length is not None). (#40)

  • Added trainable_theta option, which will allow the theta parameter to be learned during training. (#41)

  • Added discretizer option, which controls the method used to solve for the A and B LMU matrices. This is mainly useful in combination with trainable_theta=True, where setting discretizer="euler" may improve the training speed (possibly at the cost of some accuracy). (#41)

  • The keras_lmu.LMUFFT layer can now use raw convolution internally (as opposed to FFT-based convolution). The new conv_mode option exposes this. The new truncate_ir option allows truncating the impulse response when running with a raw convolution mode, for efficiency. Whether FFT-based or raw convolution is faster depends on the specific model, hardware, and amount of truncation. (#42)
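
A sketch combining the options above (values are illustrative; behaviour may differ slightly between releases):

import tensorflow as tf
import keras_lmu

# trainable_theta lets the window length be learned; discretizer="euler" may
# speed up training in that case, possibly at some cost in accuracy
lmu_layer = keras_lmu.LMU(
    memory_d=1,
    order=16,
    theta=128,
    hidden_cell=tf.keras.layers.SimpleRNNCell(128),
    trainable_theta=True,
    discretizer="euler",
)

# Feedforward variant (renamed to LMUFeedforward in this release) using raw
# convolution with a truncated impulse response for efficiency
ff_layer = keras_lmu.LMUFeedforward(
    memory_d=1,
    order=16,
    theta=128,
    hidden_cell=tf.keras.layers.Dense(128),
    conv_mode="raw",
    truncate_ir=1e-5,
)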

Changed

  • The A and B matrices are now stored as constants instead of non-trainable variables. This can improve the training/inference speed, but it means that saved weights from previous versions will be incompatible. (#41)

  • Renamed keras_lmu.LMUFFT to keras_lmu.LMUFeedforward. (#42)

Fixed

  • Fixed dropout support in TensorFlow 2.6. (#42)

0.3.1 (November 16, 2020)

Changed

  • Raise a validation error if hidden_to_memory or input_to_hidden are True when hidden_cell=None. (#26)

Fixed

  • Fixed a bug with the autoswapping in keras_lmu.LMU during training. (#28)

  • Fixed a bug where dropout mask was not being reset properly in the hidden cell. (#29)

0.3.0 (November 6, 2020)

Changed

  • Renamed module from lmu to keras_lmu (so it will now be imported via import keras_lmu), renamed package from lmu to keras-lmu (so it will now be installed via pip install keras-lmu), and changed any references to “NengoLMU” to “KerasLMU” (since this implementation is based in the Keras framework rather than Nengo). In the future the lmu namespace will be used as a meta-package to encapsulate LMU implementations in different frameworks. (#24)
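
After the rename, installation and import look like this:

# pip install keras-lmu
import keras_lmu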

0.2.0 (November 2, 2020)

Added

  • Added documentation for package description, installation, usage, API, examples, and project information. (#20)

  • Added LMU FFT cell variant and auto-switching LMU class. (#21)

  • LMUs can now be used with any Keras RNN cell (e.g. LSTMs or GRUs) through the hidden_cell parameter. This can take an RNN cell (like tf.keras.layers.SimpleRNNCell or tf.keras.layers.LSTMCell) or a feedforward layer (like tf.keras.layers.Dense) or None (to create a memory-only LMU). The output of the LMU memory component will be fed to the hidden_cell (see the sketch after this list). (#22)

  • Added hidden_to_memory, memory_to_memory, and input_to_hidden parameters to LMUCell, which can be used to enable/disable connections between components of the LMU. They default to disabled. (#22)

  • LMUs can now be used with multi-dimensional memory components. This is controlled through a new memory_d parameter of LMUCell. (#22)

  • Added dropout parameter to LMUCell (which applies dropout to the input) and recurrent_dropout (which applies dropout to the memory_to_memory connection, if it is enabled). Note that dropout can be added in the hidden component through the hidden_cell object. (#22)
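
A sketch of the cell-level options above, shown with the current module name (at the time of this release the module was still called lmu; it was renamed to keras_lmu in 0.3.0):

import tensorflow as tf
import keras_lmu

# LMU cell with an LSTM hidden cell, a 4-dimensional memory, and input dropout;
# hidden_to_memory, memory_to_memory, and input_to_hidden default to disabled
cell = keras_lmu.LMUCell(
    memory_d=4,
    order=8,
    theta=64,
    hidden_cell=tf.keras.layers.LSTMCell(64),
    dropout=0.1,
)

# Wrap the cell in a standard Keras RNN layer to process sequences
layer = tf.keras.layers.RNN(cell)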

Changed

  • Renamed lmu.lmu module to lmu.layers. (#22)

  • Combined the *_encoders_initializer parameters of LMUCell into a single kernel_initializer parameter. (#22)

  • Combined the *_kernel_initializer parameters of LMUCell into a single recurrent_kernel_initializer parameter. (#22)

Removed

  • Removed Legendre, InputScaled, LMUCellODE, and LMUCellGating classes. (#22)

  • Removed the method, realizer, and factory arguments from LMUCell (they will take on the same default values as before, they just cannot be changed). (#22)

  • Removed the trainable_* arguments from LMUCell. This functionality is largely redundant with the new functionality added for enabling/disabling internal LMU connections. These were primarily used previously for e.g. setting a connection to zero and then disabling learning, which can now be done more efficiently by disabling the connection entirely. (#22)

  • Removed the units and hidden_activation parameters of LMUCell (these are now specified directly in the hidden_cell). (#22)

  • Removed the dependency on nengolib. (#22)

  • Dropped support for Python 3.5, which reached its end of life in September 2020. (#22)

0.1.0 (June 22, 2020)

Initial release of KerasLMU 0.1.0! Supports Python 3.5+.

The API is considered unstable; parts are likely to change in the future.

Thanks to all of the contributors for making this possible!

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

keras-lmu-0.7.0.tar.gz (1.2 MB; see file details below)

Uploaded Source

Built Distribution

keras_lmu-0.7.0-py3-none-any.whl (22.4 kB; see file details below)

Uploaded Python 3

File details

Details for the file keras-lmu-0.7.0.tar.gz.

File metadata

  • Download URL: keras-lmu-0.7.0.tar.gz
  • Upload date:
  • Size: 1.2 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.12

File hashes

Hashes for keras-lmu-0.7.0.tar.gz:

  • SHA256: fa18b4e943ef74f11adec8fb6215be74083cc042937ba13577cae49d17ec4699
  • MD5: ebf173f502437f0bac8e01b0965edc1e
  • BLAKE2b-256: 6632662edcb42de4e721b22cdefb47f7fe70a4df8bb88fcab1cbaeffdde424cf

See the PyPI documentation for more details on using file hashes.

File details

Details for the file keras_lmu-0.7.0-py3-none-any.whl.

File metadata

  • Download URL: keras_lmu-0.7.0-py3-none-any.whl
  • Upload date:
  • Size: 22.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.12

File hashes

Hashes for keras_lmu-0.7.0-py3-none-any.whl:

  • SHA256: 44cc71a341685feb7918870a838a81a926effa6ec1dd4176ebb0f736992b7278
  • MD5: b5c4c708de3aae15b5d213cf3f0fcbc7
  • BLAKE2b-256: c952e1cc8325e543f826993fca5aa7e6a468ace3864259a252714511c23b8287

See the PyPI documentation for more details on using file hashes.
