Spiking neuron integration for Keras
KerasSpiking provides tools for training and running spiking neural networks directly within the Keras framework. The main feature is keras_spiking.SpikingActivation, which can be used to transform any activation function into a spiking equivalent. For example, we can translate a non-spiking model, such as
inp = tf.keras.Input((5,))
dense = tf.keras.layers.Dense(10)(inp)
act = tf.keras.layers.Activation("relu")(dense)
model = tf.keras.Model(inp, act)
into the spiking equivalent:
# add time dimension to inputs
inp = tf.keras.Input((None, 5))
dense = tf.keras.layers.Dense(10)(inp)
# replace Activation with SpikingActivation
act = keras_spiking.SpikingActivation("relu")(dense)
model = tf.keras.Model(inp, act)
Models with SpikingActivation layers can be optimized and evaluated in the same way as any other Keras model. They will automatically take advantage of KerasSpiking’s “spiking aware training”: using the spiking activations on the forward pass and the non-spiking (differentiable) activation function on the backward pass.
KerasSpiking also includes various tools to assist in the training of spiking models, such as additional regularizers and filtering layers.
If you are interested in building and optimizing spiking neuron models, you may also be interested in NengoDL. See this page for a comparison of the different use cases supported by these two packages.
Check out the documentation for a more detailed example introducing the features of KerasSpiking.
0.3.0 (November 8, 2021)
Compatible with TensorFlow 2.1.0 - 2.7.0
LowpassCell, Lowpass, AlphaCell, and Alpha layers now accept both initial_level_constraint and tau_constraint to customize how their respective parameters are constrained during training. (#21)
The tau time constants for LowpassCell, Lowpass, AlphaCell, and Alpha are now always clipped to be positive in the forward pass rather than constraining the underlying trainable weights in between gradient updates. (#21)
Renamed the Lowpass/Alpha tau parameter to tau_initializer, and it now accepts tf.keras.initializers.Initializer objects (in addition to floats, as before). Renamed the tau_var weight attribute to tau. (#21)
SpikingActivation, Lowpass, and Alpha layers will now correctly use keras_spiking.default.dt. (#20)
0.2.0 (February 18, 2021)
Compatible with TensorFlow 2.1.0 - 2.4.0
Added the keras_spiking.Alpha filter, which provides second-order lowpass filtering for better noise removal for spiking layers. (#4)
Added keras_spiking.callbacks.DtScheduler, which can be used to update layer dt parameters during training. (#5)
Added keras_spiking.default.dt, which can be used to set the default dt for all layers that don’t directly specify dt. (#5)
Added keras_spiking.regularizers.RangedRegularizer, which can be used to apply some other regularizer (e.g. tf.keras.regularizers.L2) with respect to some non-zero target point, or a range of acceptable values. This functionality has also been added to keras_spiking.regularizers.L1L2/L1/L2 (so they can now be applied with respect to a single reference point or a range). (#6)
Added keras_spiking.regularizers.Percentile, which computes a percentile across a number of examples and regularizes that statistic. (#6)
Added keras_spiking.ModelEnergy to estimate energy usage for Keras Models. (#7)
keras_spiking.SpikingActivation and keras_spiking.Lowpass now return sequences by default. This means that these layers will now have outputs with the same number of timesteps as their inputs, making it easier to create multi-layer spiking networks where time is preserved throughout the network. The spiking fashion-MNIST example has been updated accordingly. (#3)
Layers now support multi-dimensional inputs (e.g., output of Conv2D layers). (#5)
0.1.0 (August 14, 2020)
Compatible with TensorFlow 2.1.0 - 2.3.0