NengoDL: Deep learning integration for Nengo

NengoDL is a simulator for Nengo models. That means it takes a Nengo network as input and allows the user to simulate that network using an underlying computational framework (in this case, TensorFlow).

In practice, this means that the code for constructing a Nengo model is exactly the same as it would be for the standard Nengo simulator; all that changes is the Simulator class used to execute the model.

For example:

import nengo
import nengo_dl
import numpy as np

with nengo.Network() as net:
    inp = nengo.Node(output=np.sin)
    ens = nengo.Ensemble(50, 1, neuron_type=nengo.LIF())
    nengo.Connection(inp, ens, synapse=0.1)
    p = nengo.Probe(ens)

with nengo_dl.Simulator(net) as sim: # this is the only line that changes
    sim.run(1.0)

print(sim.data[p])

However, NengoDL is not simply a duplicate of the Nengo simulator. It also adds a number of unique features, such as:

  • optimizing the parameters of a model through deep learning training methods (see the training sketch after this list)

  • faster simulation speed, on both CPU and GPU

  • inserting networks defined using TensorFlow (such as convolutional neural networks) directly into a Nengo model
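
For example, connection weights in a standard Nengo model can be optimized with sim.train. The following is a minimal sketch, assuming the sim.train interface described in the documentation; the network, data shapes, and optimizer settings are illustrative rather than prescriptive:

import nengo
import nengo_dl
import numpy as np
import tensorflow as tf

with nengo.Network() as net:
    inp = nengo.Node([0])
    ens = nengo.Ensemble(50, 1, neuron_type=nengo.RectifiedLinear())
    nengo.Connection(inp, ens, synapse=None)
    p = nengo.Probe(ens)

with nengo_dl.Simulator(net, minibatch_size=32) as sim:
    # inputs/targets are dicts mapping input Nodes/output Probes to arrays
    # of shape (n_examples, n_steps, dimensions)
    inputs = {inp: np.random.uniform(-1, 1, size=(32, 5, 1))}
    targets = {p: inputs[inp] * 2}  # learn to approximate f(x) = 2x
    sim.train(inputs, targets,
              tf.train.GradientDescentOptimizer(learning_rate=0.1),
              n_epochs=10)

After training, the updated parameter values can be read back through sim.data (see the 0.5.1 release notes below).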

More details can be found in the NengoDL documentation.

Installation

Installation instructions can be found in the NengoDL documentation.

Release History

0.5.2 (October 11, 2017)

Added

  • TensorNode outputs can now define a post_build function that will be executed after the simulation is initialized (see the TensorNode documentation for details).

  • Added functionality for outputting summary data during the training process that can be viewed in TensorBoard (see the sim.train documentation).

  • Added examples demonstrating how to use NengoDL on a more complicated task, using semantic pointers to encode/retrieve information.

  • Added a sim.training_step variable that tracks the current training iteration (can be used, e.g., with TensorFlow’s variable learning rate operations; see the sketch after this list).

  • Users can manually create tf.summary ops and pass them to sim.train summaries

  • The Simulator context will now also set the default TensorFlow graph to the one associated with the Simulator (so any TensorFlow ops created within the Simulator context will automatically be added to the correct graph)

  • Users can now specify a different objective for each output probe during training/loss calculation (see the sim.train documentation).
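
The sim.training_step variable mentioned above can be fed into TensorFlow’s learning rate schedules. A minimal sketch, reusing net, inputs, and targets from the training example above (the decay values are illustrative); because the Simulator context sets the default TensorFlow graph, the schedule ops are added to the simulator’s graph automatically:

with nengo_dl.Simulator(net, minibatch_size=32) as sim:
    # decay the learning rate as training progresses
    lr = tf.train.exponential_decay(learning_rate=0.1,
                                    global_step=sim.training_step,
                                    decay_steps=100, decay_rate=0.9)
    sim.train(inputs, targets,
              tf.train.GradientDescentOptimizer(lr),
              n_epochs=10)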

Changed

  • Resetting the simulator now only rebuilds the necessary components in the graph (as opposed to rebuilding the whole graph)

  • The default "mse" loss implementation will now automatically convert np.nan values in the target to zero error

  • If multiple target probes are given to sim.train/sim.loss, the total error will now be summed across probes (instead of averaged)

Fixed

  • sim.data now implements the full collections.Mapping interface

  • Fixed bug where signal order was non-deterministic for Networks containing objects with duplicate names (#9)

  • Fixed bug where non-slot optimizer variables were not initialized (#11)

  • Implemented a modified PES builder in order to avoid slicing encoders on non-decoded PES connections

  • The TensorBoard output directory will now be automatically created if it doesn’t exist

0.5.1 (August 28, 2017)

Changed

  • sim.data[obj] will now return live parameter values from the simulation, rather than initial values from the build process. That means that it can be used to get the values of object parameters after training, e.g. sim.data[my_conn].weights.

  • Increased minimum Nengo version to 2.5.0.

  • Increased minimum TensorFlow version to 1.3.0.

0.5.0 (July 11, 2017)

Added

  • Added nengo_dl.tensor_layer to help with the construction of layer-style TensorNodes (see the TensorNode documentation and the sketch after this list)

  • Added an example demonstrating how to train a neural network that can run in spiking neurons

  • Added some distributions for weight initialization to nengo_dl.dists

  • Added sim.train(..., profile=True) option to collect profiling information during training

  • Added new methods to simplify the Nengo operation graph, resulting in faster simulation/training speed

  • The default graph planner can now be modified by setting the planner attribute on the top-level Network config

  • Added TensorFlow implementation for general linear synapses

  • Added the backports.tempfile and backports.print_function requirements for Python 2.7 systems
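
A minimal sketch of the tensor_layer helper mentioned above, stacking TensorFlow layers inside a Nengo network (the layer sizes and functions are illustrative):

import nengo
import nengo_dl
import numpy as np
import tensorflow as tf

with nengo.Network() as net:
    inp = nengo.Node(np.zeros(784))
    # each call wraps the given TensorFlow function in a TensorNode and
    # connects the previous layer to it
    x = nengo_dl.tensor_layer(inp, tf.layers.dense, units=128)
    x = nengo_dl.tensor_layer(x, tf.nn.relu)
    out = nengo_dl.tensor_layer(x, tf.layers.dense, units=10)
    p = nengo.Probe(out)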

Changed

  • Increased minimum TensorFlow version to 1.2.0

  • Improved error checking for input/target data

  • Improved efficiency of stateful gradient operations, resulting in faster training speed

  • The functionality of nengo_dl.configure_trainable has been subsumed into the more general nengo_dl.configure_settings(trainable=x). This has resulted in some small changes to how trainability is controlled within subnetworks; see the updated documentation for details, and the sketch after this list.

  • Calling Simulator.train/Simulator.loss no longer resets the internal state of the simulation (so they can be safely intermixed with calls to Simulator.run)
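
A minimal sketch of the new trainability interface mentioned above, assuming per-object settings are then made through the network config as described in the documentation:

import nengo
import nengo_dl

with nengo.Network() as net:
    # everything in this network is non-trainable by default
    nengo_dl.configure_settings(trainable=False)

    a = nengo.Ensemble(10, 1)
    b = nengo.Ensemble(10, 1)
    conn = nengo.Connection(a, b)

    # selectively re-enable training for a specific object
    net.config[conn].trainable = True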

Deprecated

  • The old step_blocks/unroll_simulation syntax has been fully deprecated, and will result in errors if used

Fixed

  • Fixed bug related to changing the output of a Node after the model is constructed (#4)

  • Order of variable creation is now deterministic (helps make saving/loading parameters more reliable)

  • Configuring whether a model element is trainable no longer affects whether that element is minibatched

  • Correctly reuse variables created inside a TensorNode when unroll_simulation > 1

  • Correctly handle probes that aren’t connected to any ops

  • Swapped fan_in/fan_out in dists.VarianceScaling to align with the standard definitions

  • Temporary patch to fix a memory leak in TensorFlow (see TensorFlow issue #11273)

  • Fixed bug related to nodes that had matching output functions but different size_out

  • Fixed bug related to probes that do not contain any data yet

0.4.0 (June 8, 2017)

Changed

  • Updated TensorFuncParam to new Nengo Param syntax

  • The interface for Simulator step_blocks/unroll_simulation has changed: unroll_simulation now takes an integer argument equivalent to the old step_blocks value, and unroll_simulation=1 is equivalent to the old unroll_simulation=False. For example, Simulator(..., unroll_simulation=True, step_blocks=10) is now written as Simulator(..., unroll_simulation=10).

  • Simulator.train/Simulator.loss no longer require step_blocks (or the new unroll_simulation) to be specified; the number of steps to train across will now be inferred from the input data.

0.3.1 (May 12, 2017)

Added

  • Added more documentation on Simulator arguments

Changed

  • Improved the efficiency of tree_planner and made it the new default planner

Fixed

  • Correctly handle input feeds when n_steps > step_blocks

  • Detect cycles in the transitive planner

  • Fix bug in uneven step_blocks rounding

  • Fix bug in Simulator.print_params

  • Fix bug related to merging learning rules with different dimensionalities

  • Use tf.Session instead of tf.InteractiveSession, to avoid strange side effects if the simulator isn’t closed properly

0.3.0 (April 25, 2017)

Added

  • Use a logger for debug/builder output

  • Implemented TensorFlow gradients for sparse Variable update Ops, to allow models with those elements to be trained

  • Added tutorial/examples on using Simulator.train

  • Added support for training models when unroll_simulation=False

  • Compatibility changes for Nengo 2.4.0

  • Added a new graph planner algorithm, which can improve simulation speed at the cost of build time

Changed

  • Significant improvements to simulation speed

    • Use sparse Variable updates for signals.scatter/gather

    • Improved graph optimizer memory organization

    • Implemented sparse matrix multiplication op, to allow more aggressive merging of DotInc operators

  • Significant improvements to build speed

    • Added early termination to graph optimization

    • Algorithmic improvements to graph optimization functions

  • Reorganized documentation to more clearly direct new users to relevant material

Fixed

  • Fix bug where passing a built model to the Simulator more than once would result in an error

  • Cache result of calls to tensor_graph.build_loss/build_optimizer, so that we don’t unnecessarily create duplicate elements in the graph on repeated calls

  • Fix support for Variables on GPU when unroll_simulation=False

  • SimPyFunc operators will always be assigned to CPU, even when device="/gpu:0", since there is no GPU kernel

  • Fix bug where Simulator.loss was not being computed correctly for models with internal state

  • Data/targets passed to Simulator.train will be truncated if not evenly divisible by the specified minibatch size

  • Fixed bug where in some cases Nodes with side effects would not be run if their output was not used in the simulation

  • Fixed bug where strided reads that cover a full array would be interpreted as non-strided reads of the full array

0.2.0 (March 13, 2017)

Initial release of TensorFlow-based NengoDL

0.1.0 (June 12, 2016)

Initial release of Lasagne-based NengoDL
