NengoDL: Deep learning integration for Nengo

NengoDL is a simulator for Nengo models. That means it takes a Nengo network as input, and allows the user to simulate that network using some underlying computational framework (in this case, TensorFlow).

In practice, this means that the code for constructing a Nengo model is exactly the same as it would be for the standard Nengo simulator; all that changes is that we use a different Simulator class to execute the model.

For example:

import nengo
import nengo_dl
import numpy as np

with nengo.Network() as net:
    inp = nengo.Node(output=np.sin)
    ens = nengo.Ensemble(50, 1, neuron_type=nengo.LIF())
    nengo.Connection(inp, ens, synapse=0.1)
    p = nengo.Probe(ens)

with nengo_dl.Simulator(net) as sim: # this is the only line that changes
    sim.run(1.0)

print(sim.data[p])

However, NengoDL is not simply a duplicate of the Nengo simulator. It also adds a number of unique features, such as:

  • optimizing the parameters of a model through deep learning training methods (see the sketch after this list)

  • faster simulation speed, on both CPU and GPU

  • inserting networks defined using TensorFlow (such as convolutional neural networks) directly into a Nengo model
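
As a rough illustration of the training feature, a model's parameters can be optimized via Simulator.train. The sketch below is not copied from the documentation: the minibatch_size argument, the dict-based inputs/targets format with shape (n_examples, n_steps, dimensions), and the exact train() signature are assumptions based on the release notes below, and may differ between versions.

import nengo
import numpy as np
import tensorflow as tf

import nengo_dl

with nengo.Network() as net:
    inp = nengo.Node([0])
    ens = nengo.Ensemble(50, 1, neuron_type=nengo.RectifiedLinear())
    nengo.Connection(inp, ens, synapse=None)
    p = nengo.Probe(ens)

# minibatch_size (assumed argument) lets the simulator process several
# training examples in parallel
with nengo_dl.Simulator(net, minibatch_size=32) as sim:
    # inputs/targets map Nodes/Probes to arrays of shape
    # (n_examples, n_steps, dimensions); here we train the ensemble to
    # approximate x**2
    xs = np.random.uniform(-1, 1, size=(320, 1, 1))
    sim.train({inp: xs}, {p: xs ** 2},
              tf.train.GradientDescentOptimizer(learning_rate=0.1),
              n_epochs=10)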

More details can be found in the NengoDL documentation.

Installation

Installation instructions can be found in the NengoDL documentation.

Release History

0.4.0 (June 8, 2017)

Added

Changed

  • Updated TensorFuncParam to new Nengo Param syntax

  • The interface for the Simulator step_blocks/unroll_simulation arguments has changed: unroll_simulation now takes an integer argument equivalent to the old step_blocks value, and unroll_simulation=1 is equivalent to the old unroll_simulation=False. For example, Simulator(..., unroll_simulation=True, step_blocks=10) is now written as Simulator(..., unroll_simulation=10) (see the sketch after this list).

  • Simulator.train/Simulator.loss no longer require step_blocks (or the new unroll_simulation) to be specified; the number of steps to train across will now be inferred from the input data.
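
For illustration, the new unroll_simulation argument can be used roughly as follows. This is a minimal sketch: the placeholder network and the choice of 50 steps (a multiple of the unroll length) are arbitrary, and everything else uses default Simulator settings.

import nengo
import nengo_dl

with nengo.Network() as net:
    nengo.Node(output=0)

# previously: nengo_dl.Simulator(net, unroll_simulation=True, step_blocks=10)
with nengo_dl.Simulator(net, unroll_simulation=10) as sim:
    sim.run_steps(50)  # 50 timesteps, i.e. 5 unrolled blocks of 10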

0.3.1 (May 12, 2017)

Added

  • Added more documentation on Simulator arguments

Changed

  • Improved the efficiency of tree_planner and made it the new default planner

Fixed

  • Correctly handle input feeds when n_steps > step_blocks

  • Detect cycles in transitive planner

  • Fix bug in uneven step_blocks rounding

  • Fix bug in Simulator.print_params

  • Fix bug related to merging of learning rule with different dimensionality

  • Use tf.Session instead of tf.InteractiveSession, to avoid strange side effects if the simulator isn’t closed properly

0.3.0 (April 25, 2017)

Added

  • Use logger for debug/builder output

  • Implemented TensorFlow gradients for sparse Variable update Ops, to allow models with those elements to be trained

  • Added tutorial/examples on using Simulator.train

  • Added support for training models when unroll_simulation=False

  • Compatibility changes for Nengo 2.4.0

  • Added a new graph planner algorithm, which can improve simulation speed at the cost of build time

Changed

  • Significant improvements to simulation speed

    • Use sparse Variable updates for signals.scatter/gather

    • Improved graph optimizer memory organization

    • Implemented sparse matrix multiplication op, to allow more aggressive merging of DotInc operators

  • Significant improvements to build speed

    • Added early termination to graph optimization

    • Algorithmic improvements to graph optimization functions

  • Reorganized documentation to more clearly direct new users to relevant material

Fixed

  • Fix bug where passing a built model to the Simulator more than once would result in an error

  • Cache result of calls to tensor_graph.build_loss/build_optimizer, so that we don’t unnecessarily create duplicate elements in the graph on repeated calls

  • Fix support for Variables on GPU when unroll_simulation=False

  • SimPyFunc operators will always be assigned to CPU, even when device="/gpu:0", since there is no GPU kernel

  • Fix bug where Simulator.loss was not being computed correctly for models with internal state

  • Data/targets passed to Simulator.train will be truncated if not evenly divisible by the specified minibatch size

  • Fixed bug where in some cases Nodes with side effects would not be run if their output was not used in the simulation

  • Fixed bug where strided reads that cover a full array would be interpreted as non-strided reads of the full array

0.2.0 (March 13, 2017)

Initial release of TensorFlow-based NengoDL

0.1.0 (June 12, 2016)

Initial release of Lasagne-based NengoDL

