Deep learning integration for Nengo
Project description
NengoDL is a simulator for Nengo models: it takes a Nengo network as input and allows the user to simulate that network using an underlying computational framework (in this case, TensorFlow).
In practice, this means that the code for constructing a Nengo model is exactly the same as it would be for the standard Nengo simulator; all that changes is the Simulator class used to execute the model.
For example:
import nengo
import nengo_dl
import numpy as np

with nengo.Network() as net:
    inp = nengo.Node(output=np.sin)
    ens = nengo.Ensemble(50, 1, neuron_type=nengo.LIF())
    nengo.Connection(inp, ens, synapse=0.1)
    p = nengo.Probe(ens)

with nengo_dl.Simulator(net) as sim:  # this is the only line that changes
    sim.run(1.0)

print(sim.data[p])
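With Nengo's default simulation timestep of 0.001s, running for 1.0 second produces 1000 steps, so sim.data[p] here is an array of shape (1000, 1): the probed ensemble output at each timestep.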
However, NengoDL is not simply a duplicate of the Nengo simulator. It also adds a number of unique features, such as:
optimizing the parameters of a model through deep learning training methods (see the training sketch below)
faster simulation speed, on both CPU and GPU
inserting networks defined using TensorFlow (such as convolutional neural networks) directly into a Nengo model (also sketched below)
More details can be found in the NengoDL documentation.
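To make the first and third of these features concrete, here are two brief sketches. First, training: this sketch assumes the Simulator.train interface suggested by the release notes below (inputs and targets as dictionaries of arrays shaped (batch size, timesteps, dimensions), a TensorFlow optimizer, and a minibatch_size argument to the Simulator constructor); exact argument names may differ between versions.

import nengo
import nengo_dl
import numpy as np
import tensorflow as tf

with nengo.Network() as net:
    inp = nengo.Node([0])  # placeholder input, overridden by the training data
    ens = nengo.Ensemble(50, 1, neuron_type=nengo.RectifiedLinear())
    nengo.Connection(inp, ens)
    p = nengo.Probe(ens)

# training data is shaped (batch size, timesteps, dimensions);
# here we train the network to output the square of its input
inputs = {inp: np.random.uniform(-1, 1, size=(256, 10, 1))}
targets = {p: inputs[inp] ** 2}

with nengo_dl.Simulator(net, minibatch_size=32) as sim:
    sim.train(inputs, targets, tf.train.GradientDescentOptimizer(0.1),
              n_epochs=10)

Second, inserting TensorFlow code into a model. The nengo_dl.TensorNode signature used here (a function of the simulation time and a (minibatch, size_in) tensor, with explicit size_in/size_out) is an assumption based on later NengoDL documentation, and tf.layers.dense is the TensorFlow 1.x layers API, so treat this as a sketch rather than the exact 0.3.0 interface.

import nengo
import nengo_dl
import numpy as np
import tensorflow as tf

def dense_layer(t, x):
    # x is a (minibatch, size_in) tensor; must return (minibatch, size_out)
    return tf.layers.dense(x, 10, activation=tf.nn.relu)

with nengo.Network() as net:
    inp = nengo.Node(output=np.sin)
    tf_node = nengo_dl.TensorNode(dense_layer, size_in=1, size_out=10)
    nengo.Connection(inp, tf_node)
    p = nengo.Probe(tf_node)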
Installation
Installation instructions can be found here.
Release History
0.3.0 (April 25, 2017)
Added
Use logger for debug/builder output
Implemented TensorFlow gradients for sparse Variable update Ops, to allow models with those elements to be trained
Added tutorial/examples on using Simulator.train
Added support for training models when unroll_simulation=False
Compatibility changes for Nengo 2.4.0
Added a new graph planner algorithm, which can improve simulation speed at the cost of build time
Changed
Significant improvements to simulation speed
    Use sparse Variable updates for signals.scatter/gather
    Improved graph optimizer memory organization
    Implemented sparse matrix multiplication op, to allow more aggressive merging of DotInc operators
Significant improvements to build speed
    Added early termination to graph optimization
    Algorithmic improvements to graph optimization functions
Reorganized documentation to more clearly direct new users to relevant material
Fixed
Fixed a bug where passing a built model to the Simulator more than once would result in an error
Cached the results of calls to tensor_graph.build_loss/build_optimizer, so that duplicate elements are not created in the graph on repeated calls
Fixed support for Variables on the GPU when unroll_simulation=False
SimPyFunc operators are now always assigned to the CPU, even when device="/gpu:0", since there is no GPU kernel
Fixed a bug where Simulator.loss was not computed correctly for models with internal state
Data/targets passed to Simulator.train are now truncated if they are not evenly divisible by the specified minibatch size
Fixed a bug where in some cases Nodes with side effects would not be run if their output was not used in the simulation
Fixed a bug where strided reads that covered a full array would be interpreted as non-strided reads of the full array
0.2.0 (March 13, 2017)
Initial release of TensorFlow-based NengoDL
0.1.0 (June 12, 2016)
Initial release of Lasagne-based NengoDL
Project details
Hashes for nengo_dl-0.3.0-1-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 47b29c82bddc540ce7c7e9287e31617c35d70bb1b112536af3eb8d25dd666d68
MD5 | 178f4b6d60d5e3cf70d30f02966eaaa6
BLAKE2b-256 | d2d0d8bae1dc2394e8efc4fa26edf26e1d9eedf010fcf562f488b58666a3fa4a