Nengo: Large-scale brain modelling in Python
Nengo depends on NumPy, and we recommend that you
install NumPy before installing Nengo.
If you’re not sure how to do this, we recommend using Anaconda.
To install Nengo:
pip install nengo
If you have difficulty installing Nengo or NumPy,
please read the more detailed
Nengo installation instructions first.
If you’d like to install Nengo from source,
please read the developer installation instructions.
Nengo is tested to work on Python 2.7 and 3.4+.
Questions relating to Nengo, whether its use or its development, should be
asked on the Nengo forum at https://forum.nengo.ai.
2.3.1 (February 18, 2017)
- Added documentation on config system quirks.
- Added nengo.utils.network.activate_direct_mode function to make it
easier to activate direct mode in networks where some parts require neurons.
- The matrix multiplication example will now work with matrices of any size
  and uses the Product network for clarity.
- Fixed instances in which passing a callable class as a function could fail.
- Fixed an issue in which probing some attributes would be one timestep
faster than other attributes.
- Fixed an issue in which SPA models could not be copied.
- Fixed an issue in which Nengo would crash if other programs
had locks on Nengo cache files in Windows.
- Integer indexing of Nengo objects out of range raises an IndexError
now to be consistent with standard Python behaviour.
- Documentation that applies to all Nengo projects has been moved to
2.3.0 (November 30, 2016)
- It is now possible to probe scaled_encoders on ensembles.
- Added copy method to Nengo objects. Nengo objects can now be pickled.
- A progress bar now tracks the build process
in the terminal and Jupyter notebook.
- Added nengo.dists.get_samples function for convenience
when working with distributions or samples.
- Access to probe data via nengo.Simulator.data is now cached,
making repeated access much faster.
- Access to nengo.Simulator.model is deprecated. To access static data
generated during the build use nengo.Simulator.data. It provides access
to everything that nengo.Simulator.model.params used to provide access to
and is the canonical way to access this data across different backends.
2.2.0 (September 12, 2016)
- It is now possible to pass a NumPy array to the function argument
of nengo.Connection. The values in the array are taken to be the
targets in the decoder solving process, which means that the eval_points
must also be set on the connection.
- nengo.utils.connection.target_function is now deprecated, and will
be removed in Nengo 3.0. Instead, pass the targets directly to the
connection through the function argument.
- Dropped support for NumPy 1.6. Oldest supported NumPy version is now 1.7.
- Added a nengo.backends entry point to make the reference simulator
discoverable for other Python packages. In the future all backends should
declare an entry point accordingly.
- Added ShapeParam to store array shapes.
- Added ThresholdingPreset to configure ensembles for thresholding.
- Tweaked rasterplot so that spikes from different neurons don’t overlap.
- Added a page explaining the config system and preset configs.
- Fixed some situations where the cache index becomes corrupt by
writing the updated cache index atomically (in most cases).
- The synapse methods filt and filtfilt now support lists as input.
- Added a registry system so that only stable objects are cached.
- Nodes now support array views as input.
2.1.2 (June 27, 2016)
- The DecoderCache is now more robust when used improperly, and no longer
  requires changes to backends in order to be used properly.
2.1.1 (June 24, 2016)
- Improved the default LIF neuron model to spike at the same rate as the
LIFRate neuron model for constant inputs. The older model has been
moved to nengo_extras
under the name FastLIF.
- Added y0 attribute to WhiteSignal, which adjusts the phase of each
dimension to begin with absolute value closest to y0.
- Allow the AssociativeMemory to accept Semantic Pointer expressions as
input_keys and output_keys.
- The DecoderCache is used as context manager instead of relying on the
__del__ method for cleanup. This should solve problems with the
cache’s file lock not being removed. It might be necessary to
manually remove the index.lock file in the cache directory after
upgrading from an older Nengo version.
- If the cache index is corrupted, we now fail gracefully by invalidating
the cache and continuing rather than raising an exception.
- The Nnls solver now works for weights. The NnlsL2 solver has been
  improved by clipping values to be non-negative before forming
  the Gram system.
- Eliminated a memory leak in the parameter system.
- Allow recurrence of the form a=b, b=a in basal ganglia SPA actions.
- Support a greater range of Jupyter notebook and ipywidgets versions with
  the ipynb extensions.
2.1.0 (April 27, 2016)
- A new class for representing stateful functions called Process
has been added. Node objects are now process-aware, meaning that
a process can be used as a node’s output. Unlike non-process
callables, processes are properly reset when a simulator is reset.
See the processes.ipynb example notebook, or the API documentation
for more details.
- Spiking LIF neuron models now accept an additional argument,
min_voltage. Voltages are clipped such that they do not drop below
this value (previously, this was fixed at 0).
- The PES learning rule no longer accepts a connection as an argument.
  Instead, error information is transmitted by making a connection to the
  learning rule object (e.g.,
  nengo.Connection(error_ensemble, conn.learning_rule)).
- The modulatory attribute has been removed from nengo.Connection.
This was only used for learning rules to this point, and has been removed
in favor of connecting directly to the learning rule.
- Connection weights can now be probed with nengo.Probe(conn, 'weights'),
and these are always the weights that will change with learning
regardless of the type of connection. Previously, either decoders or
transform may have changed depending on the type of connection;
it is now no longer possible to probe decoders or transform.
- A version of the AssociativeMemory SPA module is now available as a
stand-alone network in nengo.networks. The AssociativeMemory SPA module
also has an updated argument list.
- The Product and InputGatedMemory networks no longer accept a
config argument. (#814)
- The EnsembleArray network’s neuron_nodes argument is deprecated.
Instead, call the new add_neuron_input or add_neuron_output methods.
- The nengo.log utility function now takes a string level parameter
to specify any logging level, instead of the old binary debug parameter.
Cache messages are logged at DEBUG instead of INFO level.
- Reorganised the Associative Memory code, including removing many extra
parameters from nengo.networks.assoc_mem.AssociativeMemory and modifying
the defaults of others.
- Added a close method to Simulator. Simulator can now be
  used as a context manager.
- Most exceptions that Nengo can raise are now custom exception classes
that can be found in the nengo.exceptions module.
- All Nengo objects (Connection, Ensemble, Node, and Probe)
now accept a label and seed argument if they didn’t previously.
- In nengo.synapses, filt and filtfilt are deprecated. Every
synapse type now has filt and filtfilt methods that filter
using the synapse.
- Connection objects can now accept a Distribution for the transform
argument; the transform matrix will be sampled from that distribution
when the model is built.
- The sign on the PES learning rule’s error has been flipped to conform
with most learning rules, in which error is minimized. The error should be
actual - target. (#642)
- The PES rule’s learning rate is invariant to the number of neurons
in the presynaptic population. The effective speed of learning should now
be unaffected by changes in the size of the presynaptic population.
Existing learning networks may need to be updated; to achieve identical
behavior, scale the learning rate by pre.n_neurons / 100.
- The probeable attribute of all Nengo objects is now implemented
as a property, rather than a configurable parameter.
- Node functions receive x as a copied NumPy array (instead of a readonly
  view).
- The SPA Compare module produces a scalar output (instead of a specific
  vocabulary vector).
- Bias nodes in spa.Cortical, and gate ensembles and connections in
spa.Thalamus are now stored in the target modules.
- The filt and filtfilt functions on Synapse now use the initial
value of the input signal to initialize the filter output by default. This
provides more accurate filtering at the beginning of the signal, for signals
that do not start at zero.
- Added Ensemble.noise attribute, which injects noise directly into
neurons according to a stochastic Process.
- Added a randomized_svd subsolver for the L2 solvers. This can be much
quicker for large numbers of neurons or evaluation points.
- Added PES.pre_tau attribute, which sets the time constant on a lowpass
filter of the presynaptic activity.
- EnsembleArray.add_output now accepts a list of functions
to be computed by each ensemble.
- LinearFilter now has an analog argument which can be set
through its constructor. Linear filters with digital coefficients
can be specified by setting analog to False.
- Added SqrtBeta distribution, which describes the distribution
of semantic pointer elements.
- Added Triangle synapse, which filters with a triangular FIR filter.
- Added utils.connection.eval_point_decoding function, which
provides a connection’s static decoding of a list of evaluation points.
- Resetting the Simulator now resets all Processes, meaning the
  injected random signals and noise are identical between runs,
  unless the seed is changed (which can be done through Simulator.reset).
- An exception is raised if SPA modules are not properly assigned to an SPA
  attribute.
- The Product network is now more accurate.
- NumPy arrays can now be used as indices for slicing objects.
- Config.configures now accepts multiple classes rather than
just one. (#842)
- Added an add method to spa.Actions, which allows
  actions to be added after the module has been initialized.
- Added an SPA wrapper for circular convolution networks, spa.Bind.
- Added the Voja (Vector Oja) learning rule type, which updates an
  ensemble’s encoders to fire selectively for its inputs.
- Added a clipped exponential distribution useful for thresholding, in
particular in the AssociativeMemory.
- Added a cosine similarity distribution, which is the distribution of the
cosine of the angle between two random vectors. It is useful for setting
intercepts, in particular when using the Voja learning rule.
- nengo.synapses.LinearFilter now has an evaluate method to
evaluate the filter response to sine waves of given frequencies. This can
be used to create Bode plots, for example.
- nengo.spa.Vocabulary objects now have a readonly attribute that
can be used to disallow adding new semantic pointers. Vocabulary subsets
are read-only by default.
- Improved performance of the decoder cache by writing all decoders
of a network into a single file.
- Fixed issue where setting Connection.seed through the constructor had
no effect. (#724)
- Fixed issue in which learning connections could not be sliced.
- Fixed issue when probing scalar transforms.
- Fix for SPA actions that route to a module with multiple inputs.
- Corrected the rmses values in BuiltConnection.solver_info when using
  Nnls and NnlsL2 solvers, and the reg argument for NnlsL2.
- spa.Vocabulary.create_pointer now respects the specified number of
creation attempts, and returns the most dissimilar pointer if none can be
found below the similarity threshold.
- Probing a Connection’s output now returns the output of that individual
Connection, rather than the input to the Connection’s post Ensemble.
- Fixed thread-safety of using networks and config in with statements.
- The decoder cache will only be used when a seed is specified.
2.0.4 (April 27, 2016)
- Cache now fails gracefully if the legacy.txt file cannot be read.
This can occur if a later version of Nengo is used.
2.0.3 (December 7, 2015)
- The spa.State object replaces the old spa.Memory and spa.Buffer.
These old modules are deprecated and will be removed in 2.2.
2.0.2 (October 13, 2015)
2.0.2 is a bug fix release to ensure that Nengo continues
to work with more recent versions of Jupyter
(formerly known as the IPython notebook).
- The IPython notebook progress bar has to be activated with
  %load_ext nengo.ipynb.
- Added [progress] section to nengorc which allows setting
progress_bar and updater.
- Fix compatibility issues with newer versions of IPython,
and Jupyter. (#693)
2.0.1 (January 27, 2015)
- Node functions receive t as a float (instead of a NumPy scalar)
and x as a readonly NumPy array (instead of a writeable array).
- rasterplot works with 0 neurons, and generates much smaller PDFs.
- Fix compatibility with NumPy 1.6.
2.0.0 (January 15, 2015)
Initial release of Nengo 2.0!
Supports Python 2.6+ and 3.3+.
Thanks to all of the contributors for making this possible!