
SANA-FE

Copyright (c) 2025 - The University of Texas at Austin

This work was produced under contract #2317831 to National Technology and Engineering Solutions of Sandia, LLC which is under contract No. DE-NA0003525 with the U.S. Department of Energy.

Simulating Advanced Neuromorphic Architectures for Fast Exploration (SANA-FE)

A framework for modeling the energy usage and performance of different neuromorphic hardware.

Citation

We hope that you find this project useful. If you use SANA-FE in your work, please cite our paper:

James A. Boyle, Mark Plagge, Suma George Cardwell, Frances S. Chance, and Andreas Gerstlauer, "SANA-FE: Simulating Advanced Neuromorphic Architectures for Fast Exploration," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 44, no. 8, pp. 3165–3178, 2025, doi:10.1109/TCAD.2025.3537971.

@article{boyle2025sanafe,
  title={SANA-FE: Simulating Advanced Neuromorphic Architectures for Fast Exploration},
  author={James A. Boyle and Mark Plagge and Suma George Cardwell and Frances S. Chance and Andreas Gerstlauer},
  journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD)},
  volume={44},
  number={8},
  pages={3165--3178},
  year={2025},
  doi={10.1109/TCAD.2025.3537971}
}

To Build

This project uses CMake as its build system and dependency manager. To set up compilation, first create a temporary build directory: mkdir build && cd build

Run the following command in this build directory: cmake ..

Then compile SANA-FE and copy it to the project directory by running the command: make -j 12 && make install && cd ..

The -j option sets the number of parallel build threads; this should not exceed the number of threads your system supports.

Dependencies

Building this project requires cmake, make, and a compiler that supports the C++17 standard (e.g., GCC >= 8, Clang >= 5). This project uses RapidYAML for all YAML file parsing, and Booksim 2 for optional cycle-accurate NoC modeling. To build the Python interfaces, you must also have Python >= 3.8 installed with PyBind11. You can install PyBind11 using:

pip install pybind11

To Run an Example

./sim arch/example.yaml snn/example.yaml 100

This simulates 100 time-steps of a tiny connected spiking neural network (SNN).

General usage:

./sim [optional flags] <architecture description> <SNN description> <N timesteps>

In addition to the standalone simulator, SANA-FE can also be scripted using a Python API. For an example of how this can be done, see the Jupyter notebook-based tutorials in the tutorial/ directory.

Additional examples and experiments may be found in the scripts/ directory.

Simulator Inputs

SANA-FE takes command-line arguments, an architecture description file (YAML), and an SNN description file (YAML). Both description files use custom formats. Examples of architectures may be found in arch/, and examples of SNNs in snn/.

Optional command line flags can be used to enable simulation traces. Note that after enabling traces globally, you will still have to create probes at the neuron level to get trace output.

Flags:

  • -m: Enable message traces to messages.csv
  • -n: Use the (legacy) netlist format for SNNs, instead of YAML.
  • -o: Output directory
  • -p: Record the simulated performance of each timestep to perf.csv
  • -s: Enable spike traces to spikes.csv
  • -t [simple/detailed/cycle]: Specify the timing model (default=detailed)
  • -v: Enable potential (voltage) traces to potential.csv
  • -N: Number of neuron/message processing threads (default=1)
  • -S: Number of scheduling threads (default=0, use main thread)

SNN Description

The SNN description format is based on the YAML file format.

Different mapped SNNs can be defined flexibly and generally using sections for neuron groups, edges, and hardware mappings. Each section allows for custom attributes to be defined, which are converted to model parameters within the simulator. While the keywords for sections are fixed, attributes allow for custom user-defined parameters to be associated with neurons and connections.

The SNN must be defined under the main network section. All other top-level sections are ignored. Then, we have groups, edges, and mappings sub-sections.

Groups of neurons are one or more neurons that may share some common attributes. This is similar to how other frameworks may define populations, or layers of similar neurons. How neurons are grouped is up to the user, but they can be useful for sharing common attributes or connections. Under the groups subsection, you must create a list of named neuron groups. Within each group is an attributes section and a neurons section.

In each neurons subsection, list all sets of neurons belonging to the group. For conciseness, we support specifying multiple neurons using the range (..) notation. Following each neuron, give its attributes as either a list or a mapping, e.g.,

  • 0..2: [attribute1: value1]
  • 3: {attribute1: value1}

In the edges section, define neuron-to-neuron connections or group-to-group hyperedges, including any edge attributes. The edge format uses a notation similar to the graph DOT format, e.g.,

  • layer1.0 -> layer2.1: [weight: 1]
  • layer1 -> layer2: [weight: 1]

Finally, in the mappings section we map neurons to hardware cores. Under the section heading is a list of mappings, with an example of one mapping as follows:

  • layer1.0..1: [core: 0.0]

As before, neurons may be given as a range for brevity. This maps two neurons to tile 0, core 0 (the first core in the chip).

As long as valid YAML syntax is used, SANA-FE does not distinguish between different styles (block style, flow style, or a mix of the two). For one example of a simple SNN, see snn/example.yaml.
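Combining the sections above, a minimal SNN description might look like the following sketch. The section keywords (network, groups, edges, mappings) follow the description above, but the attribute names (threshold, weight, the core addressing) are illustrative; see snn/example.yaml for a real example:

```yaml
network:
  groups:
  - name: layer1
    attributes: [threshold: 1.0]   # shared by all neurons in the group
    neurons:
    - 0..1: []
  - name: layer2
    attributes: [threshold: 2.0]
    neurons:
    - 0..1: []
  edges:
  - layer1.0 -> layer2.0: [weight: 1]
  - layer1.1 -> layer2.1: [weight: 2]
  mappings:
  - layer1.0..1: [core: 0.0]
  - layer2.0..1: [core: 0.1]
```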

Architecture Description

The architecture description format is also based on the YAML file format.

Different architectures are defined using a hierarchical description. SANA-FE models neuromorphic designs under several simplifying assumptions:

  1. The chip is time-step based. A time-step is a small discrete amount of time. This is as opposed to a purely event-driven simulation, e.g., ROSS.
  2. The neural cores adhere to some common design patterns.

At the top level, the description begins with the "architecture" keyword. Any other top-level sections will be ignored. This defines anything at the chip level, including the NoC interconnect.

A chip contains one or more network tiles, representing some shared network resources e.g., a router. Each tile contains one or more cores, where a core performs computation. Each neuromorphic core contains a fixed spike processing hardware pipeline. It is assumed that tiles and cores are all parallel processing elements.

Each core is assumed to have a neuromorphic pipeline which processes the updates for one or more neurons. The pipeline is a fixed sequence of niche hardware units. Those hardware units could contain digital logic, analog circuits or even novel devices.

The pipeline contains the following units:

  • The input axon unit receives spike packets from the network and generates synaptic addresses for memory lookups.

  • The synaptic unit looks up connectivity for incoming spikes and updates any relevant synaptic currents.

  • The dendritic unit combines currents based on some internal structure and a set of operations.

  • The soma unit updates membrane potentials based on the dendritic current and neuron model. If the firing criterion is met, it generates a spike for that neuron.

  • The output axon unit sends spikes from the soma out to the network, on to other cores' pipelines.
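As a toy illustration of what a soma model computes (this is not SANA-FE's actual implementation, and the parameter names are made up), a leaky integrate-and-fire update might look like:

```python
def lif_update(potential, current, leak=0.9, threshold=1.0, reset=0.0):
    """One time-step of a toy leaky integrate-and-fire soma.

    Decay the membrane potential, add the dendritic current, and
    fire if the threshold is crossed. All names here are illustrative.
    """
    potential = potential * leak + current
    fired = potential >= threshold
    if fired:
        potential = reset  # reset the membrane potential after a spike
    return potential, fired

# Drive the neuron with a constant input current for 10 time-steps
v, spikes = 0.0, 0
for _ in range(10):
    v, fired = lif_update(v, current=0.4)
    spikes += int(fired)
# With these constants the neuron fires every third step (3 spikes total)
```

A real soma unit would additionally report the energy and latency cost of each update back to the simulator kernel.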

For an example, see arch/loihi.yaml. The description is a nested series of keywords, where keywords define required hardware units. Each block must contain a name keyword, which may optionally specify a number of instances. Units are duplicated the number of times specified in the range, for example:

# Define 8 cores, 0 through 7
- name: neuromorphic_core[0..7]

Units must also have both an attributes section and the next hardware units in the hierarchy. The attributes section generates one or more parameters that are passed to the simulator and parsed by the relevant hardware models, implemented either internally (models.cpp) or externally (plugins).
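Putting these pieces together, a minimal architecture description might be structured along these lines. This is a hedged sketch: the exact keywords and attribute names (topology, energy_north_hop, the model names, etc.) are illustrative, so consult arch/loihi.yaml for a real, complete example:

```yaml
architecture:
  name: example_chip
  attributes: {topology: mesh}            # chip/NoC-level parameters
  tile:
  - name: example_tile[0..3]              # 4 tiles sharing NoC resources
    attributes: {energy_north_hop: 1.0e-12}
    core:
    - name: example_core[0..1]            # 2 cores per tile
      attributes: {}
      axon_in: [{name: in, attributes: {}}]
      synapse: [{name: syn, attributes: {model: current_based}}]
      dendrite: [{name: dend, attributes: {}}]
      soma: [{name: soma, attributes: {model: leaky_integrate_fire}}]
      axon_out: [{name: out, attributes: {}}]
```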

Simulator Outputs

If corresponding traces are enabled, output is saved to trace files with hard-coded names using either csv or yaml extensions.

spikes.csv: The spikes for each time-step on probed neurons

potential.csv: The potentials for each time-step on probed neurons

perf.csv: Detailed statistics for each timestep and each hardware unit

messages.csv: Information on spike messages for each time-step

run_summary.yaml: High-level statistics for the simulation e.g. runtime

Simulator Kernel

SANA-FE uses a user-provided spiking architecture, a mapped SNN, and run-time configuration to simulate a spiking chip as it executes a spiking application. SANA-FE uses the Architecture to compile a SpikingChip, onto which it then loads the mapped SNN. SANA-FE then rapidly simulates the design at a time-step granularity.

During each time-step SANA-FE models custom spike-processing pipelines executing within each core, modeling the processing of neurons and spike messages. Using our spiking hardware template, we enable custom hardware blocks to be incorporated for axonal, synaptic, dendritic and somatic hardware. Each hardware unit is implemented using a model - you can take the built-in hardware unit models provided in models.cpp, or implement models externally as hardware unit plugins using the fixed base class interfaces. The SANA-FE kernel coordinates all on-chip activity, makes calls to the models and tracks the total energy and latency across the chip.

SANA-FE includes efficient but detailed semi-analytical timing models. These take aggregated information about all spike messages generated in a time-step and call a custom scheduler in schedule.cpp. The on-chip schedule ultimately gives a reasonably accurate prediction of chip timing, accounting for effects such as blocking in the NoC and custom latency models within hardware units.
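To give a flavor of the idea (this is a toy sketch, not SANA-FE's scheduler), the simplest possible timing model might take the summed message-processing cost per core for one time-step and, since cores run in parallel, estimate the step latency as the cost of the busiest core:

```python
def simple_timestep_latency(core_costs):
    """Toy 'simple' timing model: cores process their messages in
    parallel, so the time-step latency is bounded by the busiest core.
    core_costs maps a (tile, core) id to its summed processing time (s).
    Names and structure here are illustrative, not SANA-FE internals.
    """
    return max(core_costs.values(), default=0.0)

# Two cores in tile 0: the slower one (4 us) bounds the time-step
latency = simple_timestep_latency({(0, 0): 2.5e-6, (0, 1): 4.0e-6})
```

The detailed model improves on this by scheduling individual messages onto the NoC, which is where blocking effects come in.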

Plugins

As part of SANA-FE, the user can implement different hardware models using custom plugins. Models for synapses, dendrites, and somas are all supported. SANA-FE implements all of its built-in synaptic, dendritic, and somatic hardware models on top of a common hardware model base class. Using the same PipelineUnit base class, you can implement your own models as hardware plugins.

Using Plugins

There is one example already provided in the /plugins folder implementing a Hodgkin-Huxley neuron model (hodgkin_huxley.cpp). There are a few steps required to use plugins in SANA-FE:

  1. Specify the plugin path in the architecture yaml file, in the corresponding synapse, dendrite or soma hardware section. Specify the plugin path using the attribute plugin: <pathname>.
  2. Specify the model name using the attribute model: <name>.
  3. Map neurons to the hardware unit as usual with the attribute: soma_hw_name.

For example, for the Hodgkin-Huxley example provided with SANA-FE, you could use it as follows:

# Rest of arch description
...
soma:
- name: plugin_example_soma
  attributes:
    plugin: plugins/hodgkin_huxley.cpp
    model: HodgkinHuxley
...

Creating a New Plugin

SANA-FE can run any models provided as user plugins. The plugin must be compiled as a shared library containing one or more hardware models. Models can execute arbitrary code, but interfaces must be derived either from the general PipelineUnit class, or one of the specialized SynapseUnit, DendriteUnit or SomaUnit base classes.

SANA-FE's plugin mechanism makes it easy to integrate plugins with your architectural simulations. However, a few steps are needed to get plugins running:

  1. You must make sure your plugin has been built as a shared library (.so), either by updating the plugin CMake file or providing your own build scripts.
  2. Your new plugin must implement a hardware model class with the hardware functionality you want. The model class you implement must be derived from PipelineUnit in chip.hpp, which defines the required interfaces. These are enforced by pure virtual methods, including attribute parsing and update methods. For examples of different hardware models, see either models.cpp or the plugins folder.
  3. Finally, provide a class factory function that returns a new instance of your model class. This has to be in the format create_<modelname>. For example, for a HodgkinHuxley model, we would specify the following code in the plugin C++ file:
extern "C" sanafe::PipelineUnit *create_HodgkinHuxley()
{
    return (sanafe::PipelineUnit *) new HodgkinHuxley();
}

It is recommended new users look through the rest of the hodgkin_huxley.cpp file to see what an example plugin looks like.

Legacy SNN Description (Netlist) Format

Version 1 of SANA-FE (written in C) defined a simpler, less capable SNN description format (compared to the current YAML-based format). For backward compatibility, the netlist-style format is still supported. To use this format, pass the command-line flag -n.

In the netlist format each line defines a new entry, which may either be a neuron group (g), neuron (n), edge (e), or mapping (&). Each line starts with the type of entry followed by one required field and then any number of named attributes. Fields are separated by one or more spaces.

Attributes are defined using the syntax: <attribute>=<value>. Note, there is no space before or after the equals. The attribute soma_hw_name is required to be set for every neuron or neuron group.

A neuron group helps reduce the number of repeated, shared parameters for a population of neurons e.g., for a layer of neurons in a deep SNN.

g <number of neurons> <common attributes>

Neurons are addressed using the group number followed by the neuron number, after which all attributes are specified. Note that the group must be defined first.

n group_id.neuron_id <unique attributes>

An edge connects one source neuron (presynaptic) to one destination neuron (postsynaptic). The edge may also have attributes such as synaptic weight.

e src_group_id.src_neuron_id->dest_group_id.dest_neuron_id <edge attributes>

Finally, mappings place predefined neurons on a hardware core. Here we specify the neuron and the core.

& group_id.neuron_id@tile_id.core_id

An example of how to use the netlist format is given in snn/example.net.
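Because each netlist line is a flat, space-separated record, the format is easy to generate programmatically. The sketch below emits a toy netlist following the entry types and attribute syntax described above; the specific attribute names (threshold, w) are illustrative, not required by the format:

```python
def netlist_lines(n_neurons, weight):
    """Emit a toy netlist: one group, a chain of neurons, and mappings.

    Entry types: 'g' = group, 'n' = neuron, 'e' = edge, '&' = mapping.
    Attributes use <attribute>=<value> with no spaces around '='.
    """
    lines = [f"g {n_neurons} soma_hw_name=soma_0"]
    for i in range(n_neurons):
        lines.append(f"n 0.{i} threshold=1.0")          # 'threshold' is illustrative
    for i in range(n_neurons - 1):
        lines.append(f"e 0.{i}->0.{i + 1} w={weight}")  # 'w' is illustrative
    for i in range(n_neurons):
        lines.append(f"& 0.{i}@0.0")                    # map all to tile 0, core 0
    return lines

snn = netlist_lines(3, weight=2)
# Produces 9 lines: 1 group, 3 neurons, 2 chain edges, 3 mappings
```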

Project Code

This project has been written in C++ and Python. All code is in the /src directory. See header files for more detail on supported classes and functions.

C++ code has been written using the C++17 standard.

References

James A. Boyle, Jason Ho, Mark Plagge, Suma George Cardwell, Frances S. Chance, and Andreas Gerstlauer, "Exploring Dendrites in Large-Scale Neuromorphic Architectures," in International Conference on Neuromorphic Systems (ICONS), Seattle, WA, USA, 2025, doi:10.1109/ICONS69015.2025.00018.

James A. Boyle, Mark Plagge, Suma George Cardwell, Frances S. Chance, and Andreas Gerstlauer, "SANA-FE: Simulating Advanced Neuromorphic Architectures for Fast Exploration," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol. 44, no. 8, pp. 3165–3178, 2025, doi:10.1109/TCAD.2025.3537971.

James A. Boyle, Mark Plagge, Suma George Cardwell, Frances S. Chance, and Andreas Gerstlauer, "Tutorial: Large-Scale Spiking Neuromorphic Architecture Exploration using SANA-FE," in International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), Raleigh, NC, USA, 2024, doi:10.1109/CODES-ISSS60120.2024.00007.

James A. Boyle, Mark Plagge, Suma George Cardwell, Frances S. Chance, and Andreas Gerstlauer, "Performance and Energy Simulation of Spiking Neuromorphic Architectures for Fast Exploration," in International Conference on Neuromorphic Systems (ICONS), Santa Fe, NM, USA, 2023, doi:10.1145/3589737.3605970.

Contact

James Boyle: james.boyle@utexas.edu
