
⚡ Spark: Modular Spiking Neural Networks


Build, train, and deploy state-of-the-art Spiking Neural Networks with a powerful visual interface and JAX.


Spark is a next-generation framework designed to simplify and accelerate the research, development, and deployment of non-gradient-based Spiking Neural Networks (SNNs). Our goal is to make SNNs more accessible to researchers, engineers, and enthusiasts by abstracting away boilerplate code and providing intuitive tools for model creation and experimentation, while maintaining state-of-the-art performance.

Key Features

⚡ High-Performance Backend: Powered by JAX and Flax NNX, Spark enables just-in-time (JIT) compilation and state management of entire models.

🧩 Modular & Extensible: Modular by construction. Everything in Spark (that is worth interacting with) is a self-contained module. Easily create, modify, and share custom neuron models, synapses, and plasticity rules. Ever wanted a neuron with 3 somas, 2 sets of synapses, and 2.5 plasticity rules? As long as it can spike, you've come to the right place!

🔄 Seamless Workflow: Spiking neural networks are not special, so why should they require special data? One of Spark's core features is the concept of input and output interfaces: simple modules that transform regular datasets into streams of spikes, and decode streams of spikes back into boring data formats like floats.

🧠 Graph Editor: Design complex SNN architectures by dragging, dropping, and connecting pre-built neural components. No coding required for model design.
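To give a feel for what an input interface does conceptually, here is a generic rate-coding sketch in plain JAX (this is a standard Bernoulli rate-coding scheme, not Spark's actual interface API):

```python
import jax
import jax.numpy as jnp

def rate_encode(x, num_steps, key):
    """Bernoulli rate coding: each feature's value in [0, 1] becomes
    the per-step firing probability of one input neuron."""
    probs = jnp.clip(x, 0.0, 1.0)                       # firing probabilities
    u = jax.random.uniform(key, (num_steps,) + x.shape)
    return (u < probs).astype(jnp.float32)              # (num_steps, features) spike train

key = jax.random.PRNGKey(0)
x = jnp.array([0.0, 0.5, 1.0])          # an ordinary float feature vector
spikes = rate_encode(x, num_steps=100, key=key)

# Decoding back to floats: the average firing rate per neuron.
rates = spikes.mean(axis=0)
```

Averaging over time approximately recovers the original values: the neuron encoding 1.0 fires on every step, the one encoding 0.0 never fires, and the 0.5 neuron fires on roughly half of the steps.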

Getting Started

Spark is available on PyPI, so it can be installed with:

pip install spark-snn

Or, for the latest development version, clone this repository:

git clone https://github.com/nogarx/spark.git
cd spark
pip install -e .

Don't know what to do next? The tutorials are here to help you!

The Spark Graph Editor

Design your network's structure, set parameters for each component, and connect them to create a model.


The Spark visual interface for building SNNs.

You can start the Graph Editor from a Python session,

import spark

editor = spark.GraphEditor()
editor.launch()

or inside a Notebook!

%gui qt
import spark

editor = spark.GraphEditor()
editor.launch()

Afterwards, you can simply export your model and build it.

import spark

# Initialize the configuration class
brain_config = spark.nn.BrainConfig.from_file('./my_awesome_model.scfg')

# Construct the brain.
brain = spark.nn.Brain(config=brain_config)

# Build the brain.
brain(**my_awesome_inputs)

To harness the true power of Spark, your model needs to be JIT compiled. JAX requires your model to be traceable, which can sometimes be quite unintuitive. Fortunately, this is quite simple in Spark, and most of the time it will look very similar to the following function.

import jax
import spark

@jax.jit
def run_model(graph, state, **my_awesome_inputs):
    # Reconstruct the model from its graph definition and state.
    model = spark.merge(graph, state)
    # Compute.
    out = model(**my_awesome_inputs)
    # Split the model and recover its updated state.
    _, state = spark.split(model)
    return out, state

Finally, do the initial split of the model and use your function.

import jax.numpy as jnp

# Some dummy data.
my_awesome_inputs = {
    'x_0': jnp.ones((64,), dtype=jnp.float16),
    ...
    'x_k': jnp.ones((128,), dtype=jnp.float16),
}

# Split the model.
graph, state = spark.split(brain)

# Use run_model and reuse the state!
out, state = run_model(graph, state, **my_awesome_inputs)

# state now contains the updated state of the model!
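The split/merge pattern above is just JAX's usual way of threading state through a jitted function: state goes in as an argument and comes back out alongside the output. As a library-free illustration of the same shape (a hypothetical leaky integrate-and-fire step, not Spark's actual neuron API):

```python
import jax
import jax.numpy as jnp

@jax.jit
def lif_step(v, x, decay=0.9, threshold=1.0):
    """One step of a leaky integrate-and-fire neuron: state in, (output, new state) out."""
    v = decay * v + x                          # leak, then integrate the input current
    spikes = (v >= threshold).astype(jnp.float32)
    v = v * (1.0 - spikes)                     # reset the neurons that fired
    return spikes, v

v = jnp.zeros((4,))                            # initial membrane potentials
x = jnp.full((4,), 0.4)                        # constant input current
for _ in range(3):
    spikes, v = lif_step(v, x)                 # reuse the carried state each step
```

Because the function is pure (no hidden mutation), `jax.jit` can trace it once and reuse the compiled version for every step.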

There is much more to Spark than what we can show here. If you are ready to learn more, head over to the tutorial section!

Roadmap

We have many exciting features planned.

🔥 Components, a lot of them: Spark is built around the idea of modular neurons. The literature is full of really interesting ideas, but integrating them into existing code is often annoying and error-prone. One of our goals is to turn those ideas into modular, reusable, and pluggable code.

📊 Built-in Visualization: (Maybe Coming Soon?) Tools for visualizing spike trains, membrane potentials, and network activity in real-time.

🧮 Custom kernels: Spark is fast, but it can be faster! Several operations can be further optimized using custom kernels.

Contributing

Contributions are what make the open-source community such an amazing place, and any contributions you make are greatly appreciated. Want to contribute but don't know where to start? The discussion forum may be the place you are looking for!

Citing Spark

You can use the following reference to cite this repository,

@software{spark_snn_github,
  author = {Franco, Mario and Gershenson, Carlos},
  title = {Spark: Modular Spiking Neural Networks},
  url = {https://github.com/nogarx/spark/},
  version = {0.1.0},
  year = {2025},
}

