

briar

Large-Scale SNN framework with dendrites, built on Brian2

Research by Zubin Kane -- Makin Lab at Purdue University

Overview

briar provides pre-built architectures for studying how orientation selectivity and phase diversity emerge in visual cortex through spike-timing-dependent plasticity (STDP). It wraps Brian2 with a declarative layer for defining neuron pools, synapse pools, and learning rules, then handles device setup, compilation, monitoring, and result serialization automatically.

Key features:

  • Declarative architecture definition -- define pools and synapses as dataclasses, briar generates the Brian2 equations
  • Built-in architectures for common V1 models (feedforward, two-layer simple/complex, efficient coding)
  • Custom architecture for building networks from scratch
  • cpp_standalone and CUDA support with incremental compilation for fast repeated runs
  • Parameter sweeps with automatic grid search
  • Rich result objects with summary dashboards, diffs, and architecture-aware plotting

Installation

pip install briar
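
The CUDA device modes described under Device Modes below additionally require the separate brian2cuda backend, which is only supported on Linux with the CUDA toolkit installed. Installing it alongside briar might look like this (a sketch; check the brian2cuda documentation for toolkit requirements):

# Optional GPU backend for the CUDA standalone modes (Linux + CUDA toolkit)
pip install brian2cuda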

Quick Start

The simplest way to run an experiment is to pick an architecture and a task, then call buildrun():

from briar import SimpleComplex, NaturalImageTask

task = NaturalImageTask(n_patterns=500, image_size=32)
arch = SimpleComplex(task)
results = arch.buildrun()

This builds the full Brian2 network, runs the simulation, and returns a Results object containing all parameters, spike monitors, weight histories, and connectivity.

Inspecting results

# In Jupyter, just evaluate `results` to see the rich dashboard
results

# In a script, print() gives the same dashboard in plain text
print(results)

# results.summary() also prints the dashboard (same as print)
results.summary()

# Show only parameters that differ from defaults
results.diff()

# Architecture-aware default plots
results.plot()
results.plot(full=True)    # additional diagnostics
results.plot(debug=True)   # low-level debug panels
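
Because briar serializes results automatically (see Overview), a finished run can also be reloaded later and inspected with the same dashboard and plotting methods. A minimal sketch -- the path below is illustrative; actual filenames are timestamped, as shown in the Comparison Plots section:

from briar import Results

# Reload a previously serialized run and inspect it like a fresh Results object
saved = Results.load('dumps/SimpleComplex/NaturalImageTask/<timestamp>.pkl')
saved.summary()
saved.plot()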

Modifying parameters

Every architecture parameter can be overridden at construction. The diff() method shows exactly what changed:

arch = SimpleComplex(
    task,
    eta_ff_simple=1e-3,       # increase simple cell learning rate
    ff_complex_radius=6.0,    # widen complex cell receptive fields
)
results = arch.buildrun()

# diff() highlights only the non-default values
results.diff()

Adding pools to an existing architecture

Any architecture can be extended with additional pools after construction. Added pools are automatically discovered and built:

from briar import SimpleComplex, NaturalImageTask, NeuronPool, SynapsePool

task = NaturalImageTask(n_patterns=500, image_size=32)
arch = SimpleComplex(task)

# Add a new neuron pool and connect it
arch.add(NeuronPool(name='readout', n_neurons=8))
arch.add(SynapsePool(
    name='ff_readout',
    source=arch.simple_layer,
    target=arch.readout,
))

results = arch.buildrun()

Device Modes

briar supports four device modes:

from briar import SimpleComplex, NaturalImageTask, SimConfig

task = NaturalImageTask(n_patterns=500, image_size=32)

# C++ standalone (default) -- compiled to native code
arch = SimpleComplex(task)
results = arch.buildrun()

# Runtime -- interpreted, no compilation, useful for quick tests
sim = SimConfig(use_cpp=False)
arch = SimpleComplex(task, sim=sim)
results = arch.buildrun()

# CUDA standalone -- compiled to GPU code (requires brian2cuda on Linux)
sim = SimConfig(use_cuda=True)
arch = SimpleComplex(task, sim=sim)
results = arch.buildrun()

# CUDA without reduce -- skip .cu file combining (for debugging)
sim = SimConfig(use_cuda=True, cuda_reduce=False)
arch = SimpleComplex(task, sim=sim)
results = arch.buildrun()

C++ and CUDA modes use incremental compilation -- the first build is slow, but subsequent runs reuse the compiled binary and finish in seconds.
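
A rough way to see the effect is to time two consecutive runs of the same architecture in the default cpp_standalone mode; a sketch (actual timings depend on the machine and network size):

import time
from briar import SimpleComplex, NaturalImageTask

task = NaturalImageTask(n_patterns=500, image_size=32)

t0 = time.perf_counter()
SimpleComplex(task).buildrun()   # first run: full compile
t1 = time.perf_counter()
SimpleComplex(task).buildrun()   # second run: reuses the compiled binary
t2 = time.perf_counter()

print(f'first build+run:  {t1 - t0:.1f} s')
print(f'second build+run: {t2 - t1:.1f} s')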

Architectures

Simple

Feedforward-only: LGN ON/OFF inputs connect to simple cells via STDP. Tests whether feedforward learning alone can produce orientation selectivity with phase diversity.

input_pool (LGN ON/OFF) -> simple_layer (STDP)

from briar import Simple, RetinalWaveTask

task = RetinalWaveTask(n_waves=200, image_size=32)
arch = Simple(task, eta_ff_simple=5e-4)
results = arch.buildrun()

SimpleComplex

Two-layer architecture inspired by Antolik & Bednar (2011) with dendritic predictive coding from Mikulasch et al. (2021). Simple cells learn from LGN input via somatic STDP; complex cells learn from simple cells via dendritic predictive coding. Includes Mexican hat recurrent connections and feedback.

input_pool (LGN ON/OFF) -> simple_layer (somatic, STDP)
simple_layer -> complex_layer (dendritic, predictive coding)
complex_layer -> complex_layer (Mexican hat recurrent)
complex_layer -> simple_layer (Mexican hat feedback, fixed)

from briar import SimpleComplex, NaturalImageTask

task = NaturalImageTask(n_patterns=500, image_size=32)
arch = SimpleComplex(task)
results = arch.buildrun()

EfficientEncoder

Single-layer efficient coding model with feedforward predictive coding, Hebbian recurrent connections, and a decoder for reconstruction loss monitoring.

input_pool -> layer1 (dendritic, predictive coding)
layer1 -> layer1 (dendritic, Hebbian recurrent)
layer1 -> decoder (reconstruction loss)

from briar import EfficientEncoder, BarTask

task = BarTask(image_size=8, n_patterns=1000)
arch = EfficientEncoder(task)
results = arch.buildrun()

Custom

A blank architecture with no pre-defined pools. Build any network from scratch by adding pools and wiring them manually:

from briar import (
    Custom, NeuronPool, SynapsePool, PoissonPool,
    DecoderPool, SimConfig, PlasticityRule,
)
from briar.tasks import BarTask
from briar.datastructures import Compartment, DendriticRule

task = BarTask(image_size=8, n_patterns=500)
arch = Custom(task)

# Input layer
arch.add(PoissonPool(name='input', task=task))

# Hidden layer
arch.add(NeuronPool(name='hidden', n_neurons=16))

# Feedforward synapses with STDP
arch.add(SynapsePool(
    name='ff',
    source=arch.input,
    target=arch.hidden,
    eta=5e-4,
    plasticity_rule=PlasticityRule.STDP,
))

# Spike monitor
arch.hidden.add_monitor('spikes')

results = arch.buildrun()

Parameter Sweeps

Run multiple experiments varying one or more parameters:

from briar import SimpleComplex, NaturalImageTask, sweep

task = NaturalImageTask(n_patterns=500)

# Sweep a single parameter
results = sweep(
    SimpleComplex, task, use_cpp=False,
    eta_ff_simple=[5e-5, 5e-4, 5e-3, 5e-2, 5e-1],
)

# Mix fixed overrides with swept parameters
results = sweep(
    SimpleComplex, task,
    ff_complex_radius=6.0,                          # scalar -> fixed
    eta_ff_simple=[5e-5, 5e-4, 5e-3, 5e-2, 5e-1],  # list -> swept
)

# Multi-parameter grid (Cartesian product)
results = sweep(
    SimpleComplex, task,
    eta_ff_simple=[1e-4, 1e-3],
    eta_ff_complex=[5e-5, 5e-4],
)  # 4 runs total

Comparison Plots

Compare results from a sweep (or manually loaded pickles) side-by-side:

from briar import plot_raster_comparison, plot_rate_ridgeline, Results

# Stacked raster plots
plot_raster_comparison(results, layer='simple')
plot_raster_comparison(results, layer='complex')

# Ridgeline firing rate distributions
plot_rate_ridgeline(results, layer='simple')
plot_rate_ridgeline(results, layer='complex')

# Works with manually loaded results too
r1 = Results.load('dumps/SimpleComplex/NaturalImageTask/20260304_091654.pkl')
r2 = Results.load('dumps/SimpleComplex/NaturalImageTask/20260304_131307.pkl')
plot_raster_comparison([r1, r2], labels=['eta=1e-4', 'eta=1e-3'])

Testing

# All tests
pytest

# Fast tests only - runtime mode; fastest when run sequentially (not in parallel)
pytest -m "not slow" --override-ini="addopts="

# Slow tests only - cpp_standalone compile+run
pytest -m slow

# Slow test *examples* for a specific architecture
pytest -m encoder
pytest -m "slow and simple_complex"
