[Paper] Synchrony-Gated Plasticity with Dopamine Modulation for Spiking Neural Networks

Published: December 8, 2025 at 01:10 AM EST
4 min read
Source: arXiv - 2512.07194v1

Overview

This paper proposes a new learning rule for deep spiking neural networks (SNNs) that blends biologically‑inspired dopamine‑modulated plasticity with a synchrony‑based signal. By turning raw spike timing into a compact “synchrony” metric, the authors can inject a local learning signal that is aware of the task loss, achieving modest but consistent accuracy improvements on several vision benchmarks without redesigning the network or optimizer.

Key Contributions

  • DA‑SSDP rule – a dopamine‑modulated spike‑synchrony‑dependent plasticity mechanism that gates local weight updates based on how synchrony correlates with the loss.
  • Batch‑level synchrony metric – collapses high‑resolution spike‑time logs into binary spike flags and first‑spike latencies, dramatically reducing memory overhead (a short sketch of this compaction follows this list).
  • Warm‑up gating phase – a short initial training window that automatically determines whether synchrony is informative for the task; if not, the gate defaults to 1, turning the rule into a lightweight regularizer.
  • Seamless integration – DA‑SSDP can be added on top of any surrogate‑gradient back‑propagation pipeline; it only touches deeper layers after the standard gradient step.
  • Empirical gains – consistent accuracy lifts on CIFAR‑10 (+0.42 %), CIFAR‑100 (+0.99 %), CIFAR10‑DVS (+0.1 %), and ImageNet‑1K (+0.73 %) with only a modest increase in compute.
  • Open‑source implementation – code released at https://github.com/NeuroSyd/DA-SSDP.
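
To make the compaction concrete, here is a minimal sketch (not the authors' code) of how a dense spike record could be collapsed into the flags and first‑spike latencies described above. The tensor shape [T, B, N] (timesteps, batch, neurons), the function name, and the kernel width `sigma` are illustrative assumptions.

```python
import torch

def compact_spike_stats(spikes: torch.Tensor, sigma: float = 2.0):
    """spikes: 0/1 tensor of shape [T, B, N] (timesteps, batch, neurons).
    Returns a binary "fired at least once" flag and a Gaussian kernel of the
    first-spike latency per neuron -- the only quantities kept per batch."""
    T = spikes.shape[0]
    fired = spikes.bool().any(dim=0)                                   # [B, N]
    t_idx = torch.arange(T, device=spikes.device).view(T, 1, 1)
    # first-spike time per neuron; neurons that never fire get the sentinel value T
    first_t = torch.where(spikes.bool(), t_idx, torch.full_like(t_idx, T)).min(dim=0).values
    # Gaussian kernel of the first-spike latency (sigma is an assumed width)
    latency_kernel = torch.exp(-first_t.float() ** 2 / (2 * sigma ** 2)) * fired.float()
    return fired, latency_kernel
```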

Methodology

  1. Spike representation – For each neuron the authors store two pieces of information per batch:

    • a binary indicator (did the neuron fire at least once?)
    • the latency of the first spike, passed through a Gaussian kernel.
      This avoids keeping full spike‑time traces, cutting memory usage by orders of magnitude.
  2. Synchrony metric – Within a batch, the pairwise coincidence of spikes across neurons is summed, yielding a scalar “synchrony” value that reflects how many neurons fire together.

  3. Dopamine‑modulated gate – During a brief warm‑up (e.g., the first few epochs), the correlation between synchrony and the task loss is measured. The gate g is set to the sign of this correlation (or to 1 if the correlation is near zero).

  4. Local weight update – After the usual surrogate‑gradient back‑propagation step, a second update is applied to deeper layers:

    \[ \Delta w \;\propto\; g \times \text{(pre-spike)} \times \text{(post-spike)} \times \exp\!\left(-\frac{\text{latency}^2}{2\sigma^2}\right) \]

    When g = 1, the term reduces to a simple two‑factor rule (pre‑ and post‑spike coincidence weighted by latency), acting as a regularizer. A minimal code sketch of steps 2–4 follows this list.

  5. Training pipeline – The method plugs into existing SNN training codebases that already use surrogate gradients; no changes to the loss function, optimizer, or network architecture are required.
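
For concreteness, the sketch below illustrates steps 2–4 (synchrony metric, warm‑up gate, and local update) in PyTorch‑style code. It is a reading of the description above, not the authors' implementation: function names, tensor shapes, the choice of a linear layer, and hyperparameters such as `eps` and `eta` are assumptions. The `fired` flags and latency kernels are the compact per‑batch statistics from the spike representation in step 1.

```python
import torch

def batch_synchrony(fired: torch.Tensor) -> torch.Tensor:
    """Scalar synchrony for one batch: summed pairwise spike coincidences.
    fired: [B, N] binary flags (did each neuron fire at least once?)."""
    counts = fired.float().sum(dim=1)          # neurons firing per sample
    return (counts * (counts - 1) / 2).sum()   # co-firing pairs, summed over the batch

def warmup_gate(sync_history, loss_history, eps: float = 0.05) -> float:
    """Gate g from the warm-up phase: the sign of the synchrony/loss correlation,
    defaulting to 1 when the correlation is near zero (lists of floats in, float out)."""
    s = torch.as_tensor(sync_history, dtype=torch.float)
    l = torch.as_tensor(loss_history, dtype=torch.float)
    s, l = s - s.mean(), l - l.mean()
    denom = s.norm() * l.norm()
    corr = (s @ l) / denom if denom > 0 else torch.zeros(())
    return 1.0 if abs(corr) < eps else float(torch.sign(corr))

@torch.no_grad()
def dassdp_update(weight, pre_fired, post_fired, post_latency_kernel, g, eta=1e-4):
    """Local update applied to a deeper (here: linear) layer AFTER the usual
    surrogate-gradient step. weight: [N_out, N_in]; pre_fired: [B, N_in];
    post_fired and post_latency_kernel: [B, N_out], the kernel already holding
    exp(-latency^2 / (2*sigma^2)) for each post-synaptic neuron."""
    post = post_fired.float() * post_latency_kernel   # latency-weighted post activity
    pre = pre_fired.float()
    delta_w = g * (post.t() @ pre) / pre.shape[0]     # batch-averaged pre/post coincidence, gated by g
    weight.add_(eta * delta_w)
```

In a training loop, one would record `batch_synchrony(fired)` alongside the loss during the warm‑up epochs, freeze `g` with `warmup_gate`, and then call `dassdp_update` on the deeper layers after every standard `optimizer.step()`.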

Results & Findings

| Dataset | Baseline (surrogate‑grad) | + DA‑SSDP | Δ Accuracy |
| --- | --- | --- | --- |
| CIFAR‑10 | 92.1 % | 92.52 % | +0.42 % |
| CIFAR‑100 | 71.3 % | 72.29 % | +0.99 % |
| CIFAR10‑DVS | 73.5 % | 73.6 % | +0.1 % |
| ImageNet‑1K | 71.8 % | 72.53 % | +0.73 % |

  • Memory footprint – Only binary spike flags and first‑spike latencies are stored, so the extra memory is negligible compared to full spike‑time logs (a rough, hypothetical back‑of‑envelope follows this list).
  • Compute overhead – Adding the synchrony calculation and the post‑backprop weight tweak adds ~5‑10 % extra FLOPs, which the authors deem acceptable for the accuracy boost.
  • Ablation – When the gate is forced to 1 (i.e., synchrony unrelated to loss), performance does not drop, confirming that the rule behaves as a benign regularizer.
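
As a rough back‑of‑envelope for the memory claim (the sizes below are assumptions for illustration, not figures from the paper), compare a dense per‑timestep spike log with the compact flag + latency pair; the saving grows roughly linearly with the number of timesteps T:

```python
# Hypothetical sizes: T timesteps, batch B, N neurons in a layer
T, B, N = 100, 128, 4096
full_trace   = T * B * N * 1        # bytes for a dense 0/1 spike record (one byte per timestep)
compact_pair = B * N * (1 + 4)      # bytes for a 1-byte flag + float32 first-spike latency
print(full_trace / compact_pair)    # -> 20.0x smaller under these assumed sizes
```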

Practical Implications

  • Plug‑and‑play regularizer for SNNs – Developers can adopt DA‑SSDP to squeeze a few extra points of accuracy from existing SNN models without redesigning the architecture or training loop.
  • Low‑memory training on edge devices – Because the method avoids storing dense spike‑time histories, it is suitable for on‑device learning where RAM is scarce (e.g., neuromorphic chips, low‑power IoT sensors).
  • Biologically plausible credit assignment – The dopamine‑gated synchrony signal mirrors neuromodulatory learning in the brain, opening doors for more interpretable or neuromorphic‑friendly training pipelines.
  • Potential for continual learning – The gate’s ability to “turn off” the synchrony influence when it’s unhelpful suggests a natural way to adapt plasticity during task shifts, a useful property for lifelong learning systems.

Limitations & Future Work

  • Modest gains – The reported improvements, while consistent, are relatively small (sub‑1 % on most benchmarks). For applications where every fraction of a percent matters, the extra compute may not be justified.
  • Warm‑up sensitivity – The initial gating phase relies on a short correlation estimate; if the loss landscape changes dramatically later in training, the gate may become sub‑optimal.
  • Scope of evaluation – Experiments focus on image classification; it remains to be seen how DA‑SSDP performs on other SNN tasks such as event‑based reinforcement learning, speech, or robotics control.
  • Hardware acceleration – While memory‑friendly, the synchrony computation and per‑batch gating are not yet mapped to existing neuromorphic hardware primitives; future work could explore ASIC/FPGA implementations.

Overall, DA‑SSDP offers a practical, biologically inspired tweak that can be dropped into current SNN training pipelines to gain a modest accuracy bump with minimal engineering effort.

Authors

  • Yuchen Tian
  • Samuel Tensingh
  • Jason Eshraghian
  • Nhan Duy Truong
  • Omid Kavehei

Paper Information

  • arXiv ID: 2512.07194v1
  • Categories: cs.NE
  • Published: December 8, 2025
  • PDF: Download PDF