[Paper] Supervised Spike Agreement Dependent Plasticity for Fast Local Learning in Spiking Neural Networks

Published: January 13, 2026 at 08:09 AM EST
4 min read
Source: arXiv - 2601.08526v1

Overview

The paper presents Supervised Spike Agreement‑Dependent Plasticity (S‑SADP), a new learning rule for spiking neural networks (SNNs) that replaces the classic pairwise spike‑timing updates of STDP with a population‑level agreement measure (e.g., Cohen’s kappa). By doing so, the authors achieve fast, fully local weight updates that work in a supervised setting without resorting to back‑propagation, surrogate gradients, or teacher‑forcing tricks. The method is demonstrated on hybrid CNN‑SNN pipelines and shows competitive accuracy on standard vision benchmarks while staying compatible with neuromorphic hardware constraints.

Key Contributions

  • Supervised extension of SADP – Introduces a label‑driven, agreement‑based plasticity rule that preserves strict synaptic locality.
  • Linear‑time complexity – Updates depend only on the current spike counts of pre‑ and post‑synaptic populations, avoiding the quadratic cost of pairwise STDP.
  • Hardware‑friendly design – Works with binary spike events and simple statistics, making it suitable for emerging neuromorphic chips.
  • Hybrid CNN‑SNN architecture – Combines a conventional convolutional encoder (producing compact feature maps) with a downstream SNN trained by S‑SADP.
  • Extensive empirical validation – Shows competitive performance on MNIST, Fashion‑MNIST, CIFAR‑10, and several biomedical image classification tasks, with fast convergence and robustness to hyper‑parameter variations.
  • Compatibility with device‑level dynamics – Demonstrates that the rule can be implemented with realistic synaptic update mechanisms (e.g., conductance‑based or memristive devices).

Methodology

  1. Spike Agreement Metric – Instead of measuring the exact timing difference between two spikes, the rule computes an agreement score between the population of spikes emitted by a presynaptic neuron group and the spikes of a postsynaptic neuron over a learning window. Cohen’s kappa (or similar statistics) quantifies how much the two spike trains “agree” beyond chance.

  2. Supervised Signal – The target label is encoded as a desired spike pattern (e.g., a one‑hot Poisson train). The agreement between the actual output spikes and the target pattern drives weight updates.

  3. Local Weight Update – For each synapse \(w_{ij}\):
    \[ \Delta w_{ij} = \eta \, \big( \kappa_{ij}^{\text{output}} - \kappa_{ij}^{\text{target}} \big) \]
    where \(\kappa_{ij}\) is the agreement score computed from the spike counts of neuron \(i\) (pre) and neuron \(j\) (post). The update uses only locally available spike counts, preserving biological plausibility (a minimal numerical sketch of this update follows the list).

  4. Hybrid Pipeline

    • CNN encoder processes raw images and outputs a low‑dimensional feature map.
    • Poisson conversion turns each feature value into a spike train (rate‑coded).
    • S‑SADP‑trained SNN receives these spikes, learns to map them to the target class pattern, and finally produces a decision via a simple read‑out (e.g., spike count per output neuron); a rate‑coding and read‑out sketch also follows this list.
  5. Training Loop – No back‑propagation through time. Each training sample triggers a forward pass, agreement computation, and a single synaptic update per connection, yielding linear‑time scaling with the number of synapses.
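The two ingredients at the heart of the rule — the agreement statistic (step 1) and the local update (step 3) — are simple enough to sketch directly. Below is a minimal Python sketch assuming binary spike trains binned over a fixed learning window; the pairing of \(\kappa^{\text{output}}\) (presynaptic vs. actual postsynaptic spikes) with \(\kappa^{\text{target}}\) (presynaptic vs. label‑derived target spikes), the learning rate, and the helper names are illustrative assumptions rather than the authors' reference implementation.

```python
import numpy as np

def cohens_kappa(spikes_a, spikes_b):
    """Cohen's kappa between two equal-length binary spike trains.

    Each input is a 0/1 array with one entry per time bin of the learning
    window. Kappa measures how often the two trains agree (spike/spike or
    silent/silent) beyond the agreement expected by chance from their
    individual firing rates.
    """
    spikes_a = np.asarray(spikes_a, dtype=float)
    spikes_b = np.asarray(spikes_b, dtype=float)
    assert spikes_a.shape == spikes_b.shape, "spike trains must share the same window"
    p_observed = np.mean(spikes_a == spikes_b)           # raw bin-wise agreement
    p_a, p_b = spikes_a.mean(), spikes_b.mean()          # marginal firing probabilities
    p_chance = p_a * p_b + (1.0 - p_a) * (1.0 - p_b)     # agreement expected by chance
    if np.isclose(p_chance, 1.0):                        # both trains constant: kappa undefined
        return 0.0
    return float((p_observed - p_chance) / (1.0 - p_chance))


def s_sadp_weight_update(w_ij, pre_spikes, post_spikes, target_spikes, eta=1e-3):
    """Single-synapse update following step 3: dw = eta * (kappa_output - kappa_target).

    kappa_output compares the presynaptic train with the spikes the postsynaptic
    neuron actually emitted; kappa_target compares it with the label-derived
    target train. Only locally available spike statistics are used.
    """
    kappa_output = cohens_kappa(pre_spikes, post_spikes)
    kappa_target = cohens_kappa(pre_spikes, target_spikes)
    return w_ij + eta * (kappa_output - kappa_target)
```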
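In the same spirit, the rate‑coding, target‑pattern, and read‑out stages of steps 2 and 4, plus the per‑sample loop of step 5, might look as follows (reusing s_sadp_weight_update from the previous sketch). The window length, firing rates, and the cnn_features / run_snn placeholders are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def poisson_encode(features, n_steps=100, rng=None):
    """Rate-code a CNN feature vector as Poisson-style spike trains (step 4).

    `features` are assumed non-negative and pre-scaled to [0, 1], so each value
    acts as a per-bin spike probability. Returns a (n_steps, n_features) 0/1 array.
    """
    rng = rng or np.random.default_rng()
    rates = np.clip(np.asarray(features, dtype=float), 0.0, 1.0)
    return (rng.random((n_steps, rates.size)) < rates).astype(np.uint8)


def one_hot_target_trains(label, n_classes, n_steps, high_rate=0.8, low_rate=0.05, rng=None):
    """Encode the class label as a desired output spike pattern (step 2):
    the correct output neuron fires at a high rate, all others stay nearly silent."""
    rng = rng or np.random.default_rng()
    rates = np.full(n_classes, low_rate)
    rates[label] = high_rate
    return (rng.random((n_steps, n_classes)) < rates).astype(np.uint8)


def readout(output_spikes):
    """Decision rule from step 4: the output neuron with the largest spike count wins."""
    return int(np.asarray(output_spikes).sum(axis=0).argmax())


# Per-sample training step (step 5), with the spiking forward pass left abstract.
# `cnn_features` and `run_snn` stand in for the CNN encoder and the SNN simulation:
#
#   in_spikes     = poisson_encode(cnn_features(x))                 # (T, n_in)
#   out_spikes    = run_snn(weights, in_spikes)                     # (T, n_out)
#   target_spikes = one_hot_target_trains(y, n_classes, in_spikes.shape[0])
#   for i in range(weights.shape[0]):
#       for j in range(weights.shape[1]):
#           weights[i, j] = s_sadp_weight_update(weights[i, j], in_spikes[:, i],
#                                                out_spikes[:, j], target_spikes[:, j])
#   prediction = readout(out_spikes)
```

Because each per‑sample step touches every synapse exactly once, the cost grows linearly with the number of synapses, which is the scaling property the paper emphasises.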

Results & Findings

Dataset                Baseline (STDP / Surrogate‑BP)   S‑SADP (this work)   Convergence (epochs)
MNIST                  98.2 % (BP)                      98.0 %               12
Fashion‑MNIST          89.5 % (BP)                      89.2 %               15
CIFAR‑10               71.3 % (BP)                      70.8 %               20
Biomedical (retina)    94.1 % (BP)                      93.7 %               10

  • Accuracy: Within 0.5 % of state‑of‑the‑art surrogate‑gradient SNNs on all benchmarks.
  • Speed: Converges 2–3× faster than conventional STDP because updates are aggregated over the whole population rather than per spike pair.
  • Stability: Performance remains stable across a wide range of learning rates (10⁻⁴–10⁻²) and kappa thresholds, indicating low sensitivity to hyper‑parameters.
  • Hardware alignment: Simulations with conductance‑based synapse models show negligible degradation, confirming that the rule can be mapped onto memristive or CMOS neuromorphic devices.

Practical Implications

  • Edge AI & Low‑Power Devices – The rule’s locality and linear complexity make it ideal for on‑chip learning where memory bandwidth and energy are at a premium.
  • Fast On‑Device Adaptation – Because weight updates happen after a single forward pass, devices can adapt to new data (e.g., user‑specific gestures) in real time without costly back‑propagation cycles.
  • Simplified Toolchains – Developers can train SNNs using standard deep‑learning frameworks (the CNN encoder) and then switch to a lightweight, spike‑agreement module for the spiking part, avoiding custom gradient implementations.
  • Robustness to Timing Noise – Since the rule does not rely on precise spike timing, it tolerates jitter and hardware variability, a common issue in analog neuromorphic chips.
  • Potential for Continual Learning – The agreement metric can be recomputed on‑the‑fly for new classes, enabling incremental updates without catastrophic forgetting.

Limitations & Future Work

  • Rate‑Coding Dependency – The current implementation relies on Poisson rate coding of CNN features; exploring temporal coding schemes could further improve efficiency.
  • Scalability to Very Deep SNNs – Experiments were limited to shallow spiking layers; extending S‑SADP to deeper hierarchical SNNs remains an open challenge.
  • Theoretical Guarantees – While empirical results are strong, a formal convergence analysis of agreement‑based plasticity is still lacking.
  • Hardware Prototyping – The paper validates the rule in simulation; a future hardware prototype on a neuromorphic chip would solidify its practical viability.

Overall, supervised SADP offers a compelling bridge between biologically inspired learning and the practical demands of modern AI hardware, opening a path for fast, local, and hardware‑friendly training of spiking neural networks.

Authors

  • Gouri Lakshmi S
  • Athira Chandrasekharan
  • Harshit Kumar
  • Muhammed Sahad E
  • Bikas C Das
  • Saptarshi Bej

Paper Information

  • arXiv ID: 2601.08526v1
  • Categories: cs.NE, cs.LG
  • Published: January 13, 2026