[Paper] Sleep-Based Homeostatic Regularization for Stabilizing Spike-Timing-Dependent Plasticity in Recurrent Spiking Neural Networks

Published: January 13, 2026
Source: arXiv - 2601.08447v1

Overview

The paper introduces a sleep‑inspired regularization for recurrent spiking neural networks (SNNs) that learn through spike‑timing‑dependent plasticity (STDP). By interleaving short “offline” periods—analogous to biological sleep—where synaptic weights decay toward a homeostatic baseline, the authors show that catastrophic weight blow‑up and forgetting can be dramatically reduced, leading to more stable learning on classic MNIST‑style tasks.

Key Contributions

  • Homeostatic sleep phase: A neuromorphic regularizer that mimics synaptic down‑scaling during offline periods, implemented as stochastic decay toward a target weight distribution.
  • Empirical validation: Demonstrates that 10‑20 % sleep time (relative to total training) stabilizes STDP‑driven recurrent SNNs on several MNIST‑derived benchmarks without any task‑specific hyper‑parameter tuning.
  • Contrast with gradient‑based SNNs: Shows that the same sleep protocol does not improve surrogate‑gradient SNNs (SG‑SNNs), highlighting a fundamental difference between local Hebbian learning and global gradient descent.
  • Biologically plausible memory consolidation: Uses spontaneous activity during the sleep phase to replay and reinforce learned patterns, echoing theories of memory replay in the brain.

Methodology

  1. Base model: A recurrent SNN trained with classic pair‑based STDP. Neurons emit binary spikes; synapses update based on the relative timing of pre‑ and post‑synaptic spikes.
  2. Sleep‑wake cycle (a minimal code sketch follows this list):
    • Wake: Normal feed‑forward input from the training dataset; STDP updates are applied continuously.
    • Sleep: External inputs are silenced. Synaptic weights are multiplied by a decay factor β ∈ (0,1) and nudged toward a predefined homeostatic mean μ using a small stochastic term.
    • Spontaneous activity: Random Poisson spikes are injected to generate internal dynamics, allowing the network to “replay” patterns and consolidate memories while the decay operates.
  3. Training schedule: The authors sweep sleep duration as a percentage of total training steps (0 % → 30 %). The optimal range (≈10‑20 %) is identified empirically.
  4. Baselines: Comparisons are made against (i) the same network without sleep, and (ii) a surrogate‑gradient SNN trained with back‑propagation through time (BPTT).
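
Below is a minimal sketch of the wake/sleep cycle just described, written in Python with NumPy. The pair‑based STDP trace formulation, the exact form of the homeostatic update (w ← βw + (1 − β)μ + noise), the 15 % duty cycle, and all parameter values are illustrative assumptions for the sketch rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                              # number of recurrent neurons (illustrative)
W = rng.uniform(0.0, 1.0, (N, N))    # recurrent weights, bounded in [0, 1]

# Pair-based STDP parameters (illustrative values)
A_plus, A_minus = 0.01, 0.012
tau_trace = 20.0                     # ms, decay constant of spike traces
dt = 1.0                             # ms per simulation step
x_pre = np.zeros(N)                  # pre-synaptic traces
x_post = np.zeros(N)                 # post-synaptic traces

# Homeostatic sleep parameters (illustrative values)
beta = 0.99                          # multiplicative decay factor per sleep step
mu = 0.5                             # homeostatic target mean weight
sigma = 0.001                        # scale of the small stochastic term


def wake_step(spikes_pre, spikes_post):
    """One wake step: decay the traces, then apply pair-based STDP."""
    global W, x_pre, x_post
    x_pre += -x_pre * dt / tau_trace + spikes_pre
    x_post += -x_post * dt / tau_trace + spikes_post
    # Potentiate when a post spike follows recent pre activity,
    # depress when a pre spike follows recent post activity.
    W += A_plus * np.outer(spikes_post, x_pre) \
       - A_minus * np.outer(x_post, spikes_pre)
    np.clip(W, 0.0, 1.0, out=W)


def sleep_step():
    """One sleep step: external input silenced, weights relax toward mu."""
    global W
    # Assumed form of the homeostatic update: multiplicative decay plus a
    # pull toward the target mean and a small stochastic perturbation.
    W[:] = beta * W + (1.0 - beta) * mu + sigma * rng.standard_normal(W.shape)
    np.clip(W, 0.0, 1.0, out=W)
    # Spontaneous Poisson spikes would be injected here to drive replay;
    # omitted in this sketch.


# 15 % sleep duty cycle: 85 wake steps followed by 15 sleep steps.
for step in range(10_000):
    if step % 100 < 85:
        spikes = (rng.random(N) < 0.05).astype(float)   # stand-in for data-driven spikes
        wake_step(spikes, spikes)
    else:
        sleep_step()
```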

Results & Findings

| Condition | Test Accuracy (MNIST‑like) | Weight Saturation | Forgetting |
| --- | --- | --- | --- |
| STDP‑SNN, no sleep | 92.1 % | High (many weights → 0 or 1) | Significant drop after 50 k steps |
| STDP‑SNN, 15 % sleep | 94.8 % | Low (weights stay near μ) | Stable across full training |
| SG‑SNN, no sleep | 97.3 % | Low (gradient clipping) | Stable |
| SG‑SNN, 15 % sleep | 97.2 % | No change | No measurable benefit |

  • Stability: Sleep phases keep the weight distribution centered around the homeostatic baseline, preventing runaway potentiation/depression.
  • Performance boost: A modest but consistent accuracy gain (≈2–3 %) over the non‑sleep baseline for STDP‑SNNs.
  • No effect on SG‑SNNs: Gradient‑based training already includes regularization mechanisms (e.g., weight decay), so the added sleep phase does not further improve performance.

Practical Implications

  • Neuromorphic hardware: Implementing a low‑overhead “sleep” routine (e.g., a brief period where inputs are gated off and a simple decay kernel runs; a fixed‑point sketch of such a kernel follows this list) could dramatically improve the reliability of on‑chip STDP learning, extending device lifetime and reducing the need for hand‑tuned weight‑clipping.
  • Edge AI & low‑power devices: For battery‑constrained sensors that rely on local, unsupervised adaptation, a scheduled sleep interval (perhaps aligned with actual power‑saving sleep modes) offers a biologically inspired way to keep models from diverging.
  • Hybrid learning systems: The clear distinction between STDP‑friendly and gradient‑friendly regularization suggests designers should choose one paradigm per layer—or develop new interfaces that translate homeostatic decay into gradient‑compatible terms.
  • Continual learning: The sleep‑based consolidation mirrors replay mechanisms used in continual‑learning research, hinting at a lightweight alternative to explicit memory buffers for SNNs.
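
As one illustration of how lightweight the sleep routine could be on constrained hardware, here is a hypothetical fixed‑point version of the decay kernel operating on 8‑bit weights. The bit‑width, the shift‑based decay factor, and the dither scheme are assumptions made for this sketch, not details from the paper.

```python
import numpy as np

MU_Q = 128   # homeostatic mean in the 8-bit weight range [0, 255] (assumed)
SHIFT = 6    # per-step pull of (mu - w) / 2**6, i.e. beta ~= 0.984 (assumed)

def sleep_decay_q8(w_q8, rng):
    """One sleep step on uint8 weights: pull toward MU_Q plus a ±1 dither.

    Hypothetical fixed-point form of the homeostatic decay; on-chip the
    division by 2**SHIFT would typically be an arithmetic shift.
    """
    w = w_q8.astype(np.int16)
    delta = (MU_Q - w) // (1 << SHIFT)                            # deterministic pull toward MU_Q
    dither = rng.integers(-1, 2, size=w.shape, dtype=np.int16)    # small stochastic term
    return np.clip(w + delta + dither, 0, 255).astype(np.uint8)
```

Run once per sleep tick over the weight memory, with inputs gated off, a kernel of this form needs no multipliers and only a handful of integer operations per synapse.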

Limitations & Future Work

  • Benchmark scope: Experiments are limited to MNIST‑style image classification; more complex temporal tasks (e.g., speech or event‑based vision) remain untested.
  • Sleep schedule heuristics: The optimal sleep proportion is identified empirically; an adaptive schedule that reacts to weight statistics could be more robust (one possible trigger is sketched after this list).
  • Hardware validation: The study is simulation‑only; real‑world neuromorphic chips may exhibit additional constraints (e.g., quantization noise) that affect the decay dynamics.
  • Integration with gradients: While the sleep phase harms SG‑SNNs, the authors propose exploring joint regularizers that blend homeostatic decay with gradient‑based optimizers, a promising direction for hybrid learning architectures.
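
One way the adaptive schedule suggested above could look: trigger a sleep burst only when the weight distribution shows signs of saturation. The saturation measure and threshold values below are hypothetical, not taken from the paper.

```python
import numpy as np

def should_sleep(W, saturation_frac=0.2, margin=0.05):
    """Return True when too many weights sit near the bounds of [0, 1].

    Hypothetical trigger: a weight counts as 'saturated' if it lies within
    `margin` of either bound; both thresholds are illustrative.
    """
    saturated = np.mean((W < margin) | (W > 1.0 - margin))
    return saturated > saturation_frac
```

Checked every few hundred wake steps, a trigger like this would replace the fixed 10‑20 % schedule with sleep bursts that run only when the weight statistics call for them.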

Authors

  • Andreas Massey
  • Aliaksandr Hubin
  • Stefano Nichele
  • Solve Sæbø

Paper Information

  • arXiv ID: 2601.08447v1
  • Categories: cs.NE, stat.ML
  • Published: January 13, 2026