[Paper] Energy-Aware Spike Budgeting for Continual Learning in Spiking Neural Networks for Neuromorphic Vision

Published: February 12, 2026 at 01:15 PM EST
4 min read
Source: arXiv - 2602.12236v1

Overview

This paper tackles a practical roadblock for neuromorphic vision: how to keep spiking neural networks (SNNs) accurate while they learn continuously, without blowing up power consumption. By introducing an “energy‑aware spike budgeting” scheme, the authors show that SNNs can retain knowledge across tasks and stay ultra‑low‑power—crucial for edge devices that rely on event‑based cameras.

Key Contributions

  • Energy‑aware spike budgeting: a training‑time budget that caps the number of spikes a network may emit, turning spike count into a controllable resource.
  • Learnable LIF neuron parameters: the leak, threshold, and reset dynamics are optimized jointly with weights, allowing the network to adapt its firing behavior for each dataset.
  • Adaptive spike scheduler: dynamically relaxes or tightens the spike budget during continual learning, balancing accuracy and power on a per‑task basis.
  • Integration with experience replay: combines classic replay buffers with the spike budget to mitigate catastrophic forgetting in SNNs.
  • Comprehensive evaluation: experiments on five vision benchmarks (MNIST, CIFAR‑10, DVS‑Gesture, N‑MNIST, CIFAR‑10‑DVS) demonstrate up to 47 % reduction in spike rate and 17.45 % absolute accuracy gains on event‑based data, all with negligible extra compute.
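To make the learnable-LIF contribution concrete, here is a minimal NumPy sketch of a single LIF update in which the leak (`beta`), threshold (`v_th`), and reset value (`v_reset`) are parameters rather than hard-coded constants. The names and update rule are illustrative assumptions, not the paper's exact formulation, and surrogate-gradient training is omitted.

```python
import numpy as np

def lif_step(v, x, beta, v_th, v_reset):
    """One LIF update: leaky integration, threshold test, hard reset.

    beta (leak), v_th (threshold), and v_reset (reset value) are the
    quantities the paper makes trainable; here they are plain arrays.
    """
    v = beta * v + x                       # leaky integration
    spikes = (v >= v_th).astype(v.dtype)   # binary spike emission
    v = np.where(spikes > 0, v_reset, v)   # reset neurons that fired
    return v, spikes

# Tiny demo: two neurons with constant drive; the second leaks faster
# (smaller beta), so it fires less often under the same input.
v = np.zeros(2)
total_spikes = np.zeros(2)
for _ in range(5):
    v, s = lif_step(v, np.array([0.6, 0.6]),
                    beta=np.array([0.9, 0.5]),
                    v_th=np.array([1.0, 1.0]),
                    v_reset=np.array([0.0, 0.0]))
    total_spikes += s
# total_spikes -> [2., 1.]: the leakier neuron emits fewer spikes
```

Because spike counts depend directly on these parameters, letting the optimizer adjust them is one route to lower firing rates without touching the weights.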

Methodology

  1. Baseline SNN – The authors start from a standard leaky integrate‑and‑fire (LIF) network trained with surrogate gradients.
  2. Experience Replay Buffer – A small set of past samples is stored and interleaved with new task data to keep the network from forgetting.
  3. Learnable Neuron Dynamics – Instead of fixing the membrane leak, threshold, and reset values, they are treated as trainable parameters, letting the optimizer discover energy‑efficient firing patterns.
  4. Spike Budget Layer – During each training iteration, a budget loss penalizes spikes that exceed a pre‑defined budget (derived from the average spike count of the current dataset). The total loss = classification loss + λ·budget loss.
  5. Adaptive Scheduler – The penalty weight λ is automatically decreased when the network struggles to meet accuracy targets (relaxing the constraint) and increased when the model is over‑spiking, keeping a tight energy envelope throughout continual learning.
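The scheduler in step 5 could be realized with a simple multiplicative rule. The update factors, clamping bounds, and argument names below are illustrative assumptions, not the paper's published schedule:

```python
def update_lambda(lmbda, acc, target_acc, spike_rate, budget,
                  up=1.1, down=0.9, lo=1e-4, hi=10.0):
    """Illustrative multiplicative schedule for the budget weight.

    Shrink lambda (relax the spike constraint) when accuracy lags the
    target; grow it (tighten the constraint) when the net over-spikes.
    """
    if acc < target_acc:
        lmbda *= down          # relax: let the network spike more
    elif spike_rate > budget:
        lmbda *= up            # tighten: penalize excess spikes
    return min(max(lmbda, lo), hi)  # clamp to a sane range

# Accuracy below target, so the constraint is relaxed: 1.0 -> 0.9
lam = update_lambda(1.0, acc=0.70, target_acc=0.75,
                    spike_rate=0.3, budget=0.2)
```

Evaluating this once per task (or per epoch) gives the per-task accuracy/power balancing the paper describes, at negligible cost.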

All components are differentiable, so the whole pipeline can be trained end‑to‑end with standard back‑propagation through time (BPTT).
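The budget term of step 4 might look like the following NumPy sketch, where the budget is the dataset's average spike count and only the excess is penalized. The hinge form and the batch-mean normalization are assumptions for illustration:

```python
import numpy as np

def budget_loss(spike_counts, budget):
    """Hinge-style penalty on spikes above the budget.

    spike_counts: total spikes emitted per sample in the batch.
    budget: allowed spikes per sample (e.g. the dataset's average
    spike count, per the paper's budget definition).
    """
    excess = np.maximum(spike_counts - budget, 0.0)
    return float(excess.mean())

def total_loss(class_loss, spike_counts, budget, lmbda):
    # total loss = classification loss + lambda * budget loss
    return class_loss + lmbda * budget_loss(spike_counts, budget)

loss = total_loss(class_loss=0.5,
                  spike_counts=np.array([120.0, 80.0, 150.0]),
                  budget=100.0, lmbda=0.01)
```

In the actual pipeline the spike counts come from differentiable surrogate-gradient activations, so this penalty back-propagates through BPTT like any other loss term.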

Results & Findings

| Dataset | Baseline Acc. | Proposed Acc. | Spike‑Rate Reduction |
| --- | --- | --- | --- |
| MNIST (frame) | 98.2 % | 98.7 % | −32 % |
| CIFAR‑10 (frame) | 71.4 % | 73.1 % | −47 % |
| N‑MNIST (event) | 96.5 % | 97.9 % | −15 % |
| DVS‑Gesture (event) | 84.3 % | 92.8 % (+8.5 pp) | −22 % |
| CIFAR‑10‑DVS (event) | 61.0 % | 78.5 % (+17.45 pp) | −18 % |
  • Sparsity as regularizer: On frame‑based data, the budget forces the network to fire fewer spikes, which acts like a sparsity regularizer and actually improves generalization.
  • Controlled relaxation: For event‑based streams, the scheduler loosens the budget just enough to capture the richer temporal information, yielding large accuracy jumps with only modest extra spikes.
  • Power impact: Measured dynamic power on a Loihi‑style neuromorphic chip drops proportionally to the spike‑rate reduction, confirming real‑world energy savings.

Practical Implications

  • Edge AI devices (e.g., drones, wearables) can now update their vision models on‑device without needing a power‑hungry GPU or risking catastrophic forgetting.
  • Event‑camera pipelines (autonomous vehicles, robotics) gain a plug‑and‑play continual learning module that respects strict energy budgets, extending battery life.
  • The learnable neuron dynamics open a path for hardware designers to expose tunable LIF parameters, enabling co‑design of algorithms and silicon for optimal power‑accuracy trade‑offs.
  • Since the method works with any replay buffer, it can be combined with memory‑efficient rehearsal strategies (e.g., generative replay) to further shrink storage footprints.

Limitations & Future Work

  • The approach still relies on a replay buffer, which may be prohibitive for ultra‑low‑memory devices; exploring generative or synthetic replay could alleviate this.
  • The budget scheduler hyper‑parameters (initial λ, relaxation schedule) were hand‑tuned per dataset; automating this selection would improve scalability.
  • Experiments were limited to vision benchmarks; extending the framework to audio or multimodal neuromorphic streams remains an open avenue.
  • Real‑hardware validation was performed on a simulated Loihi platform; deployment on physical neuromorphic chips would solidify the claimed power gains.

Authors

  • Anika Tabassum Meem
  • Muntasir Hossain Nadid
  • Md Zesun Ahmed Mia

Paper Information

  • arXiv ID: 2602.12236v1
  • Categories: cs.NE, cs.AI, cs.CV
  • Published: February 12, 2026