[Paper] Improving Low-Latency Learning Performance in Spiking Neural Networks via a Change-Perceptive Dendrite-Soma-Axon Neuron
Source: arXiv - 2512.16259v1
Overview
The paper introduces a new spiking neuron design—the Change‑Perceptive Dendrite‑Soma‑Axon (CP‑DSA) neuron—that tackles two long‑standing bottlenecks in Spiking Neural Networks (SNNs): the loss of information caused by hard resets and the oversimplified neuron models that ignore dendritic processing. By adding a soft‑reset mechanism and a “change‑perceptive” signal that focuses on the difference between successive membrane potentials, the authors achieve markedly lower latency (fewer time steps) while preserving the energy‑efficiency advantages of SNNs.
Key Contributions
- Soft‑reset Dendrite‑Soma‑Axon (DSA) neuron: Extends the classic leaky‑integrate‑and‑fire (LIF) model with learnable dendritic, somatic, and axonal parameters, widening the neuron’s expressive capacity.
- Change‑Perceptive (CP) mechanism: Introduces a lightweight operation that feeds the temporal change of membrane potential into the neuron, enabling accurate inference with very short simulation windows.
- Theoretical analysis: Provides proofs that the CP‑DSA dynamics preserve information flow and that the added parameters are identifiable and beneficial for learning.
- Comprehensive empirical evaluation: Benchmarks CP‑DSA on several vision and neuromorphic datasets (e.g., CIFAR‑10, CIFAR‑100, DVS‑Gesture), showing state‑of‑the‑art accuracy with 2‑4× fewer time steps compared to prior SNNs.
- Open‑source implementation: The authors release code and pretrained models, facilitating reproducibility and rapid adoption by the community.
Methodology
Neuron Architecture
- Dendrite stage: Applies a learnable linear transformation to incoming spikes, mimicking synaptic weighting.
- Soma stage: Integrates the dendritic current into a membrane potential, but instead of a hard reset (setting the potential to zero after a spike), it uses a soft reset that subtracts the threshold value, preserving residual voltage.
- Axon stage: Generates the output spike using a surrogate gradient‑based spiking function, allowing back‑propagation through time (BPTT).
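As a concrete illustration, the snippet below sketches the three stages in PyTorch with a subtractive (soft) reset and a piecewise-linear surrogate gradient. The class and parameter names (DSANeuron, dendrite_gain, soma_leak, axon_gain) are illustrative assumptions rather than the authors' released code, and the exact parameterization in the paper may differ.

```python
import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a piecewise-linear (triangular) surrogate gradient."""

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Gradient is 1 - |x| inside a unit window around the threshold, 0 outside.
        surrogate = torch.clamp(1.0 - x.abs(), min=0.0)
        return grad_out * surrogate


class DSANeuron(nn.Module):
    """Dendrite-soma-axon neuron with a soft (subtractive) reset. Illustrative only."""

    def __init__(self, features, threshold=1.0, leak=0.5):
        super().__init__()
        self.threshold = threshold
        # Assumed learnable per-feature scalings for the three stages.
        self.dendrite_gain = nn.Parameter(torch.ones(features))
        self.soma_leak = nn.Parameter(torch.full((features,), leak))
        self.axon_gain = nn.Parameter(torch.ones(features))

    def forward(self, x_t, v_prev):
        # Dendrite: learnable weighting of the incoming current.
        i_t = self.dendrite_gain * x_t
        # Soma: leaky integration of the dendritic current.
        v_t = self.soma_leak * v_prev + i_t
        # Axon: spike via the surrogate-gradient Heaviside, then scale the output.
        s_t = SpikeFn.apply(v_t - self.threshold)
        out = self.axon_gain * s_t
        # Soft reset: subtract the threshold instead of zeroing the potential.
        v_t = v_t - self.threshold * s_t
        return out, v_t
```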
Change‑Perceptive (CP) Signal
- At each simulation step t, the neuron computes ΔVₜ = Vₜ – Vₜ₋₁ (the change in membrane potential).
- ΔVₜ is fed back as an additional input to the dendritic stage, effectively telling the neuron “how much the state moved since the last tick.”
- This simple difference operation is cheap (one subtraction) but dramatically improves the network’s ability to detect rapid patterns, which is crucial for low‑latency inference.
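A minimal sketch of how the CP signal could be wired in, building on the DSANeuron sketch above: the previous step's potential change is scaled by an assumed learnable weight cp_gain and added to the dendritic input. This is one interpretation of the mechanism described above, not the authors' exact formulation.

```python
import torch
import torch.nn as nn


class CPDSANeuron(DSANeuron):
    """DSA neuron with a change-perceptive feedback term (illustrative)."""

    def __init__(self, features, **kwargs):
        super().__init__(features, **kwargs)
        # Assumed learnable weight on the membrane-potential change.
        self.cp_gain = nn.Parameter(torch.zeros(features))

    def forward(self, x_t, v_prev, delta_v_prev):
        # Feed the previous step's change in potential back into the dendrite.
        out, v_t = super().forward(x_t + self.cp_gain * delta_v_prev, v_prev)
        # Update the change signal for the next step: dV_t = V_t - V_{t-1}.
        delta_v = v_t - v_prev
        return out, v_t, delta_v
```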
Training Pipeline
- Networks built from CP‑DSA neurons are trained end‑to‑end with surrogate gradients (e.g., the piecewise linear surrogate).
- Standard data augmentations and the Adam optimizer are used; the authors also propose a schedule that gradually increases the simulation length during training to stabilize learning.
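The loop below is a hedged sketch of how such a schedule might look in PyTorch: the simulation length T grows with the epoch index, the spiking readout is averaged over T steps, and gradients flow through the surrogate spike function via BPTT. The schedule values and the reset_state helper are assumptions for illustration, not the paper's published recipe.

```python
import torch


def simulation_length(epoch, start_T=2, max_T=6, grow_every=20):
    """Lengthen the simulation window every `grow_every` epochs (assumed schedule)."""
    return min(max_T, start_T + epoch // grow_every)


def train_one_epoch(model, loader, optimizer, criterion, epoch, device="cuda"):
    T = simulation_length(epoch)
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        model.reset_state()            # assumed helper that clears membrane potentials
        # Accumulate the readout over T time steps (rate-coded output).
        logits = 0.0
        for _ in range(T):
            logits = logits + model(images)
        loss = criterion(logits / T, labels)
        loss.backward()                # BPTT through the surrogate spike function
        optimizer.step()
```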
Evaluation Protocol
- Experiments compare CP‑DSA against baseline LIF SNNs, ANN‑to‑SNN conversion methods, and recent biologically‑inspired neuron models.
- Metrics include classification accuracy, number of time steps (latency), and estimated energy consumption (based on spike counts).
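Since the energy metric is derived from spike counts, a rough estimate can be computed as in the sketch below: total spikes recorded from the spiking layers are multiplied by an assumed per-spike operation cost. The energy constant is a placeholder, not a figure reported by the paper.

```python
import torch

ENERGY_PER_SPIKE_J = 1e-12  # placeholder cost per spike-driven operation, not from the paper


@torch.no_grad()
def estimate_energy(spike_tensors):
    """spike_tensors: list of {0,1} tensors recorded from each spiking layer."""
    total_spikes = sum(s.sum().item() for s in spike_tensors)
    return total_spikes, total_spikes * ENERGY_PER_SPIKE_J
```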
Results & Findings
| Dataset | Time Steps (T) | CP‑DSA Accuracy | Best Prior SNN Accuracy | Δ Accuracy | Spike‑Count Reduction |
|---|---|---|---|---|---|
| CIFAR‑10 | 4 | 93.2 % | 90.5 % (T=8) | +2.7 % | ~45 % |
| CIFAR‑100 | 6 | 71.8 % | 68.1 % (T=12) | +3.7 % | ~48 % |
| DVS‑Gesture | 5 | 98.1 % | 96.4 % (T=10) | +1.7 % | ~50 % |
- Latency reduction: CP‑DSA reaches comparable or better accuracy with half or fewer simulation steps, directly translating to faster inference on neuromorphic hardware.
- Energy efficiency: Because spikes are the primary energy cost, the lower spike count per inference yields an estimated 30‑50 % energy saving over conventional hard‑reset LIF networks.
- Ablation studies show that both the soft reset and the CP mechanism contribute independently; removing either degrades performance by 2‑4 % on average.
- Parameter analysis reveals that the learned dendritic and axonal scaling factors adapt to dataset characteristics, confirming the model’s ability to discover useful internal representations.
Practical Implications
- Edge AI & IoT devices: Developers can deploy SNN-based classifiers that respond after only a few simulation steps, making them well suited to low-power sensors, robotics, and wearables where latency and battery life are critical.
- Neuromorphic hardware compatibility: The CP‑DSA neuron maps cleanly onto existing event‑driven chips (e.g., Intel Loihi, IBM TrueNorth) because it only adds simple arithmetic (subtraction) and a few extra parameters—no exotic operations.
- Rapid prototyping: With the open-source code, engineers can swap a standard LIF layer for CP-DSA in PyTorch or TensorFlow-compatible SNN frameworks, gaining immediate latency benefits without redesigning the whole network (see the sketch after this list).
- Hybrid ANN‑SNN pipelines: The soft‑reset dynamics make it easier to convert pretrained ANNs into SNNs, because the residual membrane potential preserves information that would otherwise be lost during conversion.
- Real‑time event processing: Applications such as gesture recognition, autonomous driving perception, and spike‑based audio processing can now achieve higher accuracy with fewer timesteps, reducing the overall system latency.
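As a hypothetical example of the drop-in replacement mentioned under "Rapid prototyping", the sketch below builds the same small convolutional SNN twice, differing only in the neuron class. It assumes the neuron modules (LIFNeuron, CPDSANeuron) manage their membrane state internally across time steps and broadcast per-channel parameters over the spatial dimensions, as stateful neuron layers do in common SNN frameworks; none of this is tied to the authors' released code.

```python
import torch.nn as nn


def build_snn(neuron_cls, num_classes=10):
    """Small convolutional SNN; only the spiking neuron class changes."""
    return nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1),
        neuron_cls(64),                  # spiking nonlinearity, 64 channels
        nn.AvgPool2d(2),
        nn.Conv2d(64, 128, 3, padding=1),
        neuron_cls(128),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(128, num_classes),
    )


# baseline = build_snn(LIFNeuron)      # standard hard-reset LIF activations
# improved = build_snn(CPDSANeuron)    # same topology, CP-DSA neurons swapped in
```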
Limitations & Future Work
- Hardware‑specific tuning: While the CP‑DSA neuron is hardware‑friendly, optimal performance still depends on the underlying neuromorphic platform’s support for on‑chip subtraction and parameter storage.
- Scalability to very deep networks: The paper evaluates up to 10‑layer SNNs; extending CP‑DSA to very deep architectures (e.g., ResNet‑like SNNs) may require additional regularization to prevent gradient explosion.
- Temporal dynamics beyond classification: The current work focuses on static image and short‑duration event classification. Applying CP‑DSA to longer sequential tasks (e.g., speech or video) remains an open question.
- Theoretical bounds: Although the authors provide convergence arguments, tighter bounds on how the CP mechanism influences learning dynamics would strengthen the theoretical foundation.
Future research directions suggested by the authors include: integrating biologically plausible plasticity rules (e.g., STDP) with CP‑DSA, exploring adaptive timestep schedules that automatically stop inference once confidence is high, and benchmarking on larger neuromorphic datasets such as N‑Caltech101 or real‑world robotics pipelines.
Authors
- Zeyu Huang
- Wei Meng
- Quan Liu
- Kun Chen
- Li Ma
Paper Information
- arXiv ID: 2512.16259v1
- Categories: cs.NE
- Published: December 18, 2025