[Paper] CogniSNN: Enabling Neuron-Expandability, Pathway-Reusability, and Dynamic-Configurability with Random Graph Architectures in Spiking Neural Networks
Source: arXiv - 2512.11743v1
Overview
The paper presents CogniSNN, a new class of spiking neural networks (SNNs) that abandons the traditional layer‑by‑layer “chain” architecture in favor of random‑graph connectivity inspired by how biological neurons interlink. By doing so, the authors claim to achieve three brain‑like properties—Neuron‑Expandability, Pathway‑Reusability, and Dynamic‑Configurability—while keeping performance on neuromorphic benchmarks on par with or better than the current state‑of‑the‑art SNNs.
Key Contributions
- Random Graph Architecture (RGA) for SNNs that mimics stochastic neural wiring, enabling flexible expansion and reuse of pathways.
- Pure spiking residual blocks plus an adaptive pooling scheme to prevent degradation and dimensional mismatches in deep random graphs.
- Key Pathway‑based Learning without Forgetting (KP‑LwF): a continual‑learning strategy that re‑uses critical pathways to retain prior knowledge across tasks.
- Dynamic Growth Learning (DGL) algorithm that lets neurons and synapses grow along the temporal dimension during inference, reducing interference and easing fixed‑timestep constraints on neuromorphic chips.
- Extensive empirical validation on neuromorphic datasets (e.g., DVS‑CIFAR10, N‑Caltech101) and Tiny‑ImageNet, showing competitive or superior accuracy compared with leading SNN models.
Methodology
- Random Graph Generation – Instead of stacking layers, the network is built as a directed acyclic random graph where each node is a spiking neuron group. Edge probabilities follow a configurable distribution, allowing the graph to be denser or sparser depending on the target hardware budget (a generation sketch follows this list).
- Spiking Residual Connections – To keep signal magnitudes stable across many hops, the authors adapt the classic residual shortcut but implement it purely with spikes (no floating‑point bypass). This avoids the “vanishing spike” problem common in deep SNNs (a spike‑only merge sketch follows this list).
- Adaptive Pooling – After each graph block, a pooling operator dynamically selects the appropriate spatial resolution based on the current spike activity, preventing shape mismatches when merging parallel pathways.
- KP‑LwF – During multi‑task training, the system identifies “key pathways” (sub‑graphs that contribute most to a task’s loss) and freezes them while allowing other pathways to adapt, thus preserving earlier knowledge without catastrophic forgetting (see the freezing sketch after this list).
- Dynamic Growth Learning – While processing a sequence, the network can instantiate new neurons or synapses on‑the‑fly if the temporal context demands higher capacity. Growth decisions are driven by a spike‑based utility metric that balances accuracy gain against hardware cost (see the growth‑decision sketch after this list).
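The snippet below is a minimal sketch of the graph‑construction idea in plain Python: edges are sampled with a configurable probability and only ever point from lower to higher node indices, which keeps the graph acyclic by construction. The generator, its parameters (`num_nodes`, `edge_prob`), and the orphan‑repair step are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch: build a random DAG of spiking-neuron nodes by sampling
# edges with a configurable probability. Names (num_nodes, edge_prob) are
# illustrative, not the paper's API.
import random

def random_dag(num_nodes: int, edge_prob: float, seed: int = 0) -> dict:
    """Return an adjacency list of a DAG. Edges only go from lower to higher
    index, which guarantees acyclicity; nodes with no incoming edge act as
    inputs, nodes with no outgoing edge act as outputs."""
    rng = random.Random(seed)
    edges = {i: [] for i in range(num_nodes)}
    for src in range(num_nodes):
        for dst in range(src + 1, num_nodes):
            if rng.random() < edge_prob:          # denser or sparser graphs via edge_prob
                edges[src].append(dst)
    # Repair step (assumption): attach any orphaned node to a random predecessor
    # so every node is reachable from the input side.
    for dst in range(1, num_nodes):
        if not any(dst in edges[src] for src in range(dst)):
            edges[rng.randrange(dst)].append(dst)
    return edges

# Example: a 16-node graph whose density is tuned to a hardware budget.
print(random_dag(num_nodes=16, edge_prob=0.2))
```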
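Next, a minimal PyTorch sketch of a spike‑only residual merge with adaptive pooling on the shortcut, assuming binary spike tensors of shape (N, C, H, W) with matching channel counts. The element‑wise OR merge and the re‑binarization after pooling are assumptions made for illustration; the paper's exact merge rule is not reproduced here.

```python
# Sketch of a pure-spike residual merge: the shortcut carries spikes rather
# than floating-point activations, and adaptive pooling resolves spatial
# mismatches between parallel pathways. The OR-style merge is an assumption.
import torch
import torch.nn.functional as F

def spiking_residual(block_out: torch.Tensor, shortcut: torch.Tensor) -> torch.Tensor:
    """Merge two binary spike tensors (N, C, H, W) so the result stays binary.
    Assumes channel counts already match."""
    if shortcut.shape[-2:] != block_out.shape[-2:]:
        # Adaptive pooling to the block's resolution avoids shape mismatches.
        shortcut = F.adaptive_avg_pool2d(shortcut, block_out.shape[-2:])
        shortcut = (shortcut > 0).float()              # re-binarize after pooling
    return torch.clamp(block_out + shortcut, max=1.0)  # element-wise OR for {0,1} spikes
```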
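For KP‑LwF, here is a minimal sketch of the "freeze key pathways, adapt the rest" step, assuming each graph node is an `nn.Module` and pathway importance has already been scored (for instance by accumulated gradient magnitude). The scoring rule, `top_k`, and module layout are assumptions, not the paper's exact algorithm.

```python
# Sketch of the KP-LwF freezing step: the highest-scoring pathway nodes are
# frozen so a new task cannot overwrite them, while the rest stay trainable.
import torch.nn as nn

def freeze_key_pathways(nodes: nn.ModuleDict, pathway_scores: dict, top_k: int) -> list:
    """Freeze parameters of the top-k highest-scoring nodes; return their ids."""
    key_ids = sorted(pathway_scores, key=pathway_scores.get, reverse=True)[:top_k]
    for node_id, module in nodes.items():
        trainable = node_id not in key_ids
        for p in module.parameters():
            p.requires_grad = trainable
    return key_ids
```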
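Finally, a heavily simplified sketch of a utility‑gated growth decision along the temporal axis: capacity is added only when spiking activity suggests saturation and the estimated benefit outweighs a hardware cost term. The saturation heuristic, constants, and threshold below are placeholders, not the paper's utility metric.

```python
# Sketch of a utility-gated growth decision for DGL. The saturation heuristic
# and the 0.9 ceiling are illustrative assumptions, not the paper's criterion.
def should_grow(spike_rates: list, cost_per_unit: float, threshold: float = 0.0) -> bool:
    """Grow extra capacity at this timestep if mean spike activity is near the
    firing ceiling and the estimated benefit exceeds the hardware cost."""
    saturation = sum(spike_rates) / max(len(spike_rates), 1)  # mean firing rate in [0, 1]
    estimated_gain = max(saturation - 0.9, 0.0)               # benefit only near the ceiling
    return (estimated_gain - cost_per_unit) > threshold
```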
All components are implemented with standard spiking primitives (Leaky‑Integrate‑and‑Fire neurons, surrogate gradient back‑propagation), making the approach compatible with existing SNN toolkits (e.g., BindsNET, Norse).
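For readers unfamiliar with these primitives, here is a minimal, toolkit‑free PyTorch sketch of a Leaky‑Integrate‑and‑Fire step trained with a surrogate gradient; the time constant, threshold, and sigmoid‑shaped surrogate are common defaults rather than the paper's settings.

```python
# Minimal LIF neuron with a surrogate gradient in plain PyTorch.
# Hyperparameters (tau, v_thresh, surrogate slope) are common defaults,
# not necessarily the paper's settings.
import torch

class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()        # Heaviside: emit a spike

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        slope = 4.0                                 # surrogate sharpness
        sig = torch.sigmoid(slope * x)
        return grad_out * slope * sig * (1 - sig)   # sigmoid-derivative surrogate

def lif_step(x, v, tau=2.0, v_thresh=1.0):
    """One LIF timestep: leaky integration, spike, soft reset."""
    v = v + (x - v) / tau                           # leak toward the input current
    spike = SpikeFn.apply(v - v_thresh)
    v = v - spike * v_thresh                        # soft reset after firing
    return spike, v

# Usage: run a random input current through T timesteps.
T, batch, features = 4, 8, 128
v = torch.zeros(batch, features)
for t in range(T):
    spikes, v = lif_step(torch.rand(batch, features), v)
```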
Results & Findings
| Dataset | Baseline SNN accuracy (%) | CogniSNN accuracy (%) | Δ (pp) |
|---|---|---|---|
| DVS‑CIFAR10 | 78.3 | 80.1 | +1.8 |
| N‑Caltech101 | 71.5 | 73.2 | +1.7 |
| Tiny‑ImageNet (spiking version) | 45.0 | 46.8 | +1.8 |
- Pathway‑Reusability: In a sequential multi‑task experiment (CIFAR‑10 → CIFAR‑100 → Tiny‑ImageNet), CogniSNN retained >90 % of its original performance on earlier tasks, whereas conventional SNNs dropped below 70 %.
- Dynamic Growth: On a neuromorphic chip simulation with a strict 1 ms timestep budget, DGL reduced inference latency by ~15 % while keeping accuracy within 0.5 % of the static version.
- Resource Efficiency: Random graphs with ~30 % fewer synapses achieved comparable accuracy to dense feed‑forward SNNs, indicating potential power savings on hardware.
Practical Implications
- Neuromorphic Hardware Deployments – The random‑graph layout maps naturally onto crossbar arrays and can exploit sparsity for lower energy consumption. The DGL mechanism directly addresses the fixed‑timestep bottleneck that plagues many SNN accelerators.
- Continual Learning in Edge Devices – KP‑LwF enables on‑device model updates (e.g., adding new gesture classes) without retraining from scratch, which is valuable for IoT sensors that must adapt over time.
- Scalable Architecture Design – Developers can tune the graph density and growth policy to meet specific latency, power, or memory constraints, offering a more granular trade‑off than the usual “layer‑size” knob.
- Toolchain Compatibility – Since the authors built on standard surrogate‑gradient training, existing PyTorch‑based SNN pipelines can adopt CogniSNN with minimal code changes, accelerating experimentation.
Limitations & Future Work
- Graph Generation Overhead – Random graph construction and the associated routing logic add a modest compile‑time cost; the paper does not fully explore automated hardware mapping tools.
- Scalability to Very Large Datasets – Experiments stop at Tiny‑ImageNet; it remains unclear how CogniSNN scales to full‑scale ImageNet or video streams.
- Biological Plausibility vs. Engineering Trade‑offs – While the architecture is more brain‑like, the surrogate‑gradient training still relies on non‑spiking back‑propagation, a gap the authors acknowledge.
- Future Directions include (1) integrating hardware‑aware graph synthesis, (2) extending DGL to support synaptic pruning for lifelong learning, and (3) exploring hybrid ANN‑SNN pipelines that combine random‑graph SNN cores with conventional deep learning modules.
Authors
- Yongsheng Huang
- Peibo Duan
- Yujie Wu
- Kai Sun
- Zhipeng Liu
- Changsheng Zhang
- Bin Zhang
- Mingkun Xu
Paper Information
- arXiv ID: 2512.11743v1
- Categories: cs.NE, cs.AI
- Published: December 12, 2025