[Paper] Spatial Spiking Neural Networks Enable Efficient and Robust Temporal Computation

Published: December 10, 2025 at 02:01 PM EST
4 min read

Source: arXiv - 2512.10011v1

Overview

The paper proposes Spatial Spiking Neural Networks (SpSNNs), a new way to handle synaptic delays in spiking neural networks by embedding neurons in a physical space. Instead of learning a separate delay for every connection, the network learns each neuron’s coordinates, and delays emerge automatically from the Euclidean distances between them. This reduces the number of trainable parameters dramatically while preserving (and even improving) temporal processing performance.

Key Contributions

  • Spatial embedding of neurons: Introduces a framework where delays are derived from inter‑neuron distances in a low‑dimensional Euclidean space (2‑D/3‑D), eliminating per‑synapse delay parameters.
  • Parameter‑efficiency: Shows up to 18× fewer parameters compared with conventional SNNs that learn unconstrained delays, without sacrificing accuracy.
  • Geometric regularization: Empirically demonstrates that networks confined to 2‑D or 3‑D spaces outperform those with “infinite‑dimensional” delay vectors, suggesting that spatial constraints act as a useful regularizer.
  • Dynamic sparsification: Proposes a sparsity‑aware training regime that can prune up to 90 % of connections while retaining full task performance.
  • Hardware‑friendly design: Argues that the learned spatial layout maps naturally onto neuromorphic chips (e.g., crossbars, mesh networks), enabling low‑latency, low‑energy implementations.
  • Generalizable gradient computation: Derives exact delay gradients using automatic differentiation with custom rules, making the approach compatible with any spiking neuron model or network architecture.

Methodology

  1. Neuron coordinate learning – Each neuron $i$ is assigned a learnable vector $\mathbf{p}_i \in \mathbb{R}^d$ (typically $d = 2$ or $3$).
  2. Delay extraction – The synaptic delay between neuron $i$ and neuron $j$ is computed as
    $$\tau_{ij} = \alpha \, \|\mathbf{p}_i - \mathbf{p}_j\|_2,$$
    where $\alpha$ is a scaling factor that converts distance to time‑steps.
  3. Training loop – The network is trained end‑to‑end using back‑propagation through time (BPTT). Custom autograd rules propagate gradients through the distance‑based delay function, allowing the coordinates to be updated jointly with the usual weight parameters.
  4. Sparsity schedule – During training, a sparsity mask is gradually applied, zeroing out the smallest‑magnitude weights. The mask is updated periodically, enabling the network to adapt to the reduced connectivity.
  5. Benchmarks – Experiments are run on two temporal classification tasks:
    • Yin‑Yang (a synthetic spatio‑temporal pattern‑classification task).
    • Spiking Heidelberg Digits (SHD) (speech‑like spike trains).

All experiments compare SpSNNs against baseline SNNs that learn an independent delay per synapse. A minimal code sketch of the coordinate‑based delay rule from steps 1–3 follows.
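The sketch below is a PyTorch‑style illustration, not the authors' released code: it shows only how the delay matrix is derived from learnable coordinates. Applying the delays to spike trains during BPTT, which is where the paper's custom autograd rules come in, is omitted, and the module name and layer sizes are assumptions.

```python
import torch
import torch.nn as nn


class SpatialDelays(nn.Module):
    """Derive a full synaptic delay matrix from learnable neuron coordinates."""

    def __init__(self, n_pre: int, n_post: int, dim: int = 2, alpha: float = 1.0):
        super().__init__()
        # One learnable coordinate vector p_i in R^d per neuron (d = 2 or 3).
        self.pre_pos = nn.Parameter(torch.randn(n_pre, dim))
        self.post_pos = nn.Parameter(torch.randn(n_post, dim))
        self.alpha = alpha  # scaling factor converting distance to time-steps

    def forward(self) -> torch.Tensor:
        # tau_ij = alpha * ||p_i - p_j||_2, shape (n_post, n_pre).
        diff = self.post_pos.unsqueeze(1) - self.pre_pos.unsqueeze(0)
        return self.alpha * diff.norm(dim=-1)


# Coordinates cost (n_pre + n_post) * dim parameters instead of one delay per
# synapse, and gradients reach the positions through the Euclidean norm.
delays = SpatialDelays(n_pre=700, n_post=128, dim=2)
tau = delays()            # (128, 700) differentiable delay matrix
tau.sum().backward()      # positions receive gradients via standard autograd
print(tau.shape, delays.pre_pos.grad.shape)
```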

Results & Findings

| Metric | Baseline SNN (unconstrained delays) | SpSNN (2‑D) | SpSNN (3‑D) |
| --- | --- | --- | --- |
| Parameter count | ~1.2 M | ~70 k (≈ 18× fewer) | ~90 k |
| Yin‑Yang accuracy | 96.3 % | 98.1 % | 97.9 % |
| SHD accuracy | 84.2 % | 86.5 % | 86.2 % |
| Sparsity tolerance | Degrades beyond 70 % pruning | Maintains accuracy up to 90 % pruning | Same |

  • Performance boost: Despite the massive reduction in parameters, SpSNNs consistently achieve higher classification accuracy on both benchmarks.
  • Dimensional sweet spot: 2‑D and 3‑D embeddings outperform higher‑dimensional delay vectors, indicating that a modest spatial structure provides enough expressive power while acting as a regularizer.
  • Robustness to pruning: When 90 % of synapses are removed, the sparsified SpSNN still matches the dense baseline, confirming that the spatial representation concentrates essential information (a sketch of the pruning schedule follows).
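The snippet below illustrates one plausible form of the sparsity schedule described in step 4 of the methodology: magnitude‑based masking with a gradually increasing sparsity target. The linear ramp, update interval, and layer sizes are assumptions for illustration, not the paper's exact hyper‑parameters.

```python
import torch


def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return a 0/1 mask that prunes the smallest-magnitude fraction of weights."""
    k = int(weight.numel() * sparsity)      # number of weights to zero out
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()


# Ramp sparsity from 0 % to 90 % and refresh the mask periodically, so the
# network can adapt to the shrinking connectivity during training.
weight = torch.randn(128, 700, requires_grad=True)
final_sparsity, total_steps, update_every = 0.90, 10_000, 500

for step in range(0, total_steps, update_every):
    current = final_sparsity * min(1.0, step / (0.8 * total_steps))
    mask = magnitude_mask(weight.detach(), current)
    with torch.no_grad():
        weight.mul_(mask)   # pruned weights stay at zero until the next mask update
```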

Practical Implications

  1. Neuromorphic hardware alignment – Since delays are now a function of physical distance, a chip can directly map neuron coordinates onto its layout, eliminating the need for per‑synapse delay storage and lookup tables. This can cut memory bandwidth and energy consumption dramatically (a back‑of‑envelope storage comparison follows this list).
  2. Scalable edge AI – Developers building low‑power sensors (e.g., event‑based cameras, audio spikes) can deploy SpSNNs with a tiny memory footprint, making real‑time temporal inference feasible on micro‑controllers or ASICs.
  3. Simplified model deployment – Training a single set of coordinates per neuron is far easier to serialize and version‑control than millions of delay parameters, easing CI/CD pipelines for spiking models.
  4. Transferable spatial priors – The learned geometry can be visualized and potentially reused across tasks (e.g., a “spatial embedding” learned on speech could seed a new model for gesture recognition).
  5. Compatibility with existing frameworks – The authors provide custom autograd rules that plug into PyTorch‑like environments, so developers can experiment without rewriting low‑level simulation code.
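As a rough illustration of point 1, the snippet below compares delay‑related storage for a single fully connected layer with hypothetical sizes (700 inputs, 128 outputs, 2‑D coordinates). Only delay parameters are counted; the savings reported in the paper also include the unchanged synaptic weights and depend on the full architecture.

```python
# Hypothetical layer sizes; only delay-related storage is counted here.
n_pre, n_post, dim = 700, 128, 2

per_synapse_delays = n_pre * n_post            # 89_600 values to store and look up
coordinate_params = (n_pre + n_post) * dim     # 1_656 coordinates shared by all synapses

print(per_synapse_delays, coordinate_params)
print(per_synapse_delays / coordinate_params)  # ~54x less delay storage for this layer
```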

Limitations & Future Work

  • Fixed scaling factor – The conversion from distance to delay ($\alpha$) is kept constant; adaptive scaling could further improve flexibility.
  • Assumption of Euclidean space – Real neuromorphic fabrics may have irregular routing constraints; exploring non‑Euclidean or graph‑based embeddings is an open direction.
  • Benchmark scope – Experiments focus on classification of relatively short spike trains; evaluating on longer, hierarchical temporal tasks (e.g., language modeling) would test scalability.
  • Hardware validation – While the paper argues for hardware friendliness, a concrete implementation on a neuromorphic chip (e.g., Loihi, BrainChip) is left for future work.

Bottom line: Spatial Spiking Neural Networks turn the “delay” problem into a geometry problem, slashing parameter counts, boosting accuracy, and paving the way for more energy‑efficient neuromorphic systems. For developers eyeing real‑time, low‑power temporal AI, SpSNNs offer a compelling, hardware‑aware alternative to traditional delay‑heavy SNNs.

Authors

  • Lennart P. L. Landsmeer
  • Amirreza Movahedin
  • Mario Negrello
  • Said Hamdioui
  • Christos Strydis

Paper Information

  • arXiv ID: 2512.10011v1
  • Categories: cs.NE, q-bio.NC
  • Published: December 10, 2025