[Paper] DendroNN: Dendrocentric Neural Networks for Energy-Efficient Classification of Event-Based Data

Published: March 10, 2026 at 02:59 AM EDT
4 min read
Source: arXiv


Overview

The paper introduces DendroNN, a new class of spiking neural networks that mimic the way biological dendrites detect specific spike sequences. By turning dendritic sequence detection into a trainable, event‑driven architecture, the authors achieve high‑accuracy classification of event‑based data while dramatically cutting energy consumption—making it a promising candidate for low‑power neuromorphic hardware.

Key Contributions

  • Dendrocentric network design: A novel spiking architecture that treats dendritic branches as sequence detectors, turning temporal spike patterns into discriminative features.
  • Gradient‑free rewiring training: A biologically inspired “rewiring” phase that learns which spike sequences to keep or discard, enabling training without back‑propagation through non‑differentiable spikes.
  • Dynamic/static sparsity exploitation: The network naturally prunes unused dendritic branches, yielding both static (structural) and dynamic (activation‑time) sparsity.
  • Asynchronous digital hardware prototype: Introduces a “time‑wheel” event‑driven processor that eliminates per‑step global updates, a common bottleneck in recurrent or delay‑based SNNs.
  • Energy‑efficiency results: Demonstrates up to 4× lower energy per inference compared to state‑of‑the‑art neuromorphic platforms on audio event classification, with comparable accuracy.
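The sequence-detection idea behind the first contribution can be sketched in a few lines of Python. This is a minimal model, not the paper's implementation: the class name, the (channel, timestamp) event representation, and the reset policy (ignore out-of-order spikes, reset only when the time window expires) are all my assumptions.

```python
class DendriticBranch:
    """Fires when its input channels spike in a fixed temporal order
    within a bounded time window (a spatiotemporal 'motif')."""

    def __init__(self, sequence, window):
        self.sequence = sequence      # expected order, e.g. ("A", "B", "C")
        self.window = window          # max time span for the full motif
        self._pos = 0                 # index of the next expected channel
        self._start = None            # timestamp of the motif's first spike

    def observe(self, channel, t):
        """Feed one event; return True when the full sequence has matched."""
        if self._start is not None and t - self._start > self.window:
            self._pos, self._start = 0, None        # window expired: reset
        if channel == self.sequence[self._pos]:
            if self._pos == 0:
                self._start = t
            self._pos += 1
            if self._pos == len(self.sequence):
                self._pos, self._start = 0, None
                return True                          # motif detected: fire
        return False
```

A branch configured for A → B → C within 10 time units fires only on the third event of an in-order triple:

```python
branch = DendriticBranch(("A", "B", "C"), window=10.0)
events = [("A", 1.0), ("B", 3.0), ("C", 5.0)]
fired = [branch.observe(ch, t) for ch, t in events]  # → [False, False, True]
```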

Methodology

  1. Dendritic Sequence Detection: Each dendritic branch monitors incoming spikes and fires only when a predefined temporal order (e.g., spike A → spike B → spike C within a time window) occurs. This creates a set of spatiotemporal “motifs” that act as high‑level features.
  2. Rewiring Phase:
    • Memorization: During an unsupervised exposure to training data, the network records frequently observed spike sequences.
    • Pruning: Sequences that never contribute to correct class decisions are removed, reducing the number of active dendrites.
    • This process is akin to synaptic growth and elimination in biology and sidesteps the need for gradient descent on discrete spike events.
  3. Network Architecture: A shallow feed‑forward SNN where the first layer consists of dendritic detectors, followed by a simple read‑out layer that aggregates the binary outputs into class scores.
  4. Hardware Implementation: The authors design an asynchronous digital accelerator that uses a time‑wheel—a rotating pointer that timestamps incoming events—so that each dendrite updates only when its specific sequence pattern is matched, avoiding global clock cycles.
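The rewiring phase (step 2 above) can be sketched as a count-then-filter pass. The function names, the fixed motif length, and the `min_count` threshold are illustrative assumptions; the paper's actual memorization and pruning rules may differ.

```python
from collections import Counter

def memorize(event_streams, motif_length=3):
    """Unsupervised exposure: count every length-k channel subsequence
    observed across the training event streams."""
    counts = Counter()
    for stream in event_streams:                  # stream: list of channel ids
        for i in range(len(stream) - motif_length + 1):
            counts[tuple(stream[i:i + motif_length])] += 1
    return counts

def prune(counts, useful_motifs, min_count=2):
    """Keep only motifs that are both frequently observed and linked to
    correct class decisions; everything else is discarded (structural sparsity)."""
    return {m for m, c in counts.items()
            if c >= min_count and m in useful_motifs}
```

Rare motifs and motifs that never supported a correct decision are dropped, which is how the network ends up with the pruned-dendrite (static) sparsity reported in the results.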
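A software model of the time-wheel idea in step 4 might look like the following. The slot-based scheduler and its API are my assumptions about how such a wheel typically behaves (as in classic timing-wheel timer designs), not the paper's hardware design: events are hashed into slots by arrival time, and work happens only when the rotating pointer reaches a slot that actually holds events, so idle dendrites cost nothing.

```python
class TimeWheel:
    """Minimal software model of a time-wheel scheduler: callbacks are
    placed into slots by due time, and a rotating pointer drains one
    slot per tick, so updates run only when something is actually due."""

    def __init__(self, num_slots):
        self.slots = [[] for _ in range(num_slots)]
        self.now = 0                  # current pointer position (in ticks)

    def schedule(self, delay, callback):
        """Place a callback `delay` ticks in the future (delay < num_slots)."""
        assert 0 < delay < len(self.slots)
        self.slots[(self.now + delay) % len(self.slots)].append(callback)

    def tick(self):
        """Advance the pointer one slot and run everything due there."""
        self.now += 1
        idx = self.now % len(self.slots)
        due, self.slots[idx] = self.slots[idx], []
        for cb in due:
            cb()
```

For example, a dendrite update scheduled three ticks out runs exactly once, when the pointer reaches its slot:

```python
wheel, fired = TimeWheel(8), []
wheel.schedule(3, lambda: fired.append("dendrite-update"))
for _ in range(3):
    wheel.tick()
# fired now holds the one scheduled update
```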

Results & Findings

  • Benchmark Datasets: Tested on several event‑based time‑series benchmarks (e.g., N‑MNIST, DVS‑Gesture, audio spike‑encoded speech).
  • Accuracy: Achieved classification scores within 1–3 % of the best recurrent or delay‑based SNNs, despite using a shallower, feed‑forward topology.
  • Energy Consumption: On an audio classification task, the DendroNN hardware consumed ~0.25 nJ per inference, roughly 4× less than leading neuromorphic chips (e.g., Loihi, TrueNorth) while delivering similar accuracy.
  • Sparsity Metrics: Static sparsity (pruned dendrites) reached ~70 % reduction in parameters; dynamic sparsity (event‑driven updates) yielded ~85 % fewer clock cycles per inference.
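A quick arithmetic check of the reported figures; the implied baseline energy is my inference from the "4× lower" claim, not a number stated in the paper.

```python
# Values taken from the reported results above.
energy_dendronn_nj = 0.25              # ~0.25 nJ per inference (reported)
energy_ratio = 4                       # "roughly 4x less" than leading chips
implied_baseline_nj = energy_dendronn_nj * energy_ratio   # ~1.0 nJ (inferred)

param_reduction = 0.70                 # static sparsity: pruned dendrites
cycle_reduction = 0.85                 # dynamic sparsity: event-driven updates
params_remaining = 1 - param_reduction # ~30 % of parameters survive pruning
cycles_remaining = 1 - cycle_reduction # ~15 % of clock cycles per inference
```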

Practical Implications

  • Edge AI Devices: DendroNN’s event‑driven nature makes it ideal for battery‑powered sensors (audio wake‑words, event‑based cameras) where every microjoule counts.
  • Neuromorphic Accelerators: The time‑wheel architecture can be integrated into existing digital ASIC flows, offering a drop‑in replacement for recurrent SNN blocks without redesigning the whole pipeline.
  • Low‑Latency Processing: Because updates occur only on relevant spike patterns, inference latency scales with the actual information content, not with a fixed time step—beneficial for real‑time detection (e.g., gesture recognition).
  • Simplified Training Pipelines: The rewiring approach removes the need for surrogate gradient tricks, allowing developers to train models with standard event‑stream data pipelines and simple rule‑based pruning scripts.

Limitations & Future Work

  • Sequence Length Constraints: The current dendritic detectors are limited to relatively short spike patterns; longer temporal dependencies may still require recurrence or external memory.
  • Hardware Prototyping Scope: The hardware evaluation is based on a digital ASIC simulation; silicon validation and scaling to larger networks remain open.
  • Generalization to Vision: While audio and simple event‑based datasets are covered, applying DendroNN to high‑resolution event‑camera streams may demand more sophisticated dendritic encoding schemes.
  • Training Overhead: The rewiring phase can be computationally intensive for very large datasets, suggesting a need for more efficient online pruning algorithms.

Overall, DendroNN opens a fresh pathway for energy‑efficient spatiotemporal AI, bridging neuroscience insights with practical neuromorphic engineering.

Authors

  • Jann Krausse
  • Zhe Su
  • Kyrus Mama
  • Maryada
  • Klaus Knobloch
  • Giacomo Indiveri
  • Jürgen Becker

Paper Information

  • arXiv ID: 2603.09274v1
  • Categories: cs.LG, cs.AI, cs.AR, cs.ET, cs.NE
  • Published: March 10, 2026