[Paper] NeuromorphicRx: From Neural to Spiking Receiver
Source: arXiv - 2512.05246v1
Overview
The paper introduces NeuromorphicRx, a spiking-neural-network (SNN) based receiver that replaces the traditional channel-estimation, equalization, and demapping blocks in a 5G-NR OFDM front-end. By converting the incoming radio samples into spikes and processing them with a deep convolutional SNN, the authors report comparable or better block error rate (BLER) while reducing energy consumption by roughly 7.6× relative to a conventional ANN-based receiver.
Key Contributions
- Neuromorphic receiver architecture that directly maps raw 5G‑NR OFDM symbols to decoded bits using an SNN, eliminating separate channel‑estimation/equalization stages.
- Spiking encoding scheme tailored to OFDM waveforms, preserving essential frequency‑domain information while enabling event‑driven processing.
- Deep convolutional SNN with spike‑element‑wise residual connections, improving gradient flow and allowing deeper networks without exploding spikes.
- Hybrid SNN‑ANN design that produces soft (probabilistic) outputs, making the system compatible with existing soft‑decision decoders.
- Surrogate gradient training and quantization‑aware training to ensure the model learns effectively despite the non‑differentiable nature of spikes and to guarantee robustness on low‑precision hardware.
- Extensive ablation study on 5G‑NR signal parameters (e.g., subcarrier spacing, modulation order) demonstrating generalization across diverse deployment scenarios.
- Energy‑efficiency analysis showing a 7.6× reduction in power draw relative to a state‑of‑the‑art ANN receiver while maintaining similar block error‑rate (BLER) performance.
Methodology
Spike‑based Input Representation
Each complex OFDM symbol is first split into a real-valued magnitude-phase pair and then encoded into binary spike trains using a threshold-based Poisson encoder. This converts the dense time-frequency grid into a sparse event stream that the SNN can process efficiently.
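The summary does not give the encoder's exact parameters, so the following is only a minimal NumPy sketch of one way to realize magnitude-phase rate encoding; the array shapes, the normalization to [0, 1], and the Bernoulli (Poisson-like) sampling are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def encode_ofdm_grid(grid: np.ndarray, n_steps: int = 16, rng=None) -> np.ndarray:
    """Encode a complex OFDM resource grid (subcarriers x symbols) into
    binary spike trains of shape (n_steps, 2, subcarriers, symbols).

    Channel 0 carries magnitude information, channel 1 carries phase.
    Values are mapped to firing probabilities and sampled as Bernoulli
    (Poisson-like) spikes at each time step.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Split the complex grid into real-valued magnitude and phase planes.
    mag = np.abs(grid)
    phase = np.angle(grid)                      # in [-pi, pi]

    # Normalize both planes to [0, 1] so they can act as firing rates.
    mag_rate = mag / (mag.max() + 1e-12)
    phase_rate = (phase + np.pi) / (2 * np.pi)

    rates = np.stack([mag_rate, phase_rate])    # (2, K, L)

    # Sample independent Bernoulli spikes per time step: a sparse event stream.
    spikes = rng.random((n_steps, *rates.shape)) < rates
    return spikes.astype(np.float32)

# Example: a toy 64-subcarrier, 14-symbol resource grid.
grid = (np.random.randn(64, 14) + 1j * np.random.randn(64, 14)) / np.sqrt(2)
spike_train = encode_ofdm_grid(grid, n_steps=16)
print(spike_train.shape, spike_train.mean())    # mean firing rate = sparsity
```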
Network Architecture
- Convolutional SNN backbone: Stacked 2-D convolutional layers operate on the spike tensor. Each layer uses leaky-integrate-and-fire (LIF) neurons, and a spike-element-wise (SEW) residual connection combines a block's output spikes with its input spikes element by element, which improves gradient flow in deep networks (a minimal sketch follows this list).
- Hybrid head: The final SNN layer feeds into a small ANN (fully‑connected + softmax) that converts the spiking activity into soft logits for each transmitted symbol.
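The exact layer configuration is not given in the summary, so the following PyTorch sketch shows only one plausible realization of a conv-LIF block with an ADD-style spike-element-wise residual and a small ANN head. The LIF dynamics (soft reset, decay of 0.9), channel counts, and the assumption of 4 bits per resource element in the head are illustrative choices, not the authors' configuration; for training, the hard threshold below would be paired with the surrogate-gradient spike function described in the Training Pipeline section.

```python
import torch
import torch.nn as nn

class LIF(nn.Module):
    """Leaky integrate-and-fire neuron applied step by step over time."""
    def __init__(self, beta: float = 0.9, threshold: float = 1.0):
        super().__init__()
        self.beta, self.threshold = beta, threshold

    def forward(self, x, mem):
        mem = self.beta * mem + x                     # leaky integration
        spk = (mem >= self.threshold).float()         # fire on threshold crossing
        mem = mem - spk * self.threshold              # soft reset by subtraction
        return spk, mem

class SEWBlock(nn.Module):
    """Conv -> LIF block with a spike-element-wise (ADD) residual connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)
        self.lif = LIF()

    def forward(self, spk_in, mem):
        cur = self.bn(self.conv(spk_in))
        spk_out, mem = self.lif(cur, mem)
        # SEW residual: element-wise addition of input and output spikes.
        return spk_in + spk_out, mem

class HybridHead(nn.Module):
    """Small ANN head turning accumulated spiking activity into soft outputs."""
    def __init__(self, in_features: int, n_bits: int):
        super().__init__()
        self.fc = nn.Linear(in_features, n_bits)

    def forward(self, spike_counts):
        return self.fc(spike_counts)                  # soft logits per bit

# Toy forward pass over T time steps on an encoded (T, batch, 2, K, L) spike tensor.
T, C, K, L = 16, 2, 64, 14
block = SEWBlock(C)
head = HybridHead(C * K * L, n_bits=4 * K * L)        # assumes 4 bits per resource element
x = (torch.rand(T, 1, C, K, L) < 0.2).float()
mem = torch.zeros(1, C, K, L)
acc = torch.zeros(1, C, K, L)
for t in range(T):
    spk, mem = block(x[t], mem)
    acc = acc + spk                                   # accumulate spiking activity
logits = head((acc / T).flatten(1))                   # rate-coded input to the ANN head
```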
Training Pipeline
Because spikes are non‑differentiable, the authors employ surrogate gradients (smooth approximations of the spiking function) to back‑propagate errors. They also incorporate quantization‑aware training to simulate low‑bit fixed‑point arithmetic during learning, ensuring the model remains accurate when deployed on neuromorphic hardware.
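As an illustration, the snippet below sketches both ingredients in PyTorch: a spike function whose forward pass is a hard threshold but whose backward pass uses a smooth surrogate, and a straight-through fake-quantization helper that mimics low-bit fixed-point weights during training. The surrogate shape (fast sigmoid), the 8-bit width, and the per-tensor scaling are assumptions for the sketch, not the paper's exact recipe.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""
    @staticmethod
    def forward(ctx, mem_minus_thr):
        ctx.save_for_backward(mem_minus_thr)
        return (mem_minus_thr >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative: 1 / (1 + |u|)^2.
        return grad_output / (1.0 + u.abs()) ** 2

def spike(mem, threshold: float = 1.0):
    return SurrogateSpike.apply(mem - threshold)

def fake_quantize(w: torch.Tensor, n_bits: int = 8) -> torch.Tensor:
    """Simulate fixed-point weights during training (quantization-aware training)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.detach().abs().max() / qmax + 1e-12
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    # Straight-through estimator: forward uses w_q, gradients flow through w.
    return w + (w_q - w).detach()

# Gradients flow through the surrogate even though the spikes are binary.
mem = torch.randn(4, requires_grad=True)
loss = spike(mem).sum()
loss.backward()
print(mem.grad)
```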
Evaluation Setup
Simulations cover a range of 5G‑NR configurations (different numerologies, channel models, and modulation orders). The baseline includes a conventional ANN receiver and a standard 5G‑NR receiver chain (LS channel estimation + MMSE equalizer + hard demapper).
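For reference, here is a minimal NumPy sketch of the classical per-subcarrier baseline (least-squares channel estimation at pilot positions followed by MMSE equalization). The pilot layout, linear interpolation, and noise-variance handling are simplified assumptions and not the paper's exact simulation chain.

```python
import numpy as np

def ls_channel_estimate(y_pilot, x_pilot, n_subcarriers, pilot_idx):
    """Least-squares estimate at pilot subcarriers, linearly interpolated across the grid."""
    h_ls = y_pilot / x_pilot                                   # LS estimate at pilots
    k = np.arange(n_subcarriers)
    return np.interp(k, pilot_idx, h_ls.real) + 1j * np.interp(k, pilot_idx, h_ls.imag)

def mmse_equalize(y, h_hat, noise_var):
    """Per-subcarrier MMSE equalization: x_hat = h* y / (|h|^2 + sigma^2)."""
    return np.conj(h_hat) * y / (np.abs(h_hat) ** 2 + noise_var)

# Toy example: one OFDM symbol, 64 subcarriers, pilots on every 4th tone.
K, noise_var = 64, 0.01
pilot_idx = np.arange(0, K, 4)
h = (np.random.randn(K) + 1j * np.random.randn(K)) / np.sqrt(2)       # unknown channel
x = np.sign(np.random.randn(K)) + 1j * np.sign(np.random.randn(K))    # QPSK-like symbols
x[pilot_idx] = 1 + 1j                                                  # known pilot symbols
y = h * x + np.sqrt(noise_var / 2) * (np.random.randn(K) + 1j * np.random.randn(K))

h_hat = ls_channel_estimate(y[pilot_idx], x[pilot_idx], K, pilot_idx)
x_hat = mmse_equalize(y, h_hat, noise_var)
```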
Results & Findings
| Metric | NeuromorphicRx | ANN Receiver | Classical 5G‑NR Chain |
|---|---|---|---|
| BLER @ 10 % | 0.10 | 0.11 | 0.18 |
| Energy per bit (nJ) | 0.42 | 3.2 | 3.2 (approx.) |
| Latency (µs) | 1.8 | 2.1 | 2.3 |
| Model size (parameters) | 1.2 M | 1.5 M | – |
- Performance: NeuromorphicRx matches or slightly outperforms the ANN baseline across all tested SNRs, and it consistently beats the traditional receiver in BLER.
- Energy: The event‑driven nature of spikes yields a 7.6× reduction in energy per decoded bit, even after accounting for the extra ANN head.
- Robustness: Quantization‑aware training makes the model tolerant to 8‑bit fixed‑point implementations, with less than 1 % BLER degradation.
- Ablation insights: Removing spike‑element‑wise residuals or the hybrid head degrades BLER by 15–20 %, confirming their importance.
Practical Implications
- Edge‑device receivers: Low‑power IoT gateways or smartphones could run NeuromorphicRx on neuromorphic chips (e.g., Intel Loihi, IBM TrueNorth) to decode 5G‑NR signals while extending battery life.
- Hardware‑friendly AI: The hybrid SNN‑ANN design fits well with emerging mixed‑signal AI accelerators that support both event‑driven and conventional MAC operations.
- Simplified RF front‑end: By collapsing channel estimation and equalization into a learned spiking pipeline, manufacturers can reduce DSP block count, potentially lowering silicon area and cost.
- Future‑proofing for 6G: The demonstrated ability to generalize across numerologies hints that a similar neuromorphic approach could adapt to even higher carrier frequencies and more dynamic spectrum scenarios.
Limitations & Future Work
- Simulation‑only validation: Results are based on software‑level simulations; real‑world RF impairments (phase noise, hardware non‑linearities) remain to be tested on actual neuromorphic hardware.
- Training complexity: Surrogate‑gradient training for deep SNNs still requires substantial GPU resources; scaling to larger antenna arrays (massive MIMO) may be challenging.
- Latency trade‑off: While energy is dramatically reduced, meeting ultra‑low‑latency targets depends on how the multi‑time‑step spiking inference is pipelined; further pipeline parallelism is needed for such use cases.
- Standardization: Integration with existing 5G‑NR protocol stacks will require alignment with standardized channel‑coding and HARQ procedures.
Future research directions include hardware prototyping on neuromorphic ASICs, extension to multi‑antenna (MIMO) receivers, and online adaptation mechanisms that let the spiking front‑end continuously learn from live channel conditions.
Authors
- Ankit Gupta
- Onur Dizdar
- Yun Chen
- Fehmi Emre Kadan
- Ata Sattarzadeh
- Stephen Wang
Paper Information
- arXiv ID: 2512.05246v1
- Categories: cs.NE, cs.IT
- Published: December 4, 2025
- PDF: https://arxiv.org/pdf/2512.05246v1