[Paper] Privacy in Federated Learning with Spiking Neural Networks

Published: November 26, 2025 at 03:55 AM EST
3 min read
Source: arXiv - 2511.21181v1

Overview

This paper investigates whether spiking neural networks (SNNs)—the low‑power, event‑driven models popular for edge AI—offer any built‑in privacy benefits when used with federated learning (FL). By adapting state‑of‑the‑art gradient‑inversion attacks to the spike domain, the authors show that gradients from SNNs leak far less usable information than those from conventional artificial neural networks (ANNs).

Key Contributions

  • First systematic benchmark of gradient‑inversion attacks on SNNs across image, audio, and time‑series datasets.
  • Adaptation of several attack pipelines (e.g., Deep Leakage from Gradients, iDLG) to work with surrogate‑gradient training used in SNNs.
  • Empirical evidence that SNN gradients produce noisy, temporally inconsistent reconstructions that fail to recover meaningful spatial or temporal structure.
  • Analysis linking the event‑driven dynamics and surrogate‑gradient training of SNNs to reduced gradient informativeness.
  • Open‑source code and reproducible experiment suite for the community.

Methodology

  1. Model & Training Setup – The authors train SNNs using common surrogate‑gradient methods (e.g., Back‑Propagation Through Time with a piecewise‑linear surrogate; see the LIF sketch after this list). Baselines include equivalent ANN architectures trained with standard back‑propagation.
  2. Attack Adaptation – Gradient‑inversion attacks that normally operate on continuous ANN gradients are re‑implemented to handle the discrete spike tensors and surrogate gradients of SNNs (a DLG‑style attack sketch follows the list).
  3. Datasets – Experiments span three domains: (a) static images (MNIST, CIFAR‑10), (b) speech commands (Google Speech Commands), and (c) sensor time‑series (UCI HAR).
  4. Evaluation Metrics – Reconstruction quality is measured with PSNR/SSIM for images, waveform similarity for audio, and the classification accuracy of a downstream “re‑identification” model for time‑series data.
  5. Comparison – For each dataset, the same federated learning round is simulated for both ANN and SNN participants, and the attacks are run on the shared gradients.
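
The sketch below illustrates the kind of surrogate‑gradient spiking layer described in step 1: a leaky integrate‑and‑fire (LIF) neuron with a Heaviside spike in the forward pass and a piecewise‑linear (triangular) surrogate in the backward pass. It assumes PyTorch; the layer sizes, leak factor, and threshold are illustrative defaults, not values taken from the paper.

```python
import torch


class PiecewiseLinearSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, piecewise-linear surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, membrane, threshold=1.0):
        ctx.save_for_backward(membrane)
        ctx.threshold = threshold
        return (membrane >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        # Triangular surrogate: nonzero only near the threshold, so the backward
        # signal is only loosely tied to the actual spike events.
        surrogate = torch.clamp(1.0 - torch.abs(membrane - ctx.threshold), min=0.0)
        return grad_output * surrogate, None


class LIFLayer(torch.nn.Module):
    """Leaky integrate-and-fire layer unrolled over time (BPTT-style training)."""

    def __init__(self, in_features, out_features, beta=0.9, threshold=1.0):
        super().__init__()
        self.fc = torch.nn.Linear(in_features, out_features)
        self.beta = beta              # membrane leak factor (illustrative value)
        self.threshold = threshold

    def forward(self, x_seq):
        # x_seq: (time_steps, batch, in_features) input spikes or currents
        mem = torch.zeros(x_seq.shape[1], self.fc.out_features, device=x_seq.device)
        spikes = []
        for x_t in x_seq:
            mem = self.beta * mem + self.fc(x_t)
            s_t = PiecewiseLinearSpike.apply(mem, self.threshold)
            mem = mem - s_t * self.threshold   # soft reset after a spike
            spikes.append(s_t)
        return torch.stack(spikes)             # (time_steps, batch, out_features)
```

Because back‑propagation flows through the surrogate rather than the non‑differentiable spike function itself, the gradients a client shares in FL are only an approximation of the true input sensitivity, which is the property the authors link to reduced leakage.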

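For step 2, the sketch below shows the general shape of a DLG‑style gradient‑inversion attack: optimize a dummy input and soft label until their gradients match the gradients shared in the FL round. It is a minimal sketch assuming a PyTorch model and cross‑entropy loss; `gradient_inversion_attack`, `model`, `target_gradients`, `input_shape`, and the optimizer settings are placeholders, and for an SNN the input shape would include the time dimension so the attack runs through the same surrogate‑gradient graph.

```python
import torch
import torch.nn.functional as F


def gradient_inversion_attack(model, target_gradients, input_shape, num_classes,
                              steps=300, lr=0.1):
    """DLG-style attack: optimize a dummy input/label so its gradients match the shared ones."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)   # soft label, as in DLG
    optimizer = torch.optim.Adam([dummy_x, dummy_y], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(dummy_x)
        loss = torch.sum(-F.softmax(dummy_y, dim=-1) * F.log_softmax(logits, dim=-1))
        dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Match the dummy gradients to the gradients shared in the FL round.
        grad_match = sum(((dg - tg) ** 2).sum()
                         for dg, tg in zip(dummy_grads, target_gradients))
        grad_match.backward()
        optimizer.step()

    return dummy_x.detach(), dummy_y.detach()
```

For the comparison in step 5, the same routine can be pointed at gradients produced by an ANN client and an SNN client in an identical simulated round; image reconstructions are then scored with PSNR/SSIM (e.g., PSNR = 10·log10(MAX²/MSE)).
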
Results & Findings

  • Image domain: ANN gradients allow near‑perfect visual recovery (average SSIM ≈ 0.85), while SNN gradients yield blurry, fragmented reconstructions (SSIM ≈ 0.15) that lack recognizable objects.
  • Audio domain: Reconstructed waveforms from ANN gradients preserve phoneme structure; SNN reconstructions are dominated by noise, with <10 % intelligibility.
  • Time‑series domain: Attacks on ANN gradients infer activity labels with >70 % accuracy; SNN gradients drop this to near‑random (~20 %).
  • Why it works: Surrogate gradients are only loosely correlated with the underlying spike events, and the temporal sparsity of spikes introduces additional randomness, making the gradient signal less informative.

Practical Implications

  • Edge‑AI deployments: Engineers can consider SNNs not only for energy efficiency but also as a privacy‑enhancing layer when using FL, reducing the risk of data leakage without extra cryptographic overhead.
  • Federated learning frameworks: Existing FL toolkits (e.g., TensorFlow Federated, PySyft) could expose a “spiking mode” that automatically switches to surrogate‑gradient training, offering a low‑cost privacy boost.
  • Regulatory compliance: For applications subject to GDPR or HIPAA, the inherent privacy advantage of SNNs may simplify compliance audits for on‑device learning.
  • Design trade‑offs: Developers must balance the modest accuracy gap that SNNs sometimes exhibit against the privacy gain; the paper shows that for many edge tasks the gap is negligible.

Limitations & Future Work

  • The study focuses on surrogate‑gradient training; alternative SNN training schemes (e.g., ANN‑to‑SNN conversion) were not evaluated for privacy.
  • Experiments are limited to single‑round gradient sharing; multi‑round FL dynamics and aggregation strategies could affect leakage.
  • Only standard benchmark datasets were used; real‑world proprietary data (e.g., medical imaging) may exhibit different leakage patterns.
  • Future research directions include: formalizing privacy guarantees for SNNs, combining SNNs with differential privacy or secure aggregation, and exploring hardware‑level attacks on neuromorphic chips.

Authors

  • Dogukan Aksu
  • Jesus Martinez del Rincon
  • Ihsen Alouani

Paper Information

  • arXiv ID: 2511.21181v1
  • Categories: cs.LG, cs.AI, cs.DC
  • Published: November 26, 2025
