[Paper] A Spiking Neural Network Implementation of Gaussian Belief Propagation

Published: December 11, 2025 at 08:43 AM EST
4 min read
Source: arXiv - 2512.10638v1

Overview

This paper shows how a network of spiking (leaky‑integrate‑and‑fire) neurons can perform Gaussian belief propagation—the core message‑passing algorithm behind many Bayesian inference tasks. By translating the three basic linear operations (equality/branching, addition, multiplication) into spike‑based encodings, the authors build a fully functional spiking neural network (SNN) that matches the classic sum‑product algorithm, opening the door to neuromorphic hardware that can run probabilistic models in real time.
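
For reference, the three operations correspond to closed‑form Gaussian update rules in the standard sum‑product algorithm. The sketch below shows those textbook rules in plain Python for the scalar case; it is a reference point for what the spiking primitives must reproduce, not the paper's spike‑level implementation, and the function names and scalar setting are illustrative choices.

```python
def gaussian_product(m1, v1, m2, v2):
    """Product of two Gaussian densities N(m1, v1) * N(m2, v2), which is also
    the message-combination rule at an equality/branching node:
    precisions add, and the mean is the precision-weighted average."""
    w1, w2 = 1.0 / v1, 1.0 / v2
    w = w1 + w2
    return (w1 * m1 + w2 * m2) / w, 1.0 / w

def gaussian_sum(m1, v1, m2, v2):
    """Message through an addition node Z = X + Y: means add and variances add."""
    return m1 + m2, v1 + v2

# Example: fusing two noisy estimates of the same quantity
print(gaussian_product(1.0, 0.5, 2.0, 1.0))  # -> (1.333..., 0.333...)
```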

Key Contributions

  • A spike‑based encoding/decoding scheme for Gaussian messages (means and variances) that preserves the exact arithmetic needed for belief propagation.
  • SNN primitives that implement equality (branching), addition, and multiplication using only leaky‑integrate‑and‑fire neurons and synaptic weights.
  • End‑to‑end validation against the standard sum‑product algorithm, demonstrating negligible error across a range of factor‑graph topologies.
  • Demonstrations on two canonical Bayesian tasks: (1) Kalman filtering for dynamic state estimation and (2) Bayesian linear regression for static parameter learning.
  • A blueprint for mapping probabilistic graphical models onto neuromorphic platforms (e.g., Loihi, SpiNNaker), highlighting energy‑efficient inference.

Methodology

  1. Factor‑graph representation – The target probabilistic model is expressed as a factor graph where each factor corresponds to a linear Gaussian constraint (equality, sum, product).
  2. Message representation – A Gaussian message \(\mathcal{N}(\mu, \sigma^2)\) is encoded as a pair of spike trains: one carrying the mean \(\mu\) (via firing rate) and the other the precision \(\lambda = 1/\sigma^2\) (via inter‑spike‑interval modulation); see the encoding sketch after this list.
  3. Neural primitives
    • Equality node: a branching circuit that copies incoming spike trains to multiple outputs while preserving rate/precision.
    • Addition node: a set of excitatory/inhibitory synapses that sum incoming rates and combine precisions according to Gaussian addition rules.
    • Multiplication node: a more involved microcircuit that implements the product of two Gaussians by adjusting both rate and variance through recurrent inhibition.
  4. Simulation – The authors built the SNN in a custom Python/NumPy spiking simulator, using standard LIF dynamics (membrane time constant, threshold, reset). Each primitive runs in discrete time steps, and messages are decoded after a short integration window.
  5. Benchmarking – The SNN’s output messages are compared to those from a textbook sum‑product implementation on the same factor graph, measuring mean‑square error of means and relative error of variances.
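
As a rough illustration of step 2 above, the following sketch rate‑codes a scalar value into a Poisson spike train and decodes it by averaging over an integration window. It is a minimal stand‑in rather than the paper's scheme: the gain, window length, and non‑negative‑value assumption are mine, and the companion precision channel (inter‑spike‑interval modulation) is omitted.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def encode_rate(value, gain=200.0, dt=1e-3, steps=500):
    """Toy rate code: a Poisson spike train whose firing rate is
    proportional to the encoded (non-negative) value."""
    rate = gain * value                   # spikes per second
    return rng.random(steps) < rate * dt  # boolean spike train

def decode_rate(spikes, gain=200.0, dt=1e-3):
    """Decode by averaging spike counts over the integration window."""
    return spikes.mean() / (gain * dt)

spikes = encode_rate(0.8)
print(decode_rate(spikes))  # close to 0.8, up to Poisson sampling noise
```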
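
Step 4 relies on standard leaky‑integrate‑and‑fire dynamics. Below is a minimal discrete‑time LIF loop of the kind most Python/NumPy spiking simulators implement; the membrane time constant, threshold, and reset value are placeholder numbers, not taken from the paper.

```python
import numpy as np

def lif(input_current, dt=1e-3, tau_m=20e-3, v_th=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron: leaky integration of the input
    current, a spike when the membrane potential crosses threshold, then a
    reset (no refractory period in this sketch)."""
    v = v_reset
    spikes = np.zeros(len(input_current), dtype=bool)
    for t, i_t in enumerate(input_current):
        v += (dt / tau_m) * (i_t - v)  # leak toward the input drive
        if v >= v_th:
            spikes[t] = True
            v = v_reset
    return spikes

# A constant suprathreshold drive yields a regular spike train
print(lif(np.full(200, 1.5)).sum())
```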
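
The error metrics in step 5 are simple to compute once the SNN messages have been decoded; a small helper sketch (the array names are my own):

```python
import numpy as np

def message_errors(mu_snn, var_snn, mu_ref, var_ref):
    """Mean-squared error of the decoded means and mean relative error of the
    decoded variances, both measured against the sum-product reference."""
    mu_snn, var_snn = np.asarray(mu_snn), np.asarray(var_snn)
    mu_ref, var_ref = np.asarray(mu_ref), np.asarray(var_ref)
    mse_mean = np.mean((mu_snn - mu_ref) ** 2)
    rel_var = np.mean(np.abs(var_snn - var_ref) / var_ref)
    return mse_mean, rel_var
```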

Results & Findings

| Task | Mean error | Variance error | Observation |
| --- | --- | --- | --- |
| Static factor graph (10 nodes) | < 0.5 % | < 1 % | Near‑exact recovery of posterior means/variances |
| Kalman filtering (1‑D motion) | < 0.8 % per step | < 1.2 % | Real‑time tracking comparable to classic Kalman filter |
| Bayesian linear regression (100 pts) | < 0.3 % | < 0.7 % | Posterior over weights matches analytical solution |

The SNN converges within a few hundred simulation steps per message, which translates to sub‑millisecond latency on modern neuromorphic chips. Energy consumption estimates (based on Loihi’s power model) suggest 10–20× lower energy per inference compared to a CPU‑based floating‑point implementation.
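
To connect the primitives to the Kalman‑filtering demo: for a scalar random‑walk model (my choice of illustration; the paper's 1‑D motion setup may differ), one filtering step is just a chain of the Gaussian rules sketched earlier, with the prediction acting as an addition node and the measurement update as a product of Gaussians.

```python
def kalman_step(m, v, y, q=0.1, r=0.5):
    """One step of a scalar random-walk Kalman filter written as Gaussian BP
    operations. q (process noise variance) and r (measurement noise variance)
    are illustrative values, not the paper's."""
    # Prediction through an addition node: x_t = x_{t-1} + w_t, so variances add
    m_pred, v_pred = m, v + q
    # Measurement update: product of the predicted message with the likelihood N(y, r)
    w = 1.0 / v_pred + 1.0 / r
    m_post = (m_pred / v_pred + y / r) / w
    return m_post, 1.0 / w

# Track a noisy, roughly constant signal
m, v = 0.0, 1.0
for y in [0.9, 1.1, 1.0, 0.95]:
    m, v = kalman_step(m, v, y)
print(m, v)
```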

Practical Implications

  • Neuromorphic inference engines – Developers can now embed Bayesian reasoning directly into edge devices (e.g., IoT sensors, autonomous drones) without offloading to cloud servers.
  • Robust sensor fusion – The Kalman‑filter demo shows that spiking hardware can fuse noisy measurements in real time, useful for robotics and AR/VR pipelines where power budgets are tight.
  • Probabilistic programming on hardware – The primitive library (equality, addition, multiplication) can be composed to compile higher‑level probabilistic programs (e.g., Pyro, Edward) into SNN graphs, enabling a new class of “probabilistic neuromorphic compilers.”
  • Explainable AI – Because the underlying computation mirrors classic Bayesian updates, the resulting models retain interpretability (posterior means/uncertainties) while benefiting from the parallelism of spiking networks.

Limitations & Future Work

  • Gaussian restriction – The current implementation only handles linear Gaussian factors; extending to non‑Gaussian or discrete variables will require richer spike encodings or hybrid SNN‑digital schemes.
  • Scalability – While the primitives work for modest graph sizes, the number of neurons grows linearly with the number of messages, which could become a bottleneck on limited‑size neuromorphic chips.
  • Hardware validation – Experiments were performed in software simulators; real‑world deployment on Loihi, SpiNNaker, or emerging memristive SNN platforms remains to be demonstrated.
  • Learning of parameters – The paper assumes known factor parameters (means, variances). Future work could integrate online learning rules (e.g., STDP‑based updates) to estimate these parameters on the fly.

Bottom line: By translating Gaussian belief propagation into spike‑based operations, this work bridges a gap between probabilistic AI and neuromorphic engineering, offering a concrete pathway for developers to run energy‑efficient Bayesian inference on next‑generation hardware.

Authors

  • Sepideh Adamiat
  • Wouter M. Kouw
  • Bert de Vries

Paper Information

  • arXiv ID: 2512.10638v1
  • Categories: cs.NE
  • Published: December 11, 2025
