[Paper] Neuromorphic Parameter Estimation for Power Converter Health Monitoring Using Spiking Neural Networks

Published: April 17, 2026 at 01:34 AM EDT
4 min read
Source: arXiv - 2604.15714v1

Overview

This paper presents a neuromorphic approach to health monitoring of power converters that can run continuously on sub‑milliwatt edge devices. By combining a spiking neural network (SNN) with a differentiable physics model, the authors achieve accurate estimation of passive component values while cutting inference energy by two orders of magnitude compared with conventional GPU‑based neural networks.

Key Contributions

  • Hybrid SNN‑Physics Training: Introduces a three‑layer leaky‑integrate‑and‑fire (LIF) SNN that learns to infer component parameters, while a separate differentiable ODE solver enforces the underlying converter physics during training.
  • Energy‑Efficient Inference: Demonstrates ~270× lower energy consumption on neuromorphic hardware (Intel Loihi 2 / BrainChip Akida) versus a standard feed‑forward network.
  • Improved Parameter Accuracy: Reduces lumped resistance estimation error from 25.8 % to 10.2 %, bringing it inside the ±10 % manufacturing tolerance of real components.
  • Event‑Driven Fault Detection: Leverages persistent membrane states to detect abrupt faults as a 5.5 pp jump in spike rate, enabling on‑chip degradation tracking.
  • High Spike Sparsity: Achieves 93 % spike sparsity, meaning most neurons stay silent most of the time—crucial for ultra‑low‑power operation.
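The LIF dynamics and sparsity figure above can be illustrated with a short discrete-time sketch. This is a generic NumPy model, not the authors' implementation; the time constant, threshold, and input statistics are assumed values chosen for illustration.

```python
import numpy as np

def lif_layer(inputs, tau=20.0, v_th=1.0, dt=1.0):
    """Discrete-time leaky integrate-and-fire layer.

    inputs: array of shape (T, N) -- input current per time step per neuron.
    Returns a (T, N) binary spike train. The membrane potential decays with
    time constant tau, integrates its input, and resets to 0 after a spike.
    """
    decay = np.exp(-dt / tau)
    v = np.zeros(inputs.shape[1])
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        v = decay * v + x            # leak + integrate
        fired = v >= v_th            # threshold crossing emits a spike
        spikes[t] = fired
        v = np.where(fired, 0.0, v)  # hard reset of spiking neurons
    return spikes

# Spike sparsity = fraction of silent neuron-time-steps.
rng = np.random.default_rng(0)
s = lif_layer(rng.normal(0.0, 0.3, size=(200, 64)))
sparsity = 1.0 - s.mean()
```

Because most membrane potentials stay below threshold, the spike train is mostly zeros, which is exactly what event-driven hardware exploits to save energy.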

Methodology

  1. Data Generation: Simulated a synchronous buck converter under normal operation and under electromagnetic‑interference (EMI) disturbances. The ground‑truth passive component values (inductance, capacitance, series resistance) were known.
  2. Spiking Network Architecture: A shallow SNN (input → hidden → output) of LIF neurons processes the time‑series voltage/current measurements. The network’s membrane potentials evolve over discrete time steps, producing sparse spike trains.
  3. Physics‑Consistent Loss: Instead of back‑propagating through the spiking dynamics (which is costly), the authors decouple the physics loss. They feed the SNN’s output parameters into a differentiable ODE solver that simulates the converter’s behavior; the discrepancy between simulated and measured signals forms the physics loss.
  4. Training Loop: The total loss = (a) classification/regression loss on the parameters + (b) physics loss from the ODE solver. Gradients flow through the ODE solver to update the SNN weights, while the spiking loop remains unrolled only for forward inference.
  5. Hardware Mapping: The trained SNN was compiled for Intel Loihi 2 and BrainChip Akida, exploiting their event‑driven compute model and on‑chip learning primitives.
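The decoupled physics loss in steps 3 and 4 can be sketched with a minimal forward-Euler simulation of an averaged buck converter. The state equations, all component values, and the plain-NumPy integrator below are illustrative assumptions; the paper uses a differentiable ODE solver (which an autodiff framework would provide so gradients can reach the SNN weights), whereas this sketch only shows how the loss itself is formed.

```python
import numpy as np

def simulate_buck(L, C, R_s, v_in=12.0, duty=0.5, r_load=10.0,
                  dt=1e-6, steps=2000):
    """Forward-Euler simulation of an averaged synchronous buck converter.

    State: inductor current i_L and output voltage v_o, with
        L di/dt = duty * v_in - v_o - R_s * i_L
        C dv/dt = i_L - v_o / r_load
    Returns the simulated output-voltage waveform.
    """
    i_l, v_o = 0.0, 0.0
    v_trace = np.empty(steps)
    for k in range(steps):
        di = (duty * v_in - v_o - R_s * i_l) / L
        dv = (i_l - v_o / r_load) / C
        i_l += dt * di
        v_o += dt * dv
        v_trace[k] = v_o
    return v_trace

# Physics loss: discrepancy between simulated and "measured" output voltage.
measured = simulate_buck(L=47e-6, C=100e-6, R_s=0.05)   # stand-in for sensor data
estimated = simulate_buck(L=47e-6, C=100e-6, R_s=0.08)  # SNN's current guess
physics_loss = np.mean((estimated - measured) ** 2)
```

In the full training loop this term is added to the parameter regression loss, so a guess with the wrong series resistance (as above) is penalized even when no ground-truth label is available.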

Results & Findings

| Metric | Feed‑forward NN | Proposed SNN |
| --- | --- | --- |
| Lumped resistance error | 25.8 % | 10.2 % |
| Energy per inference (µJ) | ~1.5 | ~0.005 (≈ 270× reduction) |
| Spike sparsity | N/A | 93 % |
| Fault detection (spike‑rate jump) | N/A | +5.5 pp on abrupt fault |
| Parameter tolerance compliance | Outside ±10 % | Inside ±10 % |

The SNN not only meets the accuracy required for manufacturing tolerances but also provides a clear, event‑driven signal when a fault occurs, which can be used for immediate protective actions.
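An on-device trigger built on that spike-rate jump might look like the following sketch. The two-window detector, window length, and synthetic spike statistics are assumptions; only the percentage-point threshold mirrors the figure reported in the paper.

```python
import numpy as np

def spike_rate_fault(spike_counts, n_neurons, window=50, jump_pp=5.5):
    """Flag the first time step where the windowed spike rate jumps by
    more than `jump_pp` percentage points over the preceding window.

    spike_counts: spikes fired per time step across the whole network.
    Returns the index of the detected fault, or None if none is found.
    """
    rate = 100.0 * np.asarray(spike_counts, dtype=float) / n_neurons
    for t in range(2 * window, len(rate)):
        prev = rate[t - 2 * window: t - window].mean()
        curr = rate[t - window: t].mean()
        if curr - prev > jump_pp:
            return t
    return None

# Synthetic example: ~5 % baseline spike rate, abrupt fault raises it to ~15 %.
rng = np.random.default_rng(1)
n = 64
counts = np.concatenate([
    rng.binomial(n, 0.05, size=300),   # healthy operation
    rng.binomial(n, 0.15, size=300),   # abrupt fault
])
fault_at = spike_rate_fault(counts, n)
```

A comparison this simple compiles to a handful of operations, which is why the authors can treat the spike-rate jump as a low-latency, on-chip fault signal.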

Practical Implications

  • Always‑On Edge Monitoring: Power‑electronics manufacturers can embed a tiny neuromorphic module inside converters (e.g., in laptops, EV chargers) to continuously assess health without draining the system battery.
  • Reduced BOM Cost: By using existing neuromorphic ASICs (Loihi 2, Akida) or even low‑power FPGAs that emulate LIF dynamics, the need for separate high‑power MCUs or GPUs disappears.
  • Predictive Maintenance Pipelines: The sparse spike stream can be directly fed into cloud‑based analytics or on‑device alert logic, enabling predictive replacement schedules.
  • Scalable to Other Converters: The hybrid SNN‑ODE framework is generic; swapping the ODE model lets developers apply the same pipeline to boost converters, inverters, or motor drives.
  • Event‑Driven Fault Isolation: The spike‑rate jump provides a binary, low‑latency trigger for safety circuits, simplifying firmware design for fault handling.
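The "swap the ODE model" point can be made concrete by factoring the integrator out of the topology: only the derivative function changes between converter types. The averaged equations and component values below are illustrative assumptions, not the paper's models.

```python
import numpy as np

def simulate(deriv, x0, dt=1e-6, steps=2000):
    """Generic forward-Euler integrator; swapping `deriv` swaps the topology.

    State x = (inductor current i_L, output voltage v_o); records v_o.
    """
    x = np.array(x0, dtype=float)
    trace = np.empty(steps)
    for k in range(steps):
        x += dt * np.array(deriv(x))
        trace[k] = x[1]
    return trace

def buck(L=47e-6, C=100e-6, R_s=0.05, v_in=12.0, d=0.5, r_load=10.0):
    # Averaged buck-converter state equations.
    return lambda x: ((d * v_in - x[1] - R_s * x[0]) / L,
                      (x[0] - x[1] / r_load) / C)

def boost(L=47e-6, C=100e-6, R_s=0.05, v_in=5.0, d=0.5, r_load=10.0):
    # Averaged boost-converter state equations; only this function differs.
    return lambda x: ((v_in - (1 - d) * x[1] - R_s * x[0]) / L,
                      ((1 - d) * x[0] - x[1] / r_load) / C)

v_buck = simulate(buck(), x0=(0.0, 0.0))
v_boost = simulate(boost(), x0=(0.0, 0.0))
```

The same SNN training pipeline would then be reused unchanged, with the new derivative plugged into the differentiable solver.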

Limitations & Future Work

  • Simulation‑Centric Validation: Experiments were performed on simulated EMI‑corrupted data; real‑world hardware tests are needed to confirm robustness against sensor noise and temperature drift.
  • Model Generalization: The ODE solver is tightly coupled to the buck‑converter topology; extending to more complex multi‑phase converters will require richer physics models.
  • Training Overhead: Decoupling the physics loss reduces inference cost but still demands a differentiable ODE solver during training, which can be computationally intensive. Future work could explore surrogate physics losses or meta‑learning to speed up training.
  • Hardware Portability: While the paper targets Loihi 2 and Akida, mapping to other neuromorphic platforms (e.g., Intel’s upcoming Flex, ASIC‑level SNNs) may need additional quantization and timing adjustments.

Overall, the study demonstrates that spiking neural networks, when paired with physics‑aware training, can deliver accurate, ultra‑low‑power health monitoring for power converters—opening a practical path for always‑on intelligent power‑electronics.

Authors

  • Hyeongmeen Baik
  • Hamed Poursiami
  • Maryam Parsa
  • Jinia Roy

Paper Information

  • arXiv ID: 2604.15714v1
  • Categories: cs.NE, cs.LG, eess.SY
  • Published: April 17, 2026