[Paper] Amortized Inference of Neuron Parameters on Analog Neuromorphic Hardware

Published: February 11, 2026 at 06:49 AM EST
4 min read
Source: arXiv - 2602.10763v1

Overview

The paper presents a simulation‑based inference (SBI) pipeline that can quickly estimate the biophysical parameters of analog neuromorphic neurons—specifically the adaptive exponential integrate‑and‑fire (AdEx) model used on the BrainScaleS‑2 hardware platform. By training a neural density estimator once and re‑using it (amortized inference), the authors show that developers can obtain reliable posterior distributions for seven key neuron parameters without running costly per‑experiment simulations.

Key Contributions

  • Amortized SBI for analog neuromorphic hardware – a single trained model can infer parameters for any new observation in milliseconds.
  • Binary classifier pre‑filter – discards parameter settings that produce unrealistic spike counts, dramatically shrinking the search space.
  • Comparison of two density estimators:
    1. Hand‑crafted summary statistics (e.g., spike count, inter‑spike interval moments).
    2. Learned summary network jointly trained with the density estimator.
  • Demonstrated superior posterior quality with the learned summary network, yielding more accurate membrane‑potential dynamics.
  • Empirical validation on the BrainScaleS‑2 analog substrate, confirming that amortized inference can replace exhaustive parameter sweeps.
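The core idea of amortization — pay the training cost once, then answer every new query with a single cheap evaluation — can be illustrated with a toy linear-Gaussian stand-in. This is only a conceptual sketch: the paper trains a normalizing-flow density estimator on AdEx simulations, whereas here a least-squares regression plays the role of the amortized posterior-mean map, and the one-line `simulate` function is a placeholder for the neuron simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta):
    """Toy stand-in for the AdEx simulator: scalar observation from 7 parameters."""
    A = np.arange(1, 8, dtype=float)  # fixed "physics" (assumption)
    return A @ theta + rng.normal(0, 0.1)

# "Training" phase, run once: fit an amortized map from observations to parameters.
thetas = rng.uniform(0, 1, size=(5000, 7))
xs = np.array([simulate(t) for t in thetas])
X = np.column_stack([xs, np.ones_like(xs)])
W, *_ = np.linalg.lstsq(X, thetas, rcond=None)  # least-squares posterior-mean map

def posterior_mean(x_obs):
    """Amortized inference: one matrix-vector product per new observation."""
    return np.array([x_obs, 1.0]) @ W
```

After the one-time fit, `posterior_mean` costs microseconds per observation — the same train-once/query-many pattern that lets the paper's flow-based estimator return posteriors in milliseconds.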

Methodology

  1. Simulation data generation – The authors simulated the AdEx neuron model across a wide range of the seven parameters, recording membrane potentials and spike trains for each configuration.
  2. Binary classification filter – A lightweight neural classifier was trained to predict whether a given parameter set would generate a “moderate” spike count (the regime of interest). Only the accepted simulations were kept for inference.
  3. Neural density estimation – Two approaches were explored:
    • Hand‑crafted summaries: a fixed vector of statistics (mean firing rate, variance of inter‑spike intervals, etc.) fed into a normalizing‑flow based density estimator.
    • Learned summaries: a small convolutional/temporal network that ingests the raw voltage trace and learns a compact representation jointly with the density estimator.
  4. Amortized inference – The density estimator is trained to map observed traces (or their summaries) to a posterior distribution over the seven parameters. At test time, inference is a single forward pass.
  5. Posterior predictive checks – Samples drawn from the posterior are fed back into the neuron simulator to generate synthetic traces, which are then compared to the original observation.
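The simulation stage (step 1) can be sketched with a forward-Euler integration of the AdEx model. This is an illustrative parameterization, not the paper's: the seven-parameter vector chosen here (leak conductance, leak potential, threshold, slope factor, and the three adaptation parameters), the fixed capacitance, and all numeric values are assumptions for the sketch; the hardware mapping on BrainScaleS-2 differs.

```python
import numpy as np

def simulate_adex(theta, I_ext=0.5e-9, T=0.2, dt=1e-5):
    """Forward-Euler AdEx simulation (illustrative parameter set, SI units).

    theta = (g_L, E_L, V_T, Delta_T, a, tau_w, b)
    """
    g_L, E_L, V_T, Delta_T, a, tau_w, b = theta
    C = 200e-12                        # fixed membrane capacitance (assumption)
    V_peak, V_reset = -30e-3, -65e-3   # spike detection and reset (assumption)
    n = int(T / dt)
    V = np.full(n, E_L)
    w = 0.0
    spikes = []
    for t in range(1, n):
        # Exponential spike-initiation term, clipped to avoid overflow.
        exp_term = g_L * Delta_T * np.exp(min((V[t-1] - V_T) / Delta_T, 20.0))
        dV = (-g_L * (V[t-1] - E_L) + exp_term - w + I_ext) / C
        dw = (a * (V[t-1] - E_L) - w) / tau_w
        V[t] = V[t-1] + dt * dV
        w += dt * dw
        if V[t] >= V_peak:             # spike: reset voltage, bump adaptation
            V[t] = V_reset
            w += b
            spikes.append(t * dt)
    return V, np.array(spikes)

# One draw from the parameter space: membrane trace + spike train.
theta = (10e-9, -65e-3, -50e-3, 2e-3, 2e-9, 100e-3, 40e-12)
V, spikes = simulate_adex(theta)
```

The resulting spike count per trace is exactly the quantity the binary pre-filter (step 2) classifies, so unrealistic parameter draws can be rejected before the expensive density-estimation stage.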

Results & Findings

| Aspect | Hand‑crafted summaries | Learned summary network |
|---|---|---|
| Posterior focus | Broad, multimodal | Tight, unimodal around true values |
| Predictive trace fidelity | Captures spike‑count statistics but deviates in fine‑grained voltage dynamics | Replicates both spike statistics and sub‑threshold membrane fluctuations |
| Calibration | Slight bias, over‑confident intervals | Reduced bias, better‑calibrated credible intervals |
| Inference speed | ~5 ms per observation | ~3 ms per observation (network overhead negligible) |

Overall, the learned summary network produced posterior samples whose simulated traces matched the target observations much more closely, confirming that the amortized SBI framework can recover the underlying neuron parameters with high fidelity.
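For concreteness, a hand-crafted summary vector of the kind used as the baseline can be computed in a few lines. The exact statistics here (spike count, mean rate, first two inter-spike-interval moments) are a representative guess at the paper's fixed summaries, not their verified definition; the point is that such a vector discards the sub-threshold voltage shape, which is where the learned summary network gains its edge.

```python
import numpy as np

def handcrafted_summaries(spike_times, T):
    """Fixed summary vector: spike count, mean firing rate, ISI mean, ISI variance."""
    n = len(spike_times)
    if n < 2:
        # Too few spikes for interval statistics; pad with zeros.
        return np.array([n, n / T, 0.0, 0.0])
    isi = np.diff(spike_times)
    return np.array([n, n / T, isi.mean(), isi.var()])

# Four spikes in a 100 ms window.
s = handcrafted_summaries(np.array([0.01, 0.03, 0.05, 0.08]), T=0.1)
```

A learned summary network would instead ingest the full voltage trace and optimize its representation jointly with the density estimator, rather than committing to a fixed statistic set up front.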

Practical Implications

  • Rapid prototyping – Engineers can now tune analog neuromorphic circuits on‑the‑fly, swapping out parameter sets in milliseconds instead of hours of manual calibration.
  • Scalable hardware deployment – When scaling BrainScaleS‑2 or similar platforms to thousands of neurons, the amortized approach eliminates the need for per‑neuron exhaustive sweeps, saving both time and silicon resources.
  • Integration into toolchains – The method can be wrapped as a Python library (e.g., using sbi or torch), allowing developers to feed recorded voltage traces directly and obtain posterior samples for downstream tasks such as model‑based control or adaptive learning.
  • Cross‑platform applicability – Although demonstrated on BrainScaleS‑2, the pipeline is hardware‑agnostic; any analog neuromorphic substrate that can be simulated will benefit from the same amortized inference strategy.
  • Facilitates hybrid systems – Precise parameter estimates enable tighter coupling between analog neurons and digital learning algorithms (e.g., back‑propagation through time on a mixed‑signal system).

Limitations & Future Work

  • Bias & miscalibration – Both estimators exhibited some systematic bias, especially near the edges of the admissible parameter range; further calibration techniques (e.g., importance weighting) are needed.
  • Scope of neuron model – The study focused on the AdEx model; extending to more complex multi‑compartment or conductance‑based models may require richer summary networks.
  • Hardware noise – Real‑world analog noise and drift were only partially captured in the simulations; incorporating online adaptation could improve robustness.
  • Scalability of the classifier – The binary filter works well for moderate spike‑count regimes; future work should explore multi‑class or regression‑based filters for broader operating regimes.
  • Open‑source release – Providing the trained density estimator and summary network as a reusable package would accelerate adoption across the neuromorphic community.

Authors

  • Jakob Kaiser
  • Eric Müller
  • Johannes Schemmel

Paper Information

  • arXiv ID: 2602.10763v1
  • Categories: cs.NE
  • Published: February 11, 2026