[Paper] Physics-Informed Neural Networks for Device and Circuit Modeling: A Case Study of NeuroSPICE

Published: December 29, 2025 at 12:28 PM EST
4 min read

Source: arXiv - 2512.23624v1

Overview

The paper introduces NeuroSPICE, a framework that replaces classic SPICE numerical solvers with physics‑informed neural networks (PINNs) for simulating electronic devices and circuits. By embedding the circuit’s differential‑algebraic equations (DAEs) directly into a neural network’s loss function, NeuroSPICE produces waveforms whose exact time derivatives are available through automatic differentiation, opening new pathways for design‑space exploration and inverse‑problem solving.

Key Contributions

  • PINN‑based circuit solver: Formulates the circuit DAE residual as a loss that is minimized via back‑propagation, eliminating the need for traditional time‑stepping schemes.
  • Analytical time‑domain waveforms: The network outputs closed‑form expressions for voltages/currents, providing exact temporal derivatives for downstream tasks.
  • Surrogate modeling for optimization: Demonstrates how the trained PINN can act as a fast, differentiable surrogate for device‑level and circuit‑level design optimization.
  • Support for emerging, highly nonlinear devices: Shows feasibility on ferroelectric memory cells, which are challenging for conventional SPICE due to strong non‑linearity and hysteresis.
  • Open‑source case study (NeuroSPICE): Provides a reproducible implementation that can be extended to other device models and circuit topologies.

Methodology

  1. Circuit Formulation: The authors start from the standard Modified Nodal Analysis (MNA) representation, which yields a set of DAEs describing the circuit dynamics.
  2. Neural Network Architecture: A fully‑connected feed‑forward network takes time t as input and outputs the vector of node voltages and branch currents.
  3. Physics‑Informed Loss
    • The network’s automatic‑differentiation engine computes exact time derivatives of its outputs.
    • These derivatives are substituted back into the DAEs, producing a residual vector.
    • The loss is the mean‑squared residual across a set of collocation points (sampled times).
  4. Training Loop: Using stochastic gradient descent (Adam), the network parameters are updated to drive the residual toward zero. No labeled simulation data are required; only the governing equations are needed (see the sketch after this list).
  5. Surrogate Use: Once trained, the network can be queried at arbitrary times, and its differentiable nature enables gradient‑based optimization or inverse design (e.g., finding device parameters that achieve a target waveform).
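
To make the loop concrete, here is a minimal PyTorch sketch of steps 2–4 for the paper’s simplest benchmark, an RC low‑pass filter driven by a unit step. The network size, learning rate, component values, and input drive are illustrative assumptions, not the authors’ exact configuration.

```python
import torch

# Governing ODE for an RC low-pass filter driven by a unit step:
#   RC * dv/dt + v - v_in(t) = 0,  with v(0) = 0.
R, C = 1e3, 1e-6              # 1 kOhm, 1 uF -> tau = 1 ms (illustrative values)
tau = R * C

net = torch.nn.Sequential(    # small MLP mapping time t -> node voltage v(t)
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def v_in(t):                  # unit-step input drive (assumed for this sketch)
    return torch.ones_like(t)

for epoch in range(5000):
    # Collocation points: random times in [0, 5*tau]
    t = 5 * tau * torch.rand(256, 1, requires_grad=True)
    v = net(t)
    # Exact time derivative of the network output via automatic differentiation
    dv_dt = torch.autograd.grad(v, t, torch.ones_like(v), create_graph=True)[0]
    residual = tau * dv_dt + v - v_in(t)      # ODE residual at collocation points
    ic = net(torch.zeros(1, 1))               # initial-condition penalty, v(0) = 0
    loss = residual.pow(2).mean() + ic.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# After training, net(t) approximates v(t) = 1 - exp(-t/tau) at arbitrary t.
```

The same pattern generalizes to a full MNA system: the network outputs the vector of node voltages and branch currents, and the residual stacks all DAE rows.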

Results & Findings

| Benchmark | SPICE (reference) | NeuroSPICE (trained PINN) | Observations |
| --- | --- | --- | --- |
| Simple RC low‑pass filter | Accurate, <1 ms runtime | Comparable waveform shape; ~10× slower during training, but inference <0.1 ms | Accuracy on par after convergence; training cost is the main overhead |
| Ferroelectric memory cell (nonlinear hysteresis) | Convergence issues; requires tiny timesteps | Stable training; captures the hysteresis loop accurately | PINN handles strong nonlinearity without fiddling with solver tolerances |
| Design‑space sweep (device capacitance) | Repeated SPICE runs needed | Single trained PINN reused for a gradient‑based sweep; 5–10× speed‑up | Demonstrates the surrogate advantage |

Overall, NeuroSPICE does not beat SPICE in raw simulation speed or out‑of‑the‑box accuracy, but it provides exact analytical waveforms and a differentiable surrogate that can be reused across many design iterations.

Practical Implications

  • Rapid Design Optimization: Engineers can embed the trained PINN into a gradient‑descent loop to tune device parameters (e.g., threshold voltage, capacitance) without re‑running a full SPICE simulation each iteration.
  • Inverse Modeling & Parameter Extraction: Given a measured waveform, back‑propagation through the PINN can infer the underlying device characteristics, a valuable tool for characterization labs (see the sketch after this list).
  • Modeling Emerging Devices: For novel components (memristors, ferroelectric FETs, quantum‑dot devices) where SPICE models are immature, a physics‑based PINN can be built directly from the governing equations, accelerating prototyping.
  • Hardware‑Accelerated Simulation: Because the inference step is just a forward pass through a neural net, it can be offloaded to GPUs, TPUs, or even edge ASICs, enabling real‑time circuit emulation in hardware‑in‑the‑loop testing.
  • Educational & Research Use: The analytical nature of the output makes it easier to visualize and differentiate circuit behavior, aiding teaching and exploratory research.
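
As a concrete illustration of the inverse‑modeling use case, the sketch below jointly trains a PINN and an unknown RC time constant against both the physics residual and a measured step response. The "measurement" is synthesized here so the example is self‑contained, and the log‑parameterization of the time constant is a choice of this sketch, not something taken from the paper.

```python
import torch

# Hypothetical measured samples (t_i, v_i) of the node voltage; in practice
# these would come from the lab. Here we synthesize them with tau = 1 ms.
t_meas = torch.linspace(0, 5e-3, 50).reshape(-1, 1)
v_meas = 1 - torch.exp(-t_meas / 1e-3)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
log_tau = torch.nn.Parameter(torch.tensor(-6.0))   # unknown tau, log-parameterized
opt = torch.optim.Adam(list(net.parameters()) + [log_tau], lr=1e-3)

for epoch in range(5000):
    tau = log_tau.exp()
    # Physics residual at random collocation points (unit-step drive assumed)
    t = 5e-3 * torch.rand(256, 1, requires_grad=True)
    v = net(t)
    dv_dt = torch.autograd.grad(v, t, torch.ones_like(v), create_graph=True)[0]
    phys = (tau * dv_dt + v - 1.0).pow(2).mean()
    # Data misfit against the measured waveform
    data = (net(t_meas) - v_meas).pow(2).mean()
    loss = phys + data
    opt.zero_grad(); loss.backward(); opt.step()

print(f"extracted tau = {log_tau.exp().item():.2e} s")  # should approach 1e-3
```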

Limitations & Future Work

  • Training Overhead: Converging the PINN to SPICE‑level accuracy can require thousands of epochs, making the upfront cost higher than a one‑off SPICE run.
  • Scalability: The study focuses on small‑to‑moderate sized circuits; extending to large analog/RF blocks may demand more sophisticated architectures (e.g., graph neural networks) or domain decomposition.
  • Accuracy Guarantees: While the loss enforces the governing equations, numerical errors can still accumulate, especially near stiff regions or discontinuities. Formal error bounds were not provided.
  • Future Directions: The authors suggest exploring adaptive collocation strategies (one possible realization is sketched below), hybrid PINN‑SPICE solvers (using the PINN as a surrogate only where SPICE struggles), and integrating learned device models into standard EDA toolchains.
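
The paper only names adaptive collocation as a direction; one common realization, shown as a hypothetical sketch below, is residual‑based resampling: score a large pool of candidate times with the current DAE residual and keep the points where the network violates the physics the most. The RC residual from the earlier sketch is reused for illustration.

```python
import torch

def resample_collocation(net, tau, n_candidates=2048, n_keep=256, t_max=5e-3):
    """Residual-based adaptive collocation (illustrative, not from the paper):
    evaluate the residual on a large candidate pool and keep the times where
    the current network violates the governing equation the most."""
    t = t_max * torch.rand(n_candidates, 1, requires_grad=True)
    v = net(t)
    dv_dt = torch.autograd.grad(v, t, torch.ones_like(v))[0]
    residual = (tau * dv_dt + v - 1.0).abs().squeeze()   # RC residual, unit step
    idx = torch.topk(residual, n_keep).indices           # hardest points
    return t.detach()[idx]                               # next training batch
```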

Authors

  • Chien‑Ting Tung
  • Chenming Hu

Paper Information

  • arXiv ID: 2512.23624v1
  • Categories: cs.AI, physics.app-ph
  • Published: December 29, 2025