[Paper] Physics-Informed Neural Networks for Device and Circuit Modeling: A Case Study of NeuroSPICE
Source: arXiv - 2512.23624v1
Overview
The paper introduces NeuroSPICE, a novel framework that replaces the classic SPICE numerical solvers with physics‑informed neural networks (PINNs) to simulate electronic devices and circuits. By embedding the circuit’s differential‑algebraic equations directly into a neural network’s loss function, NeuroSPICE can generate waveforms and their exact time derivatives, opening new pathways for design‑space exploration and inverse‑problem solving.
Key Contributions
- PINN‑based circuit solver: Formulates the circuit DAE residual as a loss that is minimized via back‑propagation, eliminating the need for traditional time‑stepping schemes.
- Analytical time‑domain waveforms: The trained network is a closed‑form, differentiable function of time, so node voltages, branch currents, and their exact temporal derivatives can be evaluated at any instant for downstream tasks.
- Surrogate modeling for optimization: Demonstrates how the trained PINN can act as a fast, differentiable surrogate for device‑level and circuit‑level design optimization.
- Support for emerging, highly nonlinear devices: Shows feasibility on ferroelectric memory cells, which are challenging for conventional SPICE due to strong non‑linearity and hysteresis.
- Open‑source case study (NeuroSPICE): Provides a reproducible implementation that can be extended to other device models and circuit topologies.
Methodology
- Circuit Formulation: The authors start from the standard Modified Nodal Analysis (MNA) representation, which yields a set of DAEs describing the circuit dynamics (a concrete instance is worked through after this list).
- Neural Network Architecture: A fully‑connected feed‑forward network takes time t as input and outputs the vector of node voltages and branch currents (see the code sketch after this list).
- Physics‑Informed Loss:
  - The network’s automatic‑differentiation engine computes exact time derivatives of its outputs.
  - These derivatives are substituted back into the DAEs, producing a residual vector.
  - The loss is the mean‑squared residual across a set of collocation points (sampled times).
- Training Loop: The Adam optimizer updates the network parameters to drive the residual toward zero. No labeled simulation data are required; only the governing equations enter the loss.
- Surrogate Use: Once trained, the network can be queried at arbitrary times, and its differentiable nature enables gradient‑based optimization or inverse design (e.g., finding device parameters that achieve a target waveform).
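As a concrete instance of this formulation, consider the RC low‑pass benchmark from the results section (the notation here is illustrative, not the paper's). MNA reduces the circuit to a single first‑order equation in the output node voltage,

$$
R C \, \frac{dv_{\mathrm{out}}}{dt} + v_{\mathrm{out}}(t) - v_{\mathrm{in}}(t) = 0, \qquad v_{\mathrm{out}}(0) = 0,
$$

and the physics‑informed loss over collocation times $t_1, \dots, t_N$ becomes

$$
\mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} \left( R C \, \frac{d\hat{v}_{\mathrm{out}}}{dt}(t_i) + \hat{v}_{\mathrm{out}}(t_i) - v_{\mathrm{in}}(t_i) \right)^{2} + \lambda \, \hat{v}_{\mathrm{out}}(0)^{2},
$$

where $\hat{v}_{\mathrm{out}}$ is the network output, its derivative comes from automatic differentiation, and the $\lambda$‑weighted penalty is one common way to impose the initial condition (the paper may enforce it differently).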
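A minimal end‑to‑end sketch of this pipeline in PyTorch, using the RC example above with R = C = 1 so that time is measured in units of the RC constant; the architecture, stimulus, collocation sampling, and learning rate are illustrative assumptions rather than the paper's settings:

```python
import torch
import torch.nn as nn

# Illustrative RC low-pass filter with R = C = 1, i.e. time measured in units
# of the RC constant, so the network sees O(1) inputs.
R, C = 1.0, 1.0
T_END = 5.0  # simulate five time constants

def v_in(t):
    """Driving source: a unit step at t = 0 (an assumed stimulus)."""
    return torch.ones_like(t)

# Fully connected feed-forward network: time t in, node voltage v_out(t) out.
pinn = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def residual(t):
    """DAE residual R*C*dv/dt + v - v_in evaluated at collocation times t."""
    t = t.requires_grad_(True)
    v = pinn(t)
    # Exact time derivative of the network output via automatic differentiation.
    dv_dt = torch.autograd.grad(v, t, grad_outputs=torch.ones_like(v),
                                create_graph=True)[0]
    return R * C * dv_dt + v - v_in(t)

opt = torch.optim.Adam(pinn.parameters(), lr=1e-3)
t0 = torch.zeros(1, 1)  # initial-condition point, v_out(0) = 0

for epoch in range(20_000):
    opt.zero_grad()
    # Fresh random collocation points every step (one common PINN strategy).
    t_col = T_END * torch.rand(256, 1)
    loss = residual(t_col).pow(2).mean() + pinn(t0).pow(2).mean()
    loss.backward()
    opt.step()

# Surrogate use: once trained, query the waveform at arbitrary times.
with torch.no_grad():
    waveform = pinn(torch.linspace(0.0, T_END, 100).unsqueeze(1))
```

The first loss term drives the network toward satisfying the ODE across the whole interval, the second softly pins the initial condition, and no SPICE‑generated training data appear anywhere in the loop.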
Results & Findings
| Benchmark | SPICE (reference) | NeuroSPICE (trained PINN) | Observations |
|---|---|---|---|
| Simple RC low‑pass filter | Accurate, <1 ms runtime | Comparable waveform shape; training takes ~10× longer than the SPICE run, but inference is <0.1 ms | Accuracy on par after convergence; training cost is the main overhead |
| Ferroelectric memory cell (nonlinear hysteresis) | Convergence issues; requires tiny timesteps | Stable training; captures the hysteresis loop accurately | PINN handles the strong nonlinearity without tuning solver tolerances |
| Design‑space sweep (device capacitance) | Repeated SPICE runs needed | Single trained PINN used for gradient‑based sweep, 5‑10× speed‑up | Demonstrates surrogate advantage |
Overall, NeuroSPICE does not beat SPICE in raw simulation speed or out‑of‑the‑box accuracy, but it provides closed‑form, differentiable waveforms with exact derivatives and a surrogate that can be reused across many design iterations.
Practical Implications
- Rapid Design Optimization: Engineers can embed the trained PINN into a gradient‑descent loop to tune device parameters (e.g., threshold voltage, capacitance) without re‑running a full SPICE simulation each iteration.
- Inverse Modeling & Parameter Extraction: Given a measured waveform, back‑propagation through the PINN can infer the underlying device characteristics, a valuable capability for characterization labs (see the sketch after this list).
- Modeling Emerging Devices: For novel components (memristors, ferroelectric FETs, quantum‑dot devices) where SPICE models are immature, a physics‑based PINN can be built directly from the governing equations, accelerating prototyping.
- Hardware‑Accelerated Simulation: Because the inference step is just a forward pass through a neural net, it can be offloaded to GPUs, TPUs, or even edge ASICs, enabling real‑time circuit emulation in hardware‑in‑the‑loop testing.
- Educational & Research Use: The analytical nature of the output makes it easier to visualize and differentiate circuit behavior, aiding teaching and exploratory research.
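A sketch of how such a gradient‑based loop might look, assuming a surrogate that was trained with the device parameter as an extra network input (as the design‑sweep benchmark suggests); the network below is an untrained stand‑in and the "measurement" is placeholder data:

```python
import torch
import torch.nn as nn

# Stand-in for a parameter-conditioned surrogate v_out(t; c); in practice this
# network would already be trained against the circuit's DAE residual.
surrogate = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
surrogate.requires_grad_(False)  # freeze the trained surrogate's weights

# Measured waveform to fit (placeholder: ideal unit-step response for c = 1).
t_meas = torch.linspace(0.0, 5.0, 50).unsqueeze(1)
v_meas = 1.0 - torch.exp(-t_meas)

# The unknown device parameter is the only optimization variable; optimizing
# log(c) keeps the extracted value positive.
log_c = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([log_c], lr=1e-2)

for step in range(2_000):
    opt.zero_grad()
    c = log_c.exp().expand(t_meas.shape[0], 1)
    v_pred = surrogate(torch.cat([t_meas, c], dim=1))
    # The fit error back-propagates through the frozen surrogate into log_c.
    loss = (v_pred - v_meas).pow(2).mean()
    loss.backward()
    opt.step()

print("extracted parameter:", log_c.exp().item())
```

The same loop performs forward design optimization if the fit error is replaced by a design objective evaluated on the surrogate's output.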
Limitations & Future Work
- Training Overhead: Converging the PINN to SPICE‑level accuracy can require thousands of epochs, making the upfront cost higher than a one‑off SPICE run.
- Scalability: The study focuses on small‑to‑moderate sized circuits; extending to large analog/RF blocks may demand more sophisticated architectures (e.g., graph neural networks) or domain decomposition.
- Accuracy Guarantees: While the loss enforces the governing equations, numerical errors can still accumulate, especially near stiff regions or discontinuities. Formal error bounds were not provided.
- Future Directions: The authors suggest exploring adaptive collocation strategies, hybrid PINN‑SPICE solvers (using PINN as a surrogate only where SPICE struggles), and integrating learned device models into standard EDA toolchains.
Authors
- Chien‑Ting Tung
- Chenming Hu
Paper Information
- arXiv ID: 2512.23624v1
- Categories: cs.AI, physics.app-ph
- Published: December 29, 2025