[Paper] Learning to Solve PDEs on Neural Shape Representations

Published: December 24, 2025 at 01:14 PM EST
4 min read

Source: arXiv - 2512.21311v1

Overview

The paper introduces a mesh‑free neural solver that can directly tackle surface partial differential equations (PDEs) on modern neural shape representations (e.g., signed distance fields, occupancy networks, neural radiance fields). By learning a local update operator conditioned on the neural geometry itself, the method eliminates the need to extract a polygonal mesh before solving PDEs, opening the door to truly end‑to‑end pipelines for graphics, simulation, and engineering tasks.

Key Contributions

  • Neural‑domain PDE operator: A learned, locality‑preserving update rule that works on the implicit neural description of a surface, without any intermediate meshing step.
  • One‑shot training, broad generalization: The operator is trained once on a single representative shape and then transfers to unseen shapes with different geometries and topologies.
  • Differentiable pipeline: The solver remains fully differentiable, enabling gradient‑based optimization of downstream tasks (e.g., shape design, inverse problems).
  • Competitive accuracy & speed: Benchmarks on the heat equation and Poisson problem show accuracy on par with classical FEM and a slight edge over the recent closest point method (CPM) baseline, while being several times faster at inference.
  • Unified framework: Works for both neural surfaces (SDFs, occupancy, NeRF‑derived meshes) and traditional triangle meshes, providing the first end‑to‑end solution that bridges the two worlds.

Methodology

  1. Local Neural Feature Extraction – For any query point on the surface, the method samples the underlying neural field (e.g., evaluates the SDF and its gradient) to obtain a low‑dimensional descriptor that captures local geometry.
  2. Conditioned Update Operator – A lightweight neural network (typically an MLP with residual connections) takes the local descriptor and the current PDE field value, and predicts a correction term. This mimics a single iteration of a classic iterative solver (e.g., Jacobi or Gauss‑Seidel) but is learned from data.
  3. Iterative Refinement – Starting from an initial guess (often zero), the operator is applied repeatedly until convergence criteria are met. Because the operator is learned, convergence is typically reached in far fewer iterations than with a hand‑crafted scheme (steps 1–3 are sketched in code after this list).
  4. Training Regime – The network is supervised with ground‑truth solutions obtained from a high‑quality FEM solver on a single training shape. Losses enforce both pointwise accuracy and smoothness of the predicted field (see the training sketch below).
  5. Generalization Mechanism – Since the operator only depends on local geometric descriptors, it naturally adapts to new shapes and even changes in topology (e.g., adding holes) without retraining.
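
A minimal PyTorch sketch of steps 1–3 follows. The paper's exact descriptor, network size, and iteration count are not given in this summary, so the names and dimensions below (`local_features`, `UpdateOperator`, `solve`, a 4‑D value‑plus‑gradient descriptor) are illustrative assumptions, not the authors' API.

```python
import torch

def local_features(sdf, x):
    """Step 1: sample the neural field at query points x (N, 3) and build a
    local descriptor -- here the SDF value and its spatial gradient."""
    x = x.requires_grad_(True)
    d = sdf(x)  # (N, 1) signed distance at each query point
    (g,) = torch.autograd.grad(d.sum(), x, create_graph=True)
    return torch.cat([d, g], dim=-1)  # (N, 4) descriptor

class UpdateOperator(torch.nn.Module):
    """Step 2: a lightweight MLP that maps (descriptor, current field value)
    to a corrected value, mimicking one iteration of a classic solver."""
    def __init__(self, feat_dim=4, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(feat_dim + 1, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, feats, u):
        # residual connection: predict a correction to the current field
        return u + self.net(torch.cat([feats, u], dim=-1))

def solve(sdf, x, op, n_iters=50):
    """Step 3: iterative refinement from a zero initial guess."""
    feats = local_features(sdf, x)
    u = torch.zeros(x.shape[0], 1)
    for _ in range(n_iters):
        u = op(feats, u)
    return u
```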

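A correspondingly simple sketch of the training step (step 4), with the same caveats: `u_fem` is assumed to be a precomputed FEM reference on the single training shape, and the finite‑difference term below is a crude stand‑in for the paper's smoothness loss.

```python
def training_step(op, feats, u_fem, optimizer, n_iters=20, lam=0.1):
    """Step 4: supervise the rolled-out solver against an FEM reference."""
    feats = feats.detach()  # geometry is fixed; only the operator is trained
    u = torch.zeros_like(u_fem)
    for _ in range(n_iters):
        u = op(feats, u)
    loss = torch.mean((u - u_fem) ** 2)                    # pointwise accuracy
    loss = loss + lam * torch.mean((u[1:] - u[:-1]) ** 2)  # smoothness proxy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only local descriptors enter the operator, the trained weights transfer to unseen shapes (step 5) without retraining.
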
Results & Findings

| Benchmark | Metric | Neural Solver | CPM (baseline) | FEM (gold) |
|---|---|---|---|---|
| Heat equation on sphere | L2 error | 0.012 | 0.015 | 0.010 |
| Poisson on sphere | L2 error | 0.018 | 0.022 | 0.016 |
| Real neural assets (SDF & occupancy) | Visual fidelity (PSNR) | 31.4 dB | 30.9 dB | 31.8 dB |
| Inference time per 10k points | ms | 8 | 12 | 45 (CPU FEM) |

  • The learned operator consistently matches or slightly outperforms CPM while staying within a few percent of FEM accuracy.
  • Inference is 5–6× faster than a full FEM solve on the same hardware (8 ms vs. 45 ms per 10k points) and about 1.5× faster than CPM.
  • Qualitative visualizations show smooth temperature/pressure fields on complex neural shapes (e.g., a neural‑generated chair) without any visible artifacts from mesh extraction.

Practical Implications

  • End‑to‑end differentiable simulation: Engineers can now embed heat diffusion, electrostatic potential, or fluid surface pressure directly into neural‑based design loops, enabling gradient‑based shape optimization without costly remeshing (a toy loop is sketched after this list).
  • Rapid prototyping for AR/VR assets: Content creators using neural implicit models can instantly evaluate physical effects (e.g., lighting diffusion, sound propagation) on their assets, accelerating iteration cycles.
  • Reduced memory & preprocessing overhead: Large‑scale scenes that would be prohibitive to mesh (due to billions of triangles) can be processed directly in the neural domain, saving both storage and preprocessing time.
  • Cross‑representation pipelines: Studios that mix traditional meshes with neural assets can now apply a single PDE solver across the entire scene, simplifying pipelines for effects like global illumination or heat‑based deformation.
  • Potential for on‑device inference: The lightweight MLP operator can be run on GPUs or even mobile NPUs, making real‑time PDE‑driven effects feasible in interactive applications.
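
To make the first bullet concrete, here is a toy design loop that reuses `UpdateOperator` and `solve` from the methodology sketch and optimizes a latent shape code by backpropagating through the solver. The decoder, point sampling, and objective are all hypothetical stand‑ins, not anything specified by the paper.

```python
import torch

latent = torch.zeros(1, 64, requires_grad=True)  # hypothetical shape code
decoder = torch.nn.Linear(64 + 3, 1)             # toy latent-conditioned SDF

def sdf(x):
    z = latent.expand(x.shape[0], -1)
    return decoder(torch.cat([z, x], dim=-1))

op = UpdateOperator()                            # from the methodology sketch
opt = torch.optim.Adam([latent], lr=1e-2)

for step in range(100):
    x = torch.rand(256, 3)                       # stand-in for surface samples
    u = solve(sdf, x, op)                        # differentiable PDE solve
    loss = (u.mean() - 0.5) ** 2                 # e.g. match a target mean field
    opt.zero_grad()
    loss.backward()
    opt.step()
```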

Limitations & Future Work

  • Training shape bias: While the operator generalizes well, extreme geometric variations (e.g., highly anisotropic features not seen in the training shape) can degrade accuracy.
  • Boundary condition handling: The current formulation focuses on Dirichlet/Neumann conditions that are easy to encode locally; more complex mixed or time‑varying boundaries need additional mechanisms.
  • Scalability to volumetric PDEs: The method is tailored to surface PDEs; extending it to full 3‑D volumetric domains (e.g., solid mechanics) remains an open challenge.
  • Theoretical convergence guarantees: As a learned iterative scheme, formal proofs of convergence rates are lacking; future work could blend classical numerical analysis with learning‑based updates for stronger guarantees.

Overall, this work bridges a critical gap between the rise of neural implicit geometry and the longstanding need for PDE‑based analysis, offering a practical, fast, and differentiable tool that can be immediately leveraged by developers building the next generation of graphics and simulation systems.

Authors

  • Lilian Welschinger
  • Yilin Liu
  • Zican Wang
  • Niloy Mitra

Paper Information

  • arXiv ID: 2512.21311v1
  • Categories: cs.LG
  • Published: December 24, 2025
  • PDF: https://arxiv.org/pdf/2512.21311v1