[Paper] DInf-Grid: A Neural Differential Equation Solver with Differentiable Feature Grids
Source: arXiv - 2601.10715v1
Overview
A new paper, DInf-Grid, proposes a fast, differentiable grid‑based representation for solving differential equations (DEs) with neural networks. By marrying the speed of feature‑grid encodings with an infinitely smooth radial‑basis‑function (RBF) interpolator, the authors achieve 5–20× faster training than traditional coordinate‑based MLP solvers while matching their accuracy at a smaller model size.
Key Contributions
- Differentiable Feature Grids: Introduces a grid representation that can be differentiated to any order thanks to RBF interpolation, overcoming the derivative limits of prior grid‑based implicit models (a short derivation follows this list).
- Multi‑Resolution Co‑located Grids: A hierarchical decomposition that captures both low‑frequency trends and high‑frequency details, stabilizing global gradient computation.
- Implicit DE‑Driven Training: The network is trained directly from the governing differential equation (loss = residual of the DE), eliminating the need for ground‑truth data.
- Broad Validation Suite: Demonstrates the approach on Poisson (image reconstruction), Helmholtz (wave propagation), and Kirchhoff‑Love (cloth simulation) problems.
- Speed‑Accuracy Trade‑off: Shows 5–20× speed‑ups over sinusoidal MLP baselines while delivering comparable error metrics and a compact memory footprint.
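To make the first contribution concrete, here is a one‑line version of the smoothness argument, using a generic Gaussian kernel (the symbols $\varepsilon$, $w_i$, and $\mathbf{c}_i$ are illustrative notation, not the paper's):

$$
u(\mathbf{x}) = \sum_i w_i \, e^{-\varepsilon^2 \|\mathbf{x} - \mathbf{c}_i\|^2},
\qquad
\frac{\partial u}{\partial x} = \sum_i -2\varepsilon^2 \,(x - c_{i,x}) \, w_i \, e^{-\varepsilon^2 \|\mathbf{x} - \mathbf{c}_i\|^2},
$$

and likewise for every higher order. By contrast, the bilinear/trilinear interpolation used by earlier feature grids is piecewise linear: its second derivatives vanish inside each cell and are undefined at cell boundaries, which rules out second‑order operators such as $\nabla^2$.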
Methodology
- Feature Grid Construction – The domain is discretized into a set of regular 3‑D (or 2‑D) grids. Each grid cell stores a low‑dimensional feature vector.
- RBF Interpolation – When evaluating the solution at an arbitrary coordinate, the surrounding grid features are blended using a radial basis function (e.g., a Gaussian). Because RBFs are smooth, any derivative of the interpolated field can be computed analytically (see the grid sketch after this list).
- Multi‑Resolution Stack – Several grids at different resolutions are stacked. Coarse grids capture global structure; fine grids add high‑frequency corrections. All grids are aligned (co‑located) so that their contributions can be summed efficiently.
- Loss from the DE – The network's output field $u(\mathbf{x})$ is plugged into the target differential operator (e.g., $\nabla^2 u = f$ for Poisson). The residual $\| \mathcal{L}[u] - f \|^2$ forms the training loss, together with boundary‑condition penalties (see the training‑loop sketch after this list).
- Optimization – A standard stochastic optimizer (Adam) updates the grid feature vectors. No explicit MLP weights are involved, so each iteration is cheap and memory‑light.
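Below is a minimal 2‑D PyTorch sketch of the first three bullets (grid construction, RBF interpolation, and the multi‑resolution stack). Everything here is an illustrative assumption rather than the paper's implementation: the class names, the dense Shepard‑style normalization over all grid nodes, and the `eps` bandwidth are stand‑ins.

```python
# Minimal sketch: learnable 2-D feature grids blended by Gaussian RBFs,
# stacked at several co-located resolutions. Illustrative, not the paper's code.
import torch

class FeatureGrid(torch.nn.Module):
    """One 2-D grid of learnable features, blended by Gaussian RBFs."""
    def __init__(self, res, feat_dim=1, eps=1.5):
        super().__init__()
        self.res, self.eps = res, eps
        self.feats = torch.nn.Parameter(0.01 * torch.randn(res * res, feat_dim))
        ys, xs = torch.meshgrid(torch.linspace(0, 1, res),
                                torch.linspace(0, 1, res), indexing="ij")
        self.register_buffer("centers", torch.stack([xs, ys], -1).reshape(-1, 2))

    def forward(self, x):                       # x: (N, 2) in [0, 1]^2
        # Gaussian weights to every node; C-infinity in x, so any derivative
        # of the blended field exists. This dense O(N * res^2) blend is for
        # clarity only; a real solver would use a compact node neighborhood.
        d2 = ((x[:, None, :] - self.centers[None]) ** 2).sum(-1)
        w = torch.exp(-(self.eps * (self.res - 1)) ** 2 * d2)
        return (w / w.sum(-1, keepdim=True)) @ self.feats      # (N, feat_dim)

class MultiResGrid(torch.nn.Module):
    """Co-located multi-resolution stack: coarse trends plus fine detail."""
    def __init__(self, resolutions=(8, 32, 128), feat_dim=1):
        super().__init__()
        self.levels = torch.nn.ModuleList(
            FeatureGrid(r, feat_dim) for r in resolutions)

    def forward(self, x):
        # Aligned grids let the per-level contributions simply sum; with
        # feat_dim=1 the sum is read directly as the field value u(x).
        return sum(level(x) for level in self.levels)
```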
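And a matching sketch of the last two bullets: the grid features are trained directly from the Poisson residual, with no ground‑truth data. The collocation sampling, loss weight, and the `solve`/`f_demo` helpers are assumptions for illustration; derivatives are taken with autograd here, although the RBF blend also admits the analytic derivatives described above.

```python
# Data-free training sketch for the 2-D Poisson problem: laplacian(u) = f on
# [0,1]^2 with u = 0 on the boundary. Hyperparameters are illustrative.
import torch

def laplacian(u, x):
    """Compute the Laplacian of u at the points x via two nested autograd passes."""
    (g,) = torch.autograd.grad(u.sum(), x, create_graph=True)       # gradient, (N, 2)
    lap = 0.0
    for i in range(x.shape[1]):
        (gi,) = torch.autograd.grad(g[:, i].sum(), x, create_graph=True)
        lap = lap + gi[:, i : i + 1]                                # u_xx + u_yy
    return lap

def solve(model, f, steps=2000, n_int=1024, n_bc=256, bc_weight=10.0):
    """Fit the grid features so the PDE residual and boundary penalty vanish."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)             # features only
    for _ in range(steps):
        x = torch.rand(n_int, 2, requires_grad=True)                # interior points
        residual = ((laplacian(model(x), x) - f(x)) ** 2).mean()    # PDE loss
        xb = torch.rand(n_bc, 2)                                    # boundary points:
        axis = torch.randint(0, 2, (n_bc,))                         # pick an axis,
        xb[torch.arange(n_bc), axis] = torch.randint(0, 2, (n_bc,)).float()  # snap to 0/1
        loss = residual + bc_weight * (model(xb) ** 2).mean()       # Dirichlet penalty
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def f_demo(x):                                                      # illustrative source
    return torch.sin(torch.pi * x[:, :1]) * torch.sin(torch.pi * x[:, 1:])

model = solve(MultiResGrid(), f_demo)
```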
Results & Findings
| Task | Sinusoidal‑MLP Baseline (train time, MSE) | DInf‑Grid (train time, MSE) | Speed‑up | Relative Error |
|---|---|---|---|---|
| Poisson (256×256 image) | 3 min, 0.0012 MSE | 12 s, 0.0013 MSE | ~15× | ≈ 1% |
| Helmholtz (3‑D wave) | 7 min, 0.0045 MSE | 30 s, 0.0047 MSE | ~14× | ≈ 4% |
| Kirchhoff‑Love (cloth) | 5 min, 0.0028 MSE | 18 s, 0.0030 MSE | ~17× | ≈ 7% |
Key Takeaways
- Training time drops from minutes to seconds for typical grid sizes, enabling rapid prototyping.
- Model size shrinks (a few MB of grid features vs. tens of MB for deep MLPs).
- Accuracy remains on par with state‑of‑the‑art coordinate‑based solvers, even on high‑frequency wave fields.
Practical Implications
- Fast Physics‑in‑the‑Loop: Engineers can embed DInf‑Grid solvers into simulation pipelines (e.g., real‑time cloth or fluid pre‑conditioning) without the latency of traditional neural PDE solvers.
- Edge Deployment: The lightweight grid representation fits comfortably on GPUs or even mobile NPUs, opening doors for on‑device scientific inference (e.g., AR apps that need quick wave‑field estimation).
- Data‑Free Training: Since the loss is derived from the governing equations, developers can train models directly from problem specifications, bypassing costly data collection.
- Hybrid Rendering: In graphics, DInf‑Grid can replace expensive Poisson solves in image‑based lighting pipelines, delivering comparable lighting fields in seconds.
- Rapid Design Iteration: Product teams can tweak boundary conditions or source terms and re‑solve instantly, accelerating design cycles for acoustics, optics, or structural analysis (a warm‑start sketch follows this list).
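As a sketch of that last point, reusing the illustrative `solve` and `MultiResGrid` from the Methodology section: because only grid features are trained, the previous solution is a natural warm start when the problem specification changes (the `f_new` below is hypothetical).

```python
# Hypothetical design iteration: swap in a new source term and re-solve,
# warm-starting from the grid features of the previous solve.
def f_new(x):
    return 2.0 * torch.sin(2 * torch.pi * x[:, :1]) * torch.sin(2 * torch.pi * x[:, 1:])

solve(model, f_new, steps=300)   # far fewer steps than the cold start
```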
Limitations & Future Work
- Grid Resolution Dependency: Extremely fine features still require higher‑resolution grids, which increase memory usage. Adaptive or sparse grids could mitigate this.
- Boundary Complexity: Handling highly irregular or moving boundaries needs additional encoding strategies (e.g., signed‑distance fields).
- Scalability to Very High Dimensions: The current formulation is demonstrated up to 3‑D; extending to 4‑D (spatio‑temporal) problems may need hierarchical or factorized grid schemes.
- Theoretical Guarantees: While empirical error is low, formal convergence proofs for the RBF‑grid combination remain an open research direction.
Bottom line: DInf‑Grid shows that you can get the best of both worlds, grid‑level speed and neural‑level flexibility, for differential equation solving. For developers building physics‑aware tools, it offers a practical, high‑performance alternative to heavyweight MLP‑based solvers.
Authors
- Navami Kairanda
- Shanthika Naik
- Marc Habermann
- Avinash Sharma
- Christian Theobalt
- Vladislav Golyanik
Paper Information
- arXiv ID: 2601.10715v1
- Categories: cs.LG
- Published: January 15, 2026