[Paper] The Adaptive Vekua Cascade: A Differentiable Spectral-Analytic Solver for Physics-Informed Representation
Source: arXiv - 2512.11776v1
Overview
Vladimer Khasia’s paper introduces the Adaptive Vekua Cascade (AVC), a novel neural‑network architecture that blends deep learning with classical spectral methods to represent physical fields. By separating the geometry‑learning step from the function‑approximation step, AVC overcomes the notorious “spectral bias” of coordinate‑based networks and slashes the massive parameter counts that plague high‑dimensional grid‑based models.
Key Contributions
- Hybrid architecture that learns a diffeomorphic warp of the physical domain with a deep net, then solves for spectral coefficients analytically on the warped (latent) manifold.
- Differentiable linear solver replaces the usual output‑layer gradient descent, yielding closed‑form optimal coefficients during the forward pass.
- Spectral‑analytic basis built from generalized Vekua (analytic) functions, enabling high‑frequency detail capture without the usual bias.
- Massive parameter reduction (e.g., 840 params vs. >4 M for a 3‑D grid) while preserving or improving accuracy.
- Empirical validation on five demanding physics benchmarks, including Helmholtz wave propagation, sparse medical imaging, and 3‑D unsteady Navier‑Stokes turbulence.
- Open‑source implementation released under a permissive license (GitHub: VladimerKhasia/vecua).
Methodology
- Domain Warping – A conventional multilayer perceptron (MLP) learns a smooth, invertible mapping $\phi: \Omega \rightarrow \tilde{\Omega}$. This "warps" the original physical domain into a latent space where the solution behaves more like a low‑frequency, analytically tractable field.
- Spectral Representation – In the warped space, the target field $u(\tilde{x})$ is expressed as a linear combination $u(\tilde{x}) \approx \sum_k c_k \psi_k(\tilde{x})$ of generalized analytic (Vekua) functions $\{\psi_k(\tilde{x})\}$. These functions form a complete basis for a wide class of PDE solutions, especially those with oscillatory behavior.
- Differentiable Solver Layer – Instead of learning the coefficients $\{c_k\}$ via back‑propagation, AVC assembles a linear system derived from the governing PDE (e.g., Helmholtz, Navier‑Stokes) and solves it analytically (e.g., via LU decomposition). The solver is wrapped in an autograd‑compatible operation, so gradients flow back to the warping network (see the sketch after this list).
- Training Loop – The loss is typically a physics‑informed residual (e.g., PDE residual, boundary condition mismatch) plus optional data terms. Because the coefficients are optimal at each forward pass, the optimizer only needs to adjust the warping net, dramatically speeding convergence.
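The forward pass can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch implementation, not the paper's released code: `WarpNet`, the sin/cos `basis` (a stand‑in for the actual Vekua basis), and the ridge‑regularized normal‑equation solve are all assumptions chosen to keep the example self‑contained.

```python
import torch
import torch.nn as nn

class WarpNet(nn.Module):
    """Small MLP that learns the smooth warp phi: Omega -> latent domain."""
    def __init__(self, dim=2, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, dim),
        )

    def forward(self, x):
        # Residual form keeps the map near the identity, which helps invertibility.
        return x + self.net(x)

def basis(z, n_modes=8):
    """Spectral features on the warped coordinates.

    Plain sin/cos modes stand in for the paper's generalized Vekua basis.
    z: (N, d) -> design-matrix rows of shape (N, 2 * d * n_modes).
    """
    k = torch.arange(1, n_modes + 1, dtype=z.dtype, device=z.device)
    phase = z.unsqueeze(-1) * k                        # (N, d, n_modes)
    return torch.cat([phase.sin(), phase.cos()], dim=-1).flatten(1)

def forward_field(warp, x, u_target):
    """Warp the points, then solve for the spectral coefficients in closed form."""
    z = warp(x)                                        # (N, d) latent coordinates
    Psi = basis(z)                                     # (N, K) basis evaluations
    # Ridge-regularized normal equations; torch.linalg.solve is differentiable,
    # so gradients flow through the analytic solve back into the warp network.
    K = Psi.shape[1]
    A = Psi.T @ Psi + 1e-6 * torch.eye(K, dtype=Psi.dtype, device=Psi.device)
    coeffs = torch.linalg.solve(A, Psi.T @ u_target.unsqueeze(-1))  # (K, 1)
    return (Psi @ coeffs).squeeze(-1), coeffs
```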
The whole pipeline is end‑to‑end differentiable, yet the heavy lifting of spectral coefficient estimation is performed analytically, sidestepping the “spectral bias” that plagues pure MLPs.
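Under the same assumptions, a training loop then updates only the warp network. The sketch below fits a synthetic high‑frequency field with a plain MSE loss where the paper would use a physics‑informed PDE residual:

```python
warp = WarpNet(dim=2)
opt = torch.optim.Adam(warp.parameters(), lr=1e-3)

# Synthetic high-frequency target on random collocation points in [-1, 1]^2.
x = torch.rand(1024, 2) * 2 - 1
u_true = torch.sin(8 * torch.pi * x[:, 0]) * torch.cos(8 * torch.pi * x[:, 1])

for step in range(2000):
    u_pred, _ = forward_field(warp, x, u_true)
    loss = ((u_pred - u_true) ** 2).mean()  # stand-in for a PDE residual loss
    opt.zero_grad()
    loss.backward()        # gradients pass through the closed-form solve
    opt.step()
```

Because `forward_field` returns the least‑squares‑optimal coefficients at every step, the only trainable state is the warp MLP, which is what keeps the parameter count in the hundreds.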
Results & Findings
| Benchmark | Error (metric) | Trainable params | Training speedup vs. INR |
|---|---|---|---|
| 3‑D Helmholtz (high‑freq) | 1.2 × 10⁻⁴ (L₂) | 840 | 2.8× |
| Sparse CT reconstruction | 0.018 dB PSNR loss | 1.1 k | 2.5× |
| 3‑D Navier‑Stokes turbulence (Re=10⁴) | 3.4 × 10⁻³ (vorticity) | 1.2 k | 3.1× |
| 2‑D Poisson with discontinuities | 9.7 × 10⁻⁵ | 720 | 2.2× |
| Time‑dependent wave equation | 2.1 × 10⁻⁴ (temporal L₂) | 950 | 2.9× |
Key Takeaways
- Accuracy: AVC matches or surpasses state‑of‑the‑art implicit neural representations (INRs) even on notoriously difficult high‑frequency problems.
- Parameter Efficiency: Orders‑of‑magnitude fewer trainable weights, making the model lightweight enough for edge devices or rapid prototyping.
- Convergence: Because the spectral coefficients are solved exactly each iteration, the optimizer converges 2–3× faster than standard INR training.
Practical Implications
- Real‑time Simulation: The low‑parameter footprint and fast convergence enable on‑the‑fly surrogate models for CFD, acoustics, or electromagnetics, useful in interactive design tools or digital twins.
- Edge‑Device Inference: With models only a few kilobytes in size, AVC can be deployed on microcontrollers for sensor‑fusion tasks (e.g., medical imaging reconstruction directly on portable scanners); a minimal inference sketch follows this list.
- Hybrid PDE Solvers: Engineers can embed AVC as a plug‑in within existing finite‑element or finite‑difference pipelines to accelerate the solution of high‑frequency sub‑problems without sacrificing fidelity.
- Reduced Training Costs: Since only the warping network is learned, training requires far fewer GPU hours and memory, lowering the barrier for small teams to experiment with physics‑informed ML.
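Continuing the illustrative sketch from the Methodology section, surrogate‑style inference reduces to caching the solved coefficients and evaluating the basis at arbitrary query points; everything below inherits the same hypothetical names.

```python
with torch.no_grad():
    _, coeffs = forward_field(warp, x, u_true)   # solve once, cache coefficients
    x_query = torch.rand(16, 2) * 2 - 1          # arbitrary, mesh-free locations
    u_query = basis(warp(x_query)) @ coeffs      # cheap pointwise evaluation
```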
Limitations & Future Work
- Basis Selection: The current Vekua basis is handcrafted for certain PDE classes; extending it to arbitrary operators may need problem‑specific derivations.
- Scalability of Solver: The differentiable linear solver scales cubically with the number of basis functions; for extremely high‑dimensional latent spaces, iterative or preconditioned solvers might be required (a conjugate‑gradient sketch follows this list).
- Generalization to Noisy Data: While the method excels on clean physics‑based losses, robustness to noisy, real‑world measurements (e.g., sensor drift) remains to be thoroughly evaluated.
- Adaptive Basis Enrichment: Future work could explore dynamic addition/removal of basis functions during training to balance expressivity and computational load.
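To make the iterative‑solver direction concrete: a differentiable conjugate‑gradient loop could replace the direct solve in the earlier sketch, assuming the regularized normal‑equation matrix stays symmetric positive definite. This is a hypothetical substitution, not something the paper reports.

```python
def cg_solve(A, b, iters=50, tol=1e-8):
    """Conjugate gradient for SPD systems, built from differentiable tensor ops.

    Each iteration costs O(K^2) (one matrix-vector product) versus the O(K^3)
    direct factorization, and autograd can still trace through the loop.
    """
    x = torch.zeros_like(b)
    r = b - A @ x
    p = r.clone()
    rs = (r * r).sum()
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p * Ap).sum()
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = (r * r).sum()
        if rs_new.sqrt() < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```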
Overall, the Adaptive Vekua Cascade opens a promising pathway toward memory‑efficient, spectrally accurate scientific machine learning—bridging the gap between deep learning flexibility and the rigor of classical analytical solvers.
Authors
- Vladimer Khasia
Paper Information
- arXiv ID: 2512.11776v1
- Categories: cs.LG
- Published: December 12, 2025
- PDF: https://arxiv.org/pdf/2512.11776v1