[Paper] Radiance Meshes for Volumetric Reconstruction

Published: December 3, 2025 at 01:57 PM EST
4 min read
Source: arXiv - 2512.04076v1

Overview

The paper presents radiance meshes, a new way to store and render volumetric radiance fields using Delaunay‑tetrahedral meshes. By leveraging hardware‑friendly triangles, the authors achieve exact, real‑time volume rendering on consumer GPUs, opening the door to interactive view synthesis and downstream 3‑D applications.

Key Contributions

  • Radiance Mesh Representation – Encodes radiance and density in constant‑density tetrahedral cells generated by a Delaunay tetrahedralization.
  • Hardware‑Accelerated Rendering – Introduces a rasterization pipeline (and a ray‑tracing fallback) that evaluates the volume‑rendering integral exactly, outperforming existing NeRF‑style representations at comparable primitive counts.
  • Topology‑Robust Optimization – Uses a Zip‑NeRF‑style backbone to keep the field smooth even when vertex moves cause edge flips in the Delaunay mesh.
  • Real‑Time View Synthesis – Demonstrates interactive novel‑view rendering on standard consumer GPUs (desktop and mobile) without sacrificing visual fidelity.
  • Versatile Downstream Uses – Shows that the tetrahedral structure naturally supports fisheye distortion correction, physics‑based simulation, mesh editing, and direct mesh extraction.

Methodology

  1. Mesh Construction – Starting from a sparse set of 3‑D points (e.g., from multi‑view SfM), the authors compute a Delaunay tetrahedralization, yielding a set of non‑overlapping tetrahedra whose vertices lie on the point cloud (a minimal construction sketch follows this list).
  2. Field Parameterization – Each tetrahedron stores a constant density value and a radiance function defined at its four vertices. Radiance inside the cell is interpolated linearly (barycentric interpolation).
  3. Learning the Parameters – A neural network (the Zip‑NeRF backbone) predicts density and radiance at each vertex. During training, vertex positions are also optimized; when a move triggers an edge flip, the network’s continuous formulation ensures the field does not exhibit sudden jumps.
  4. Exact Volume Rendering
    • Rasterization Path – The tetrahedra are projected as triangles onto the screen. For each pixel, the GPU traverses the intersected tetrahedra in front‑to‑back order, accumulating transmittance and emitted radiance using the classic volume‑rendering equation. Because density is constant per cell, the integral reduces to a closed‑form exponential, enabling a single pass per cell (see the formula and compositing sketch after this list).
    • Ray‑Tracing Path – For platforms that favor ray tracing (e.g., RTX GPUs), the same tetrahedral data structure is traversed with a BVH, again using the exact exponential formulation.
  5. Training Objective – Standard photometric loss (L2 between rendered and ground‑truth images) plus regularizers for smoothness and mesh quality.
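
A minimal sketch of step 1, using SciPy's Qhull‑based Delaunay (an assumption — the paper does not specify its tetrahedralizer — with random points standing in for the SfM cloud):

```python
# Sketch of step 1: Delaunay tetrahedralization of a sparse point cloud.
# Assumption: scipy's Qhull wrapper stands in for whatever tetrahedralizer
# the authors actually use; `points` stands in for the SfM point cloud.
import numpy as np
from scipy.spatial import Delaunay

points = np.random.rand(500, 3)      # placeholder for multi-view SfM points
mesh = Delaunay(points)              # 3-D input -> tetrahedral cells

tets = mesh.simplices                # (M, 4) vertex indices per tetrahedron
print(f"{len(points)} vertices -> {len(tets)} tetrahedra")

# Cell lookup is also handy later for sampling the field at a 3-D point:
cell_id = mesh.find_simplex(np.array([[0.5, 0.5, 0.5]]))  # -1 if outside hull
```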
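
For concreteness, the closed form behind step 4 is the standard volume‑rendering equation with piecewise‑constant density. Writing σ_i for the constant density of the i‑th cell along the ray and Δ_i for the ray‑segment length inside it (radiance is shown here as one value c_i per segment, e.g. the barycentric interpolant at the segment midpoint; the paper's exact handling of linearly varying radiance yields a similar closed form):

```latex
\alpha_i = 1 - e^{-\sigma_i \Delta_i},
\qquad
C = \sum_{i=1}^{K} T_i \,\alpha_i\, c_i,
\qquad
T_i = \prod_{j<i} \left(1 - \alpha_j\right)
    = \exp\!\Big(-\sum_{j<i} \sigma_j \Delta_j\Big)
```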
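
And a hedged sketch of the front‑to‑back accumulation itself, assuming ray/mesh intersection has already produced an ordered list of (entry, exit, cell) segments per ray; all names here are illustrative, not the paper's API:

```python
# Sketch of step 4's per-pixel compositing loop (illustrative names).
import numpy as np

def barycentric_radiance(p, tet, verts, vertex_radiance):
    """Blend the four vertex radiance values of a tetrahedron at point p
    using barycentric weights (solved as a 4x4 linear system)."""
    v = verts[tet]                              # (4, 3) corner positions
    A = np.vstack([v.T, np.ones(4)])            # rows: x, y, z, sum-to-one
    w = np.linalg.solve(A, np.append(p, 1.0))   # barycentric coordinates
    return w @ vertex_radiance[tet]             # (4,) @ (4, 3) -> (3,)

def composite_ray(segments, densities, vertex_radiance, tets, verts,
                  ray_o, ray_d):
    """Exact transmittance for piecewise-constant density:
    per cell, alpha = 1 - exp(-sigma * segment_length)."""
    T, color = 1.0, np.zeros(3)
    for t_in, t_out, ti in segments:            # front-to-back order
        alpha = 1.0 - np.exp(-densities[ti] * (t_out - t_in))
        mid = ray_o + 0.5 * (t_in + t_out) * ray_d
        c = barycentric_radiance(mid, tets[ti], verts, vertex_radiance)
        color += T * alpha * c
        T *= 1.0 - alpha
        if T < 1e-4:                            # early ray termination
            break
    return color
```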

Results & Findings

  • Speed – On an RTX 3080, the rasterization pipeline renders 4K novel views at >60 fps, beating voxel‑grid and hash‑grid NeRF baselines by 2–4× at the same memory budget.
  • Quality – PSNR/SSIM scores are on par with state‑of‑the‑art instant‑NeRF and Plenoxels, while preserving fine view‑dependent effects (specularities, translucency).
  • Robustness to Topology Changes – Experiments where vertices are aggressively moved (causing many edge flips) show no degradation in rendered quality, confirming the Zip‑NeRF backbone’s continuity.
  • Application Demonstrations – The authors showcase real‑time fisheye lens distortion correction, interactive fluid‑simulation seeding inside the volume, and clean mesh extraction via iso‑surface marching on the tetrahedral grid.

Practical Implications

  • Interactive Content Creation – Artists can edit the underlying point cloud (add, move, delete points) and instantly see updated view synthesis, making radiance meshes a promising tool for AR/VR asset pipelines.
  • Game & Simulation Engines – Because the representation relies on triangles, existing rasterization pipelines (Unity, Unreal) can ingest radiance meshes with minimal custom shader work, enabling volumetric lighting, fog, or translucent objects that update in real time.
  • Edge‑Device Deployment – The rasterization‑only path runs efficiently on mobile GPUs, opening possibilities for on‑device 3‑D scanning apps that stream live novel‑view video without cloud processing.
  • Scientific Visualization – The exact volume‑rendering integral and constant‑density cells simplify coupling with physics solvers (e.g., CFD), allowing researchers to visualize simulation fields alongside learned radiance data.
  • Data Compression – Storing a scene as a relatively small set of tetrahedra (often < 1 M cells) can be far more compact than dense voxel grids, reducing bandwidth for streaming high‑fidelity 3‑D content (a rough size comparison follows this list).
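
A back‑of‑envelope version of that compression claim (all payload sizes here are illustrative assumptions, not the paper's numbers):

```python
# Illustrative storage comparison: dense voxel grid vs. ~1M tetrahedra.
# Assumed payloads: fp32 RGB + density per voxel; fp32 position and RGB per
# mesh vertex; 4 int32 indices + fp32 density per tetrahedron. Real scenes
# would store richer view-dependent radiance (e.g., SH coefficients).
voxel_res = 512
voxel_bytes = voxel_res**3 * 4 * 4            # 512^3 cells x 4 fp32 channels

n_verts, n_tets = 200_000, 1_000_000          # rough Delaunay vertex/tet ratio
mesh_bytes = n_verts * 6 * 4 + n_tets * (4 * 4 + 4)

print(f"voxel grid: {voxel_bytes / 2**30:.1f} GiB")  # 2.0 GiB
print(f"tet mesh:   {mesh_bytes / 2**20:.1f} MiB")   # ~24 MiB
```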

Limitations & Future Work

  • Resolution Bottleneck – While the method scales well with primitive count, extremely high‑frequency details still require a large number of tiny tetrahedra, which can strain GPU memory and slow traversal.
  • Uniform Density per Cell – Assuming constant density inside each tetrahedron limits the ability to model sharp density gradients without further subdivision.
  • Training Complexity – Joint optimization of vertex positions and neural parameters adds bookkeeping (edge‑flip handling) and can increase training time compared with static‑grid methods.
  • Future Directions – The authors suggest adaptive refinement (splitting tetrahedra where error is high), hybrid representations that combine radiance meshes with learned texture maps, and tighter integration with hardware ray‑tracing APIs to further push real‑time performance.

Authors

  • Alexander Mai
  • Trevor Hedstrom
  • George Kopanas
  • Janne Kontkanen
  • Falko Kuester
  • Jonathan T. Barron

Paper Information

  • arXiv ID: 2512.04076v1
  • Categories: cs.GR, cs.CV
  • Published: December 3, 2025
  • PDF: https://arxiv.org/pdf/2512.04076v1