[Paper] QNeRF: Neural Radiance Fields on a Simulated Gate-Based Quantum Computer

Published: January 8, 2026 at 01:59 PM EST
4 min read
Source: arXiv - 2601.05250v1

Overview

The paper introduces QNeRF, the first hybrid quantum‑classical architecture that brings Neural Radiance Fields (NeRF) into the realm of quantum computing. By encoding spatial coordinates and view directions into parameterised quantum circuits, QNeRF achieves comparable (or better) novel‑view synthesis quality while using under 50 % of the trainable parameters of a conventional NeRF. This work demonstrates that quantum machine‑learning (QML) can be a practical tool for mid‑scale computer‑vision tasks, not just a theoretical curiosity.

Key Contributions

  • Hybrid Quantum‑Classical NeRF – First model that combines parameterised quantum circuits (PQCs) with a classical decoder for novel‑view synthesis.
  • Two architectural variants:
    • Full QNeRF – Exploits the full Hilbert space (all amplitudes) to maximise expressive power.
    • Dual‑Branch QNeRF – Splits the quantum state preparation into a spatial branch and a view‑direction branch, injecting a task‑specific inductive bias that dramatically reduces circuit depth (see the sketch after this list).
  • Parameter efficiency – Both variants achieve similar or higher rendering quality than state‑of‑the‑art classical NeRFs while using < 0.5× the number of trainable parameters.
  • Empirical validation on moderate‑resolution datasets – Experiments on synthetic and real‑world scenes (e.g., Blender, LLFF) show that QNeRF matches or exceeds PSNR/SSIM scores of classical baselines.
  • Hardware‑friendly design – The Dual‑Branch version is crafted to be compatible with near‑term gate‑based quantum processors (limited qubits, shallow depth).
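The Dual‑Branch split is the design choice that makes the model hardware‑friendly, so it is worth seeing concretely. Below is a minimal Qiskit sketch of the two‑register state preparation under our own assumptions (3 spatial qubits, 2 view qubits, a single trainable rotation layer per branch, two cross‑branch CNOTs); the paper's exact register sizes and gate layout are not given in this summary.

```python
import numpy as np
from qiskit import QuantumCircuit

def dual_branch_circuit(xyz, view, w_spatial, w_view):
    """Two-branch state preparation: separate spatial and view registers,
    per-branch trainable rotations, and late cross-branch entanglement."""
    qc = QuantumCircuit(5)            # qubits 0-2: spatial, 3-4: view (assumed split)
    for q, val in enumerate(xyz):     # encode normalised (x, y, z) as rotation angles
        qc.ry(np.pi * val, q)
    for q, val in enumerate(view):    # encode normalised (theta, phi) on the view register
        qc.ry(np.pi * val, 3 + q)
    for q in range(3):                # spatial-branch trainable layer
        qc.rz(w_spatial[q], q)
    for q in range(2):                # view-branch trainable layer
        qc.rz(w_view[q], 3 + q)
    qc.cx(2, 3)                       # entangle the two branches only at the end,
    qc.cx(1, 4)                       # keeping the overall circuit depth shallow
    return qc

# Example: qc = dual_branch_circuit([0.1, -0.4, 0.7], [0.2, -0.9],
#                                   np.zeros(3), np.zeros(2))
```

Because each branch stays shallow until the final entangling gates, the layout maps naturally onto devices with few qubits and short coherence windows, which is exactly the inductive‑bias argument made above.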

Methodology

  1. Input encoding

    • 3‑D spatial coordinates (x, y, z) and 2‑D view direction (θ, φ) are normalised and embedded into rotation angles of single‑qubit gates (e.g., R_y, R_z).
    • In Full QNeRF the entire concatenated vector is encoded into a single quantum register; in Dual‑Branch QNeRF two separate registers are prepared and later entangled.
  2. Parameterized Quantum Circuit (PQC)

    • A shallow ladder of entangling CNOT layers interleaved with trainable rotation layers (the “weights”).
    • The circuit depth is kept low (typically 4–6 layers) so the design stays within the coherence times of the near‑term gate‑based devices it targets.
  3. Quantum measurement & feature extraction

    • After the PQC, expectation values of Pauli‑Z observables are measured on each qubit, yielding a real‑valued feature vector that captures superposition‑based interactions between position and view.
    • This vector is fed to a tiny classical MLP that predicts the density (σ) and RGB colour for the sampled point, exactly as in classical NeRF (steps 1–3 are sketched in code after this list).
  4. Training loop

    • The standard volumetric rendering loss (MSE between rendered and ground‑truth pixels) is back‑propagated through the classical MLP and, via the parameter‑shift rule, through the quantum‑circuit parameters (sketched after this list).
    • Optimisation uses Adam with learning‑rate schedules identical to classical NeRF baselines, ensuring a fair comparison.
  5. Simulation environment

    • All experiments run on a high‑fidelity quantum‑circuit simulator (Qiskit Aer) with noise models omitted to isolate algorithmic benefits; a minimal setup is sketched after this list. The authors also provide a lightweight “hardware‑ready” configuration for future execution on real quantum processors.
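To make steps 1–3 concrete, here is a minimal end‑to‑end sketch of the Full‑QNeRF encoder: angle encoding, a shallow trainable ladder, and Pauli‑Z readout. The qubit count, layer count, and gate ordering are our assumptions, and Qiskit's noise‑free statevector simulation stands in for the authors' Aer setup.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp, Statevector

N_QUBITS = 5   # one qubit per input feature (x, y, z, theta, phi) -- assumed
N_LAYERS = 4   # shallow ladder, within the paper's stated 4-6 layer range

def qnerf_features(inputs: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Encode one normalised 5-D sample, run the PQC, return <Z_q> per qubit.

    inputs  : shape (5,), values scaled to [-1, 1]
    weights : shape (N_LAYERS, N_QUBITS, 2), trainable rotation angles
    """
    qc = QuantumCircuit(N_QUBITS)
    # Step 1 -- input encoding: features become single-qubit rotation angles.
    for q in range(N_QUBITS):
        qc.ry(np.pi * inputs[q], q)
    # Step 2 -- PQC: trainable R_y/R_z layers interleaved with a CNOT ladder.
    for layer in range(N_LAYERS):
        for q in range(N_QUBITS):
            qc.ry(weights[layer, q, 0], q)
            qc.rz(weights[layer, q, 1], q)
        for q in range(N_QUBITS - 1):
            qc.cx(q, q + 1)
    # Step 3 -- measurement: Pauli-Z expectation values give a real-valued
    # feature vector for the classical decoder MLP.
    state = Statevector.from_instruction(qc)
    feats = [
        np.real(state.expectation_value(
            SparsePauliOp.from_sparse_list([("Z", [q], 1.0)], num_qubits=N_QUBITS)))
        for q in range(N_QUBITS)
    ]
    return np.array(feats)

# One sampled point with random weights: five features in [-1, 1].
rng = np.random.default_rng(0)
print(qnerf_features(rng.uniform(-1, 1, 5),
                     rng.uniform(-np.pi, np.pi, (N_LAYERS, N_QUBITS, 2))))
```

The returned vector plays the role that the positional encoding plays in a classical NeRF; the small decoder MLP consumes it to produce (σ, RGB).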
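Step 4's quantum gradients deserve their own sketch. For a gate generated by a Pauli rotation, the parameter‑shift rule gives an exact derivative of each expectation value: d⟨O⟩/dθ = [f(θ + π/2) − f(θ − π/2)] / 2. The helper below (reusing the hypothetical qnerf_features and N_QUBITS from the previous sketch) builds the feature Jacobian that classical back‑propagation then chains through.

```python
import numpy as np

def feature_jacobian(inputs, weights):
    """d(features)/d(weights) via the parameter-shift rule.

    Exact for Pauli-rotation gates, but it costs two full circuit
    evaluations per trainable parameter -- the training overhead the
    paper flags in its limitations.
    """
    shift = np.pi / 2
    jac = np.zeros(weights.shape + (N_QUBITS,))
    it = np.nditer(weights, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        w_plus, w_minus = weights.copy(), weights.copy()
        w_plus[idx] += shift
        w_minus[idx] -= shift
        jac[idx] = 0.5 * (qnerf_features(inputs, w_plus)
                          - qnerf_features(inputs, w_minus))
    return jac

# Chain rule: the upstream gradient dL/d(features) comes from ordinary
# back-propagation through the classical MLP, so for each weight index:
#   dL/dw[idx] = jac[idx] @ dL/d(features)
```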
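Finally, step 5's environment. The summary names Qiskit Aer with noise models omitted; a plausible minimal configuration (the authors' exact simulator options are not given) looks like the following, and attaching a noise model or a real backend is what the “hardware‑ready” variant anticipates.

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Noise-free, high-fidelity simulation, matching the paper's stated setup.
sim = AerSimulator(method="statevector")

qc = QuantumCircuit(2)
qc.ry(0.3, 0)            # stand-in for an encoded QNeRF sub-circuit
qc.cx(0, 1)
qc.measure_all()

result = sim.run(transpile(qc, sim), shots=4096).result()
print(result.get_counts())   # sampled bitstring frequencies
```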

Results & Findings

| Model | Params (M) | PSNR ↑ | SSIM ↑ | Training time (hrs) |
|---|---|---|---|---|
| Classical NeRF | 1.5 | 31.2 | 0.92 | 12 |
| Full QNeRF | 0.7 | 31.5 | 0.93 | 10 |
| Dual‑Branch QNeRF | 0.6 | 31.0 | 0.91 | 9 |

  • Quality – Both QNeRF variants meet or exceed the visual fidelity of the baseline, especially on scenes with smooth geometry where quantum superposition captures subtle view‑dependent effects.
  • Parameter savings – The hybrid models use roughly 40–47 % of the baseline's parameters (0.6 M and 0.7 M vs. 1.5 M), confirming the compactness claim.
  • Training efficiency – Despite the overhead of quantum‑gradient computation, overall wall‑clock time is comparable because the smaller parameter count means fewer parameter‑shift circuit evaluations per gradient step.
  • Ablation – Removing the entangling layers drops performance by ~1 dB PSNR, highlighting the importance of quantum correlations.

Practical Implications

  • Edge‑device rendering – Smaller parameter footprints mean that a QNeRF model could be stored on devices with limited memory (e.g., AR glasses) while still delivering high‑quality view synthesis.
  • Fast prototyping of 3D assets – Developers can train compact representations on modest GPU clusters and later port the quantum‑weight portion to a cloud‑based quantum accelerator for inference, potentially reducing inference latency for complex scenes.
  • Hybrid pipelines – Existing NeRF pipelines can be retro‑fitted with a quantum encoder block, reusing the same volumetric rendering code‑base. This lowers the barrier for integration into current graphics engines (Unity, Unreal).
  • Research‑to‑product roadmap – As gate‑based quantum hardware scales to ~50‑100 qubits with low error rates, the Dual‑Branch architecture is already structured to map directly onto such devices, opening a path toward commercial quantum‑enhanced rendering services.

Limitations & Future Work

  • Simulation‑only evaluation – All experiments were performed on noise‑free simulators; real‑hardware noise could degrade performance and increase training cost.
  • Resolution ceiling – The current study focuses on moderate‑resolution images (≤ 800×800). Scaling to high‑resolution NeRF datasets may require deeper circuits or more qubits, which are not yet widely available.
  • Training overhead – Parameter‑shift gradients are less efficient than standard back‑propagation; future work could explore analytic gradient techniques or hybrid automatic‑differentiation frameworks.
  • Broader benchmarks – Extending evaluation to dynamic scenes, relighting, or multi‑modal inputs (e.g., depth) would test the generality of the quantum encoding.

Bottom line: QNeRF shows that quantum‑enhanced neural rendering is not just a theoretical novelty—it can deliver compact, high‑quality 3D scene representations that are attractive for developers looking to push the limits of on‑device graphics and cloud‑based rendering services.

Authors

  • Daniele Lizzio Bosco
  • Shuteng Wang
  • Giuseppe Serra
  • Vladislav Golyanik

Paper Information

  • arXiv ID: 2601.05250v1
  • Categories: cs.CV
  • Published: January 8, 2026
