[Paper] UAVLight: A Benchmark for Illumination-Robust 3D Reconstruction in Unmanned Aerial Vehicle (UAV) Scenes
Source: arXiv - 2511.21565v1
Overview
The paper introduces UAVLight, a new benchmark designed to evaluate how well 3‑D reconstruction pipelines cope with natural lighting changes in UAV (drone) imagery. Because each geo‑referenced flight path is re‑flown at several times of day, the dataset isolates illumination variation while keeping geometry, camera calibration, and viewpoints constant, an essential step toward building reconstruction systems that work reliably in real‑world outdoor missions.
Key Contributions
- A controlled‑yet‑realistic UAV dataset: 30+ scenes captured repeatedly at sunrise, noon, and late afternoon, providing a wide range of sun angles, cloud conditions, and shadow patterns without altering scene geometry.
- Standardized evaluation protocol: metrics for geometry accuracy, texture consistency, and relighting quality across all lighting conditions, enabling apples‑to‑apples comparisons.
- Baseline analysis: extensive experiments with classic MVS/SfM pipelines (COLMAP, OpenMVS) and recent neural rendering approaches (NeRF‑based) that reveal severe performance drops under illumination shifts.
- Open‑source release: raw images, calibrated flight logs, ground‑truth point clouds (LiDAR), and evaluation scripts are publicly available to foster reproducible research.
Methodology
- Scene selection & flight planning – The authors chose diverse outdoor environments (urban rooftops, farmland, construction sites). For each scene, a pre‑planned, GPS‑driven flight path was executed multiple times on the same day at fixed intervals (e.g., 08:00, 12:00, 16:00).
- Data acquisition – High‑resolution RGB cameras mounted on a DJI Matrice platform captured overlapping images (≈80 % overlap). Flight logs provide exact extrinsics, and a handheld LiDAR scanner supplies a high‑precision reference point cloud.
- Lighting characterization – Ambient illumination was logged with a calibrated photometer, and sky conditions were annotated (clear, partly cloudy, overcast). This metadata allows researchers to correlate reconstruction errors with specific lighting factors.
- Benchmark construction – The dataset is split into training (single‑time‑of‑day) and test (cross‑time) subsets. Evaluation scripts compute three metrics (see the sketch after this list):
  - Geometric error (Chamfer distance to the LiDAR ground truth)
  - Photometric consistency (SSIM/LPIPS between textures reconstructed at different times)
  - Relighting error (difference between views rendered under a common illumination model)
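The released evaluation scripts are not reproduced here, but a minimal sketch of how the first two metrics could be computed is shown below, assuming the reconstruction and the LiDAR reference are available as NumPy point arrays and the textures as aligned uint8 RGB renders; the function and variable names are illustrative, with SciPy providing the nearest‑neighbour search and scikit-image the SSIM.

```python
import numpy as np
from scipy.spatial import cKDTree
from skimage.metrics import structural_similarity as ssim


def chamfer_distance(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """Symmetric Chamfer distance between an (N, 3) reconstruction and an (M, 3) LiDAR cloud."""
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)   # nearest GT point for every reconstructed point
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)   # nearest reconstructed point for every GT point
    return float(d_pred_to_gt.mean() + d_gt_to_pred.mean())


def photometric_consistency(tex_a: np.ndarray, tex_b: np.ndarray) -> float:
    """SSIM between two uint8 H x W x 3 textures of the same surface
    reconstructed from flights at different times of day."""
    return ssim(tex_a, tex_b, channel_axis=-1, data_range=255)


# Hypothetical usage: evaluate a noon reconstruction against the LiDAR reference
# and against the texture obtained from the sunrise flight.
# geom_err = chamfer_distance(noon_points_xyz, lidar_points_xyz)
# tex_score = photometric_consistency(noon_texture, sunrise_texture)
```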
Results & Findings
- Classic pipelines struggle – COLMAP’s SfM stage shows up to 30 % higher reprojection error when images from different times are mixed, and the resulting dense point clouds exhibit noticeable drift and shadow artifacts.
- Neural rendering is not a silver bullet – NeRF‑style models trained on a single lighting condition fail to generalize; when evaluated on a different time of day, they produce blurry geometry and color bleeding from shadows.
- Lighting‑aware baselines help – Simple photometric normalization (histogram matching) applied before MVS reduces texture inconsistency by ~15 % but does not fully recover geometry; a sketch of such normalization follows this list.
- Benchmark sensitivity – The controlled illumination changes produce a clear, monotonic relationship between sun elevation angle and reconstruction error, confirming that UAVLight isolates lighting as a primary failure mode; a rank‑correlation check of such monotonicity is sketched below.
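As a rough illustration of the normalization baseline above, the sketch below matches every frame of a flight to a single reference exposure before the MVS stage. It relies on scikit-image's histogram matching; the directory layout, file pattern, and function name are assumptions for illustration, not part of the UAVLight tooling.

```python
from pathlib import Path

import imageio.v3 as iio
import numpy as np
from skimage.exposure import match_histograms


def normalize_flight(image_dir: str, reference_path: str, out_dir: str) -> None:
    """Match the color histogram of every frame in a flight to one reference frame.

    This mirrors the simple per-image normalization baseline: it reduces global
    exposure and white-balance differences between times of day, but it cannot
    remove cast shadows, which is why geometry errors largely remain.
    """
    reference = iio.imread(reference_path)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(image_dir).glob("*.jpg")):
        image = iio.imread(path)
        matched = match_histograms(image, reference, channel_axis=-1)
        iio.imwrite(out / path.name, np.clip(matched, 0, 255).astype(np.uint8))


# Hypothetical usage before running the MVS stage (paths are placeholders):
# normalize_flight("flight_1600/images", "flight_1200/images/ref.jpg", "flight_1600/normalized")
```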
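The monotonicity itself is straightforward to verify once per-flight sun elevations and reconstruction errors are tabulated; below is a minimal sketch using Spearman rank correlation, with placeholder numbers rather than the paper's measurements.

```python
from scipy.stats import spearmanr

# Placeholder per-flight measurements (not values from the paper): the sun
# elevation logged for each flight and the Chamfer error of the reconstruction
# built from that flight.
sun_elevation_deg = [12.0, 35.0, 48.0, 61.0]
chamfer_error_m = [0.19, 0.14, 0.11, 0.09]

# Spearman's rho approaches +/-1 for any monotonic relationship, linear or not,
# which is the property the benchmark-sensitivity finding describes.
rho, p_value = spearmanr(sun_elevation_deg, chamfer_error_m)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```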
Practical Implications
- More reliable drone mapping – Companies that generate orthomosaics or 3‑D city models can use UAVLight to test and select pipelines that remain accurate throughout a full workday, reducing the need for costly re‑flights.
- Improved inspection & monitoring – Infrastructure inspections (bridges, power lines) often occur under sub‑optimal lighting; illumination‑robust methods can deliver consistent defect detection without manual image preprocessing.
- Foundation for relightable assets – With a benchmark that explicitly measures relighting error, developers can build assets that support virtual staging, AR overlays, or simulation under arbitrary sun positions—valuable for construction planning and gaming.
- Guidance for sensor design – The dataset highlights the benefit of integrating HDR cameras or active illumination (e.g., LiDAR) to mitigate shadow‑induced geometry drift, informing hardware choices for next‑gen UAV platforms.
Limitations & Future Work
- Geographic scope – All scenes are captured in a single region (mid‑latitude temperate climate); performance under extreme sun angles (high latitudes) or harsh weather (rain, fog) remains untested.
- Static scene assumption – While geometry is held constant, moving objects (vehicles, people) are present and can confound evaluation; future releases could include fully static scenes for pure lighting analysis.
- Baseline depth – The paper provides only straightforward normalization baselines; more sophisticated illumination‑invariant features or learned lighting models are left for subsequent research.
- Extension to multimodal data – Incorporating thermal or multispectral imagery could further enrich the benchmark and enable cross‑modal reconstruction strategies.
UAVLight opens the door to the next wave of illumination‑aware 3‑D reconstruction tools, turning the long‑standing “sun problem” into a tractable research challenge.
Authors
- Kang Du
- Xue Liao
- Junpeng Xia
- Chaozheng Guo
- Yi Gu
- Yirui Guan
- Duotun Wang
- Sheng Huang
- Zeyu Wang
Paper Information
- arXiv ID: 2511.21565v1
- Categories: cs.CV
- Published: November 26, 2025