[Paper] LiDAS: Lighting-driven Dynamic Active Sensing for Nighttime Perception
Source: arXiv - 2512.08912v1
Overview
Nighttime driving is a tough problem for vision‑based systems because they have to work with whatever ambient light is available. The paper “LiDAS: Lighting‑driven Dynamic Active Sensing for Nighttime Perception” proposes a clever workaround: use the car’s existing headlights as a controllable illumination device that adapts in real time to what the perception model needs. By shaping the light field to highlight objects and dim empty space, LiDAS lets daytime‑trained detectors work much better at night—without any extra training data.
Key Contributions
- Closed‑loop active illumination: Introduces a feedback loop where a perception model predicts where to shine light, and the headlights execute that plan on the fly.
- Optimization of illumination field: Formulates illumination as a continuous field that maximizes downstream detection/segmentation metrics, rather than naïvely brightening the scene uniformly (a sketch of this objective follows the list).
- Zero‑shot nighttime generalization: Shows that models trained only on daytime data can achieve high performance at night simply by adapting the lighting, eliminating costly night‑time data collection.
- Synthetic‑to‑real transfer: Trains the illumination policy on rendered night scenes and deploys it directly on real vehicles, demonstrating robust sim‑to‑real transfer.
- Energy efficiency: Achieves the same or better perception performance while cutting headlight power consumption by ~40 %.
- Compatibility with domain‑generalization: LiDAS can be stacked on top of existing domain‑adaptation techniques for additional robustness, without retraining the perception backbone.
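One way to write down the illumination-field optimization from the second bullet (the notation below is ours, not the paper's exact formulation): choose the emitted field I(x, y) that minimizes the task loss of a frozen daytime-trained model, subject to a global power budget.

```latex
% Hedged formalization (notation ours): f_theta is the frozen daytime-trained
% perception model, capture(I) the night frame observed under field I,
% L a differentiable task loss (a proxy for negative mAP/mIoU), P_max the power budget.
I^{*} \;=\; \arg\min_{I}\; \mathcal{L}\!\left( f_{\theta}\big(\mathrm{capture}(I)\big),\; y \right)
\quad \text{s.t.} \quad \iint I(x,y)\,dx\,dy \;\le\; P_{\max}, \qquad 0 \le I(x,y) \le I_{\max}.
```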
Methodology
- Perception Backbone – Any off‑the‑shelf object detector or semantic‑segmentation network (e.g., YOLO, Mask R-CNN) runs on the current camera frame and outputs a confidence map of where objects are likely to be.
- Illumination Planner – A lightweight neural network (or differentiable optimizer) takes the confidence map and predicts a spatial illumination intensity map I(x, y). The planner is trained to minimize a proxy of the downstream task loss (e.g., negative mAP) while respecting a global power budget.
- Closed‑Loop Execution – The predicted illumination map is rasterized onto the vehicle’s high‑definition LED headlights, which can modulate intensity per pixel or per small zone. The camera captures the newly lit scene, feeding the next perception step. This loop runs at ~10 Hz in the experiments; a code sketch of the loop appears after this list.
- Training on Synthetic Data – Using a physics‑based nighttime rendering pipeline, the authors generate paired data: a clean daytime image, a corresponding nighttime image, and the optimal illumination field (computed via gradient‑based search). The planner learns to approximate this optimal field; a sketch of the search also follows the list.
- Deployment – No fine‑tuning is required on real night footage; the planner is applied directly, forming a zero‑shot system.
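Below is a minimal sketch of the closed loop described above. The class and interface names (IlluminationPlanner, camera.read, detector.confidence_map, headlights.apply) are placeholders of our own; the paper does not specify this API, and the planner architecture here is deliberately simplified.

```python
# Hedged sketch of the perception -> planner -> headlight loop. All hardware
# interfaces (camera, detector, headlights) are assumed, not from the paper.
import time
import torch

class IlluminationPlanner(torch.nn.Module):
    """Lightweight CNN: detector confidence map -> per-zone headlight intensities."""

    def __init__(self, zones=(32, 96)):
        super().__init__()
        self.zones = zones
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(16, 1, 3, padding=1), torch.nn.Sigmoid(),
        )

    def forward(self, confidence_map, power_budget=1.0):
        # Downsample the full-resolution confidence map to the headlight zone grid.
        x = torch.nn.functional.interpolate(confidence_map, size=self.zones)
        intensity = self.net(x)                       # values in (0, 1) per zone
        # Enforce the global power budget: mean zone intensity <= power_budget.
        mean = intensity.mean()
        if mean > power_budget:
            intensity = intensity * (power_budget / mean)
        return intensity.clamp(0.0, 1.0)


def control_loop(camera, detector, planner, headlights, rate_hz=10, power_budget=0.6):
    """Capture -> detect -> plan illumination -> actuate, at roughly rate_hz."""
    period = 1.0 / rate_hz
    while True:
        frame = camera.read()                                 # H x W x 3 image
        conf = detector.confidence_map(frame)                 # H x W array in [0, 1]
        conf_t = torch.as_tensor(conf, dtype=torch.float32)[None, None]
        with torch.no_grad():
            zone_map = planner(conf_t, power_budget=power_budget)
        headlights.apply(zone_map.squeeze().numpy())          # per-zone LED command
        time.sleep(period)
```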
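The training-data generation step (gradient-based search for the optimal illumination field on rendered night scenes) can be sketched in the same spirit. Here relight stands in for a differentiable, physics-based night renderer and task_loss for a differentiable proxy of the detection/segmentation metric; both are assumptions, since the paper's rendering pipeline is not reproduced here.

```python
# Hedged sketch of the gradient-based search that produces the "optimal"
# illumination field used as the planner's training target. `relight` and
# `task_loss` are assumed stand-ins, not the paper's implementation.
import torch

def optimal_illumination_field(night_scene, detector, relight, task_loss, targets,
                               zones=(32, 96), power_budget=1.0, steps=200, lr=0.05):
    """Search the per-zone intensity field that minimizes the downstream loss."""
    field = torch.full((1, 1, *zones), 0.5, requires_grad=True)
    opt = torch.optim.Adam([field], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        lit = relight(night_scene, torch.sigmoid(field))   # re-rendered lit frame
        loss = task_loss(detector(lit), targets)           # e.g. detection loss
        # Soft penalty keeping the mean intensity within the power budget.
        loss = loss + 10.0 * torch.relu(torch.sigmoid(field).mean() - power_budget)
        loss.backward()
        opt.step()
    return torch.sigmoid(field).detach()
```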
Results & Findings
| Metric | Standard Low‑Beam | LiDAS (same power) | LiDAS (40 % less power) |
|---|---|---|---|
| mAP@50 (object detection) | 42.1 % | 60.8 % (+18.7 pp) | 58.9 % |
| mIoU (semantic segmentation) | 48.3 % | 53.3 % (+5.0 pp) | 52.5 % |
- Performance boost: Even at identical power, LiDAS lifts detection mAP by nearly 19 percentage points and segmentation mIoU by 5 points.
- Energy savings: Cutting headlight power by 40 % still outperforms the baseline low‑beam, showing the illumination is being used far more efficiently.
- Complementarity: When combined with a state‑of‑the‑art domain‑generalization method (e.g., style‑transfer augmentation), LiDAS adds roughly 3 more points of mAP, confirming it works orthogonally to model‑centric techniques.
- Real‑world closed‑loop driving: In on‑road tests (≈2 km of night driving), the system maintained stable perception scores without noticeable flicker or latency.
Practical Implications
- Cost‑effective night vision: Automakers can retrofit existing LED headlight arrays with a modest controller and software stack, avoiding expensive infrared sensors or dedicated night‑time cameras.
- Zero‑shot deployment: Fleet operators can roll out a new perception model trained only on daytime data and instantly gain night capability, reducing data‑collection and annotation costs.
- Energy‑aware autonomous driving: For electric vehicles, saving 40 % of headlight power translates directly into longer range or more budget for other sensors.
- Developer‑friendly API: The illumination planner can be exposed behind a single “setIlluminationMap”‑style call, making integration into ROS, Apollo, or proprietary stacks straightforward (a minimal integration sketch follows this list).
- Safety and regulation: Because the system respects a global power budget and can be tuned to comply with road‑lighting standards, it offers a path to regulatory approval without hardware changes.
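As a concrete illustration of the integration point mentioned above, here is a minimal ROS 2 bridge. The topic name, the use of sensor_msgs/Image to carry the zone map, and the set_illumination_map driver call are all hypothetical; the paper only suggests exposing the planner behind a single call.

```python
# Minimal ROS 2 integration sketch. Assumptions: the topic name, carrying the
# zone map as a mono8 sensor_msgs/Image, and the HeadlightDriver interface are
# illustrative only, not part of the paper or any vendor SDK.
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image

class IlluminationBridge(Node):
    """Subscribes to the planner's zone map and forwards it to the headlights."""

    def __init__(self, headlight_driver):
        super().__init__('illumination_bridge')
        self.driver = headlight_driver
        self.sub = self.create_subscription(
            Image, '/lidas/illumination_map', self.on_map, 10)

    def on_map(self, msg: Image):
        # Reinterpret the mono8 image as a per-zone intensity grid in [0, 1]
        # (assumes row step equals width, i.e. no padding).
        zone_map = np.frombuffer(msg.data, dtype=np.uint8)
        zone_map = zone_map.reshape(msg.height, msg.width) / 255.0
        self.driver.set_illumination_map(zone_map)  # hypothetical driver call

def main():
    rclpy.init()
    node = IlluminationBridge(headlight_driver=...)  # inject a real driver here
    rclpy.spin(node)

if __name__ == '__main__':
    main()
```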
Limitations & Future Work
- Hardware granularity: The current prototype assumes high‑resolution, per‑pixel LED control, which is not yet standard on most production cars. Scaling down to coarser zones may reduce gains.
- Latency constraints: The closed‑loop runs at ~10 Hz; faster perception pipelines (e.g., 30 Hz) would need more optimized planners or dedicated ASICs.
- Weather robustness: Heavy rain or fog can scatter directed light, diminishing the benefit; integrating adaptive exposure or additional sensors (e.g., lidar) is an open avenue.
- Generalization beyond driving: Applying LiDAS to other domains (e.g., nighttime robotics, surveillance) will require re‑training the planner for different camera‑headlight geometries.
Bottom line: LiDAS demonstrates that “smart headlights” can be a game‑changer for nighttime perception, turning a ubiquitous vehicle component into an active vision actuator that boosts safety while saving energy. For developers, the takeaway is simple—by closing the loop between perception and illumination, you can get more mileage out of existing models without the heavy cost of night‑time data collection.
Authors
- Simon de Moreau
- Andrei Bursuc
- Hafid El‑Idrissi
- Fabien Moutarde
Paper Information
- arXiv ID: 2512.08912v1
- Categories: cs.CV, cs.RO
- Published: December 9, 2025