[Paper] Physically-Based Simulation of Automotive LiDAR
Source: arXiv - 2512.05932v1
Overview
This paper introduces a physics‑based, analytically driven simulator for automotive time‑of‑flight (ToF) LiDAR sensors. By modeling beam optics, detector response, and ambient illumination in the near‑infrared (NIR) spectrum, the authors provide a tool that can generate realistic LiDAR point clouds, reducing the need for costly physical test campaigns.
Key Contributions
- Analytic LiDAR model that captures blooming, echo pulse width, and ambient‑light interference.
- Systematic parameter extraction workflow using high‑resolution goniometer measurements of real sensors.
- Integration with physically‑based rendering (PBR) pipelines, allowing both rasterized shading and ray‑traced scenes.
- Support for arbitrary beam steering patterns and non‑zero beam diameters, enabling simulation of a wide range of commercial LiDARs.
- Validation on two distinct automotive units (Valeo Scala Gen. 2 and Blickfeld Cube 1), demonstrating the model’s adaptability.
Methodology
1. Physical Model Definition
- Treat the LiDAR as a single‑bounce ToF system: emitted NIR pulses travel to a surface, reflect, and return to a photodiode array.
- Model the emitted beam as a Gaussian‑like intensity distribution with a configurable spread and steering pattern.
- Include detector characteristics (sensitivity map, aperture size) and convert received optical power into an echo pulse width using a calibrated linear relationship.
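The single‑bounce power budget and the linear power‑to‑pulse‑width conversion can be sketched as follows. This is a minimal illustration, not the paper's calibrated model: the Lambertian inverse‑square power law and the `gain`/`offset_ns` constants are assumptions chosen for readability.

```python
import math

def received_power(p_emit_w, reflectivity, range_m, aperture_m2):
    """Single-bounce ToF return power for a Lambertian target.

    Simplified power budget (assumption, not the paper's exact model):
    P_r = P_t * rho * A / (pi * R^2), ignoring atmospheric attenuation.
    """
    return p_emit_w * reflectivity * aperture_m2 / (math.pi * range_m ** 2)

def echo_pulse_width(p_received_w, gain=1.0e9, offset_ns=2.0):
    """Linear power-to-pulse-width conversion; constants are illustrative,
    standing in for the calibrated relationship described in the paper."""
    return gain * p_received_w + offset_ns
```

The inverse‑square falloff is what makes distant low‑reflectivity targets drop below the detector noise floor, while the linear pulse‑width mapping ties received optical power to the echo measurement the sensor actually reports.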
2. Ambient Light Handling
- Add a stray‑light term that represents uncorrelated illumination (e.g., sunlight).
- This term is combined with the reflected signal before the pulse‑width conversion, reproducing blooming and range‑bias effects seen in real data.
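A minimal sketch of that combination step, assuming the stray‑light term is simply additive optical power at the detector (the `nir_filter_transmission` parameter and its default value are hypothetical, not taken from the paper):

```python
def total_detector_power(p_signal_w, ambient_irradiance_w_m2, aperture_m2,
                         nir_filter_transmission=0.1):
    """Combine the correlated return with uncorrelated ambient light.

    The additive ambient term shifts the subsequent pulse-width conversion,
    which is how blooming and range-bias effects enter the simulation.
    """
    p_ambient = ambient_irradiance_w_m2 * aperture_m2 * nir_filter_transmission
    return p_signal_w + p_ambient
```

Because the ambient contribution enters before the pulse‑width conversion rather than being added as output noise, the same mechanism reproduces both blooming and systematic range bias.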
3. Parameter Calibration
- Use a goniometer to measure the radiant intensity of the sensor's NIR beam on calibrated target materials at 0.01° angular steps.
- Fit the analytic beam model and detector sensitivity to these measurements, extracting:
  - Beam spread & steering pattern
  - Emitted power
  - Detector gain & noise floor
  - Pulse‑width conversion factor
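One simple way such a fit can be performed, assuming a Gaussian beam profile as a stand‑in for the paper's full beam/detector model: the log transform makes the fit linear in θ², so ordinary least squares recovers the peak intensity and angular spread from the goniometer samples.

```python
import math

def fit_gaussian_beam(angles_deg, intensities):
    """Fit I(theta) = I0 * exp(-theta^2 / (2*sigma^2)) to goniometer samples.

    Illustrative calibration sketch: taking logs gives
    ln I = ln I0 - theta^2 / (2*sigma^2), linear in theta^2.
    """
    xs = [a * a for a in angles_deg]          # theta^2
    ys = [math.log(i) for i in intensities]   # ln I
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    sigma = math.sqrt(-1.0 / (2.0 * slope))   # slope = -1/(2*sigma^2)
    return math.exp(intercept), sigma         # (I0, sigma in degrees)
```

The real workflow additionally fits the steering pattern, detector gain, and noise floor, but the same principle applies: reduce each parameter to a regression against the high‑resolution angular measurements.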
4. Rendering Integration
- Render the scene in the NIR band using either rasterization (fast, suitable for large environments) or ray tracing (high fidelity, captures specular/retro‑reflective effects).
- The rendered radiance map is sampled by the analytic LiDAR model to produce per‑pixel range and intensity values, which are then assembled into a point cloud.
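The assembly step can be sketched as below, assuming a simple pinhole‑like angular grid; the paper's model additionally weights samples by the beam profile and detector sensitivity map, which this sketch omits.

```python
import math

def depth_map_to_points(depths, radiances, h_fov_deg, v_fov_deg):
    """Convert per-pixel range/intensity samples into a point cloud.

    Minimal sketch: each pixel is mapped to an (azimuth, elevation) ray on a
    uniform angular grid and projected to Cartesian coordinates at its range.
    """
    rows, cols = len(depths), len(depths[0])
    points = []
    for r in range(rows):
        for c in range(cols):
            az = math.radians((c / max(cols - 1, 1) - 0.5) * h_fov_deg)
            el = math.radians((r / max(rows - 1, 1) - 0.5) * v_fov_deg)
            d = depths[r][c]
            x = d * math.cos(el) * math.cos(az)
            y = d * math.cos(el) * math.sin(az)
            z = d * math.sin(el)
            points.append((x, y, z, radiances[r][c]))
    return points
```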
5. Evaluation
- Simulated point clouds are compared against real measurements from the two target LiDARs across varied lighting conditions and surface materials.
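A comparison metric of the kind used here, such as mean signed range error, might look like the following (a hypothetical helper, not the paper's evaluation code):

```python
def range_bias(simulated_m, measured_m):
    """Mean signed range error between simulated and real returns (meters).

    Positive values mean the simulation overestimates range; a distribution
    of per-point errors would additionally capture spread and outliers.
    """
    return sum(s - m for s, m in zip(simulated_m, measured_m)) / len(simulated_m)
```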
Results & Findings
- Parameter extraction succeeded for both sensors despite differing hardware interfaces (one with a proprietary SDK, the other with an open API).
- Simulated point clouds matched real data in terms of range error distribution, intensity histograms, and blooming patterns, with average range bias < 5 cm under sunny conditions.
- The model accurately reproduced retro‑reflection spikes (e.g., from traffic signs) and ambient‑light induced noise, which are critical failure modes for autonomous driving perception stacks.
- Computational cost scales with rendering choice: rasterized pipelines generate 1 M points in ~0.2 s on a consumer GPU, while ray‑traced pipelines take ~1.5 s for the same output on a modern RTX card.
Practical Implications
- Synthetic Dataset Generation – Developers can now produce large, photorealistic LiDAR datasets that include realistic sensor artefacts, reducing reliance on expensive field campaigns.
- Algorithm Validation & Stress‑Testing – Perception pipelines (object detection, SLAM, sensor fusion) can be evaluated under controlled variations of beam pattern, ambient light, and surface reflectivity, exposing edge‑case failures early.
- Hardware‑in‑the‑Loop (HIL) Simulations – The analytic model can be embedded into vehicle simulators (e.g., CARLA, LGSVL) to provide a faithful LiDAR feed without needing the physical unit.
- Design Feedback for OEMs – By tweaking beam spread or detector sensitivity in the simulator, engineers can explore trade‑offs (cost vs. performance) before committing to hardware prototypes.
Limitations & Future Work
- The current model assumes single‑bounce reflections, so multi‑path effects (e.g., inter‑reflections in complex urban canyons) are not captured.
- Real‑time performance is not achieved; the approach is intended for offline dataset creation or HIL where latency is tolerable.
- Calibration requires high‑precision goniometer measurements, which may be impractical for every new sensor variant.
- Future research directions include extending the model to multi‑bounce light transport, integrating machine‑learned beam profiles for faster calibration, and optimizing the pipeline for real‑time GPU execution.
Authors
- L. Dudzik
- M. Roschani
- A. Sielemann
- K. Trampert
- J. Ziehn
- J. Beyerer
- C. Neumann
Paper Information
- arXiv ID: 2512.05932v1
- Categories: cs.RO, cs.CV
- Published: December 5, 2025