[Paper] AdaRadar: Rate Adaptive Spectral Compression for Radar-based Perception

Published: March 18, 2026 at 01:42 PM EDT
4 min read
Source: arXiv


Overview

Radar is becoming a cornerstone sensor for autonomous vehicles because it works in rain, fog, and darkness while delivering precise range and velocity measurements. The paper AdaRadar tackles a practical bottleneck: the massive raw radar data streams overwhelm the low‑bandwidth links that feed perception processors (e.g., NPUs). The authors introduce a rate‑adaptive compression framework that continuously tunes how much data to send, preserving detection performance while slashing bandwidth needs by more than two orders of magnitude.

Key Contributions

  • Adaptive compression loop that adjusts the compression ratio on‑the‑fly using a proxy gradient of detection confidence, eliminating the need for a fixed‑rate codec.
  • Zeroth‑order gradient approximation that works with non‑differentiable operations (pruning, quantization) and avoids transmitting large gradient tensors.
  • Frequency‑domain pruning via Discrete Cosine Transform (DCT), exploiting the observation that radar feature maps concentrate energy in a few DCT coefficients.
  • Scaled quantization that preserves the dynamic range of each radar patch, enabling aggressive bit‑reduction without blowing up quantization error.
  • Extensive validation on three public radar perception benchmarks (RADIal, CARRADA, Radatron), demonstrating >100× reduction in feature size with only ~1 % absolute drop in detection accuracy.
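The frequency-domain pruning and scaled quantization ideas above can be sketched with a toy example. This is not the authors' implementation: the patch size, keep fraction, bit width, and the Gaussian "radar feature patch" are all illustrative assumptions; only the general pipeline (orthonormal 2-D DCT, magnitude-based coefficient pruning, per-patch scaled quantization) follows the paper's description.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: rows are cosine basis vectors, so
    # dct_matrix(n) @ dct_matrix(n).T == identity (energy-preserving).
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (x + 0.5) * k / n)
    m[0] *= 1.0 / np.sqrt(2.0)
    return m

def compress_patch(patch, keep_frac=0.1, bits=8):
    n = patch.shape[0]
    m = dct_matrix(n)
    coeffs = m @ patch @ m.T                       # 2-D DCT of the patch
    k = max(1, int(keep_frac * coeffs.size))
    thresh = np.sort(np.abs(coeffs).ravel())[-k]
    pruned = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)  # keep top-k by magnitude
    scale = np.abs(pruned).max() or 1.0            # per-patch dynamic range
    qmax = 2 ** (bits - 1) - 1
    q = np.round(pruned / scale * qmax)            # scaled low-bit quantization
    return q, scale

def decompress_patch(q, scale, bits=8):
    n = q.shape[0]
    m = dct_matrix(n)
    qmax = 2 ** (bits - 1) - 1
    coeffs = q * scale / qmax                      # dequantize
    return m.T @ coeffs @ m                        # inverse 2-D DCT

# Toy stand-in for a radar feature patch: a smooth blob, whose energy
# concentrates in a handful of low-frequency DCT coefficients.
g = np.exp(-0.5 * ((np.arange(16) - 8) / 3.0) ** 2)
patch = np.outer(g, g)
q, s = compress_patch(patch, keep_frac=0.1, bits=8)
rec = decompress_patch(q, s, bits=8)
err = np.linalg.norm(rec - patch) / np.linalg.norm(patch)
```

Even after discarding 90 % of the coefficients and quantizing the rest to 8 bits, the smooth patch reconstructs with small relative error, which is the energy-compaction property the paper exploits.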

Methodology

  1. Pre‑processing & DCT – Raw radar data cubes (range‑Doppler‑angle) are split into small patches. Each patch undergoes a DCT, turning spatial‑frequency information into a set of coefficients.
  2. Selective pruning – The algorithm ranks coefficients by magnitude and discards the least‑significant ones, keeping only a configurable fraction (the compression rate).
  3. Scaled quantization – The remaining coefficients are quantized to a low‑bit representation (e.g., 4‑8 bits) after scaling each patch to retain its original dynamic range.
  4. Adaptive feedback loop – After a forward pass through the downstream detection network, a proxy loss measures how confident the network is about its predictions. Using a zeroth‑order estimator (finite‑difference style), the system computes a gradient w.r.t. the compression rate and updates the rate via simple gradient descent. This step runs on the edge device, so only the compressed data (not the gradients) travel over the bandwidth‑limited link.
  5. Reconstruction & inference – The receiving processor performs an inverse DCT on the compressed coefficients, reconstructs an approximate radar feature map, and feeds it to the perception model.
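Step 4, the zeroth-order rate update, can be sketched as follows. The proxy loss here is a hypothetical convex surrogate standing in for (1 − detection confidence) as a function of the kept-coefficient fraction; the step size, perturbation, and clipping bounds are illustrative assumptions, not values from the paper. The key point is that a central finite difference needs only two loss evaluations, so no gradients ever cross the bandwidth-limited link.

```python
import numpy as np

def proxy_loss(rate, target=0.15):
    # Hypothetical surrogate for (1 - detection confidence): confidence
    # degrades as the kept-coefficient fraction `rate` moves off-target.
    return (rate - target) ** 2

def zeroth_order_step(rate, loss_fn, delta=0.01, lr=0.5):
    # Central finite difference: no backprop through the
    # non-differentiable pruning/quantization operations.
    g = (loss_fn(rate + delta) - loss_fn(rate - delta)) / (2 * delta)
    return float(np.clip(rate - lr * g, 0.005, 1.0))  # keep rate in a valid range

rate = 0.5
for _ in range(50):
    rate = zeroth_order_step(rate, proxy_loss)
```

With this toy loss the loop settles at the target rate within a few iterations, mirroring the fast convergence the paper reports when scene complexity shifts.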

Results & Findings

| Dataset  | Baseline mAP | AdaRadar mAP | Compression Ratio | Bandwidth Savings |
|----------|--------------|--------------|-------------------|-------------------|
| RADIal   | 78.3 %       | 77.4 %       | 1:100             | >99 % reduction   |
| CARRADA  | 71.5 %       | 70.8 %       | 1:95              | >98 % reduction   |
| Radatron | 83.1 %       | 82.2 %       | 1:110             | >99 % reduction   |

  • Performance impact is limited to ~1 percentage point across datasets, confirming that most useful information lives in the high‑energy DCT components.
  • Latency remains within real‑time constraints because the DCT/pruning/quantization steps are lightweight and can be executed on modest embedded CPUs.
  • Stability: The adaptive loop converges within a few iterations, automatically tightening or loosening compression when scene complexity changes (e.g., crowded urban vs. open highway).

Practical Implications

  • Edge‑to‑cloud radar pipelines can now operate over CAN‑bus or low‑speed Ethernet links without sacrificing detection quality, reducing hardware cost and power consumption.
  • Dynamic bandwidth management: Vehicles can allocate radar bandwidth adaptively based on network congestion or sensor fusion priorities (e.g., boost radar when camera visibility degrades).
  • Simplified system integration: Since AdaRadar works as a pre‑processing codec, existing perception stacks (YOLO‑Radar, PointPillars‑Radar, etc.) can be plugged in without retraining.
  • Potential for multi‑sensor compression: The same adaptive, frequency‑domain pruning idea could be extended to lidar or FMCW‑based imaging, offering a unified compression layer for heterogeneous sensor suites.

Limitations & Future Work

  • The current approach assumes a fixed detection model; if the downstream network changes, the proxy gradient may need re‑tuning.
  • Zeroth‑order gradient estimation introduces stochasticity; while sufficient for the reported datasets, more volatile environments could require more robust estimators.
  • The method focuses on feature‑level compression; raw waveform compression (pre‑DCT) remains unexplored.
  • Future research directions include:
    1. Joint training of the compression module and perception network.
    2. Extending the adaptive loop to handle multiple concurrent sensors.
    3. Evaluating performance on ultra‑low‑power microcontrollers for cost‑sensitive ADAS platforms.

Authors

  • Jinho Park
  • Se Young Chun
  • Mingoo Seok

Paper Information

  • arXiv ID: 2603.17979v1
  • Categories: cs.CV
  • Published: March 18, 2026
