[Paper] Timeliness-Oriented Scheduling and Resource Allocation in Multi-Region Collaborative Perception

Published: January 7, 2026 at 10:16 PM EST
4 min read
Source: arXiv - 2601.04542v1

Overview

Collaborative perception (CP) lets multiple sensors—think cameras, LiDARs, or edge devices—share what they see to overcome blind spots and range limits, a capability crucial for autonomous vehicles and smart‑city infrastructure. This paper tackles two practical hurdles: (1) the timeliness of shared data, because stale information quickly loses value, and (2) the tight constraints on compute power and wireless bandwidth that limit how much raw sensor data can be transmitted. The authors propose a scheduling and resource‑allocation framework that explicitly balances perception accuracy against these resource limits.

Key Contributions

  • Timeliness‑aware penalty model: Introduces an empirical function that maps the combined effect of Age of Information (AoI) and communication volume to perception performance, quantifying how “old” data degrades utility.
  • TAMP algorithm: A Lyapunov‑driven, per‑slot scheduling policy (Timeliness‑Aware Multi‑region Prioritized) that prioritizes transmissions across multiple geographic regions while respecting bandwidth and compute budgets.
  • Long‑term optimization: Formulates the problem as a long‑term average penalty minimization, enabling the scheduler to consider cumulative effects of current decisions on future system states.
  • Real‑world validation: Implements and tests TAMP on the Roadside Cooperative Perception (RCooper) dataset for both intersection and corridor traffic scenarios.
  • Performance gains: Demonstrates up to 27 % improvement in Average Precision (AP) over the strongest baseline across a variety of network and compute configurations.

Methodology

  1. System model – The authors model a set of roadside units (RSUs) and on‑vehicle sensors that periodically generate perception features (e.g., compressed point‑cloud descriptors). Each transmission incurs a communication volume (bits) and experiences a delay, giving rise to an AoI for that piece of information.
  2. Penalty function – An empirical function P(AoI, volume) is fitted from simulation data to capture how perception accuracy drops as data ages or is overly compressed. The function increases with AoI and decreases with transmitted volume, reflecting the intuition that fresher, richer data are more valuable.
  3. Lyapunov optimization – The long‑term average penalty is transformed into a drift‑plus‑penalty expression. By introducing virtual queues for bandwidth and compute constraints, the problem decomposes into a per‑slot prioritization: each region receives a “worth” score that balances the marginal reduction in penalty against the resource cost of sending its data.
  4. TAMP scheduling – At every time slot, the algorithm sorts regions by this worth score and allocates resources until the bandwidth/compute budget is exhausted. The policy is provably queue‑stable, meaning it respects the long‑term constraints; a minimal sketch of this per‑slot step follows this list.
  5. Evaluation – Experiments use the RCooper dataset, which contains synchronized LiDAR and camera streams from multiple RSUs at real intersections and corridors. The authors compare TAMP against static allocation, AoI‑only scheduling, and a recent reinforcement‑learning baseline.
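
To make the per‑slot logic concrete, here is a minimal Python sketch of a TAMP‑style prioritization step. This is not the authors' implementation: the closed‑form penalty, the region fields, and the parameters (alpha, beta, V, the virtual‑queue weights) are illustrative stand‑ins for the paper's fitted P(AoI, volume) and drift‑plus‑penalty terms.

```python
def penalty(aoi_ms, volume_bits, alpha=0.01, beta=1e-6, full_volume=1e6):
    """Stand-in for the paper's empirical P(AoI, volume): grows as data ages,
    and grows as the transmitted feature volume is cut below a 'full' size."""
    return alpha * aoi_ms + beta * max(0.0, full_volume - volume_bits)


def worth(region, q_bw, q_cpu, V):
    """Drift-plus-penalty style score for one region: the penalty reduction
    gained by transmitting now, traded against resource costs weighted by
    virtual queues that track bandwidth/compute budget violations."""
    p_if_sent = penalty(aoi_ms=0.0, volume_bits=region["volume_bits"])  # AoI resets on delivery
    p_if_skipped = penalty(region["aoi_ms"] + region["slot_ms"], 0.0)   # data keeps aging
    gain = p_if_skipped - p_if_sent
    cost = q_bw * region["volume_bits"] + q_cpu * region["compute_cost"]
    return V * gain - cost


def tamp_slot(regions, bw_budget_bits, cpu_budget, q_bw=0.0, q_cpu=0.0, V=1.0):
    """One scheduling slot: rank regions by worth and greedily admit them
    until the score turns non-positive or a budget runs out."""
    scheduled = []
    for region in sorted(regions, key=lambda r: worth(r, q_bw, q_cpu, V), reverse=True):
        if worth(region, q_bw, q_cpu, V) <= 0:
            break
        if region["volume_bits"] <= bw_budget_bits and region["compute_cost"] <= cpu_budget:
            scheduled.append(region["id"])
            bw_budget_bits -= region["volume_bits"]
            cpu_budget -= region["compute_cost"]
    return scheduled


# Example slot: two regions competing for a 1.2 Mbit bandwidth budget.
regions = [
    {"id": "intersection", "aoi_ms": 45, "slot_ms": 100, "volume_bits": 8e5, "compute_cost": 2},
    {"id": "corridor",     "aoi_ms": 20, "slot_ms": 100, "volume_bits": 6e5, "compute_cost": 1},
]
print(tamp_slot(regions, bw_budget_bits=1.2e6, cpu_budget=3))  # -> ['intersection']
```

In the actual framework, the virtual queues (q_bw, q_cpu here) are updated after every slot according to how much of each budget was consumed, which is what makes the long‑term bandwidth and compute constraints hold on average.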

Results & Findings

| Scenario | Baseline (best) | TAMP AP ↑ | Bandwidth usage (Mbps) | Avg. AoI (ms) |
| --- | --- | --- | --- | --- |
| Intersection (dense traffic) | Static‑AoI | +27 % | 12.3 (≈ same) | 45 → 30 |
| Corridor (highway) | RL‑scheduler | +19 % | 9.8 (≈ same) | 38 → 26 |
| Varying bandwidth (5–15 Mbps) | All baselines | +10–27 % across the range | ≤ budget | reduced by 15–30 % |

  • Accuracy boost: The AP gains stem from delivering fresher, higher‑fidelity features precisely where they matter most (e.g., approaching intersections).
  • Resource efficiency: TAMP respects the same bandwidth budget as the baselines; the improvement comes from smarter decisions about which region’s data to send, not from sending more data overall.
  • Robustness: Performance holds across different traffic densities and network conditions, indicating the algorithm adapts well to dynamic environments.

Practical Implications

  • Edge‑AI pipelines: Developers building V2X (vehicle‑to‑everything) stacks can integrate TAMP as a lightweight scheduler that runs on RSUs or on‑vehicle gateways, requiring only per‑slot worth calculations (no heavy RL training).
  • Network planning: City planners can use the penalty model to estimate the minimum bandwidth needed to achieve a target perception quality, aiding 5G/6G rollout decisions (see the sketch after this list).
  • Safety‑critical systems: By guaranteeing fresher perception data where it matters most (e.g., crossing pedestrians), TAMP can directly improve collision‑avoidance algorithms without extra sensor hardware.
  • Scalable CP frameworks: The multi‑region formulation fits naturally into existing cooperative perception standards (e.g., ETSI C‑ITS), allowing incremental deployment across heterogeneous sensor fleets.
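
As a rough illustration of the network‑planning point above, one could invert a fitted penalty model to back out the smallest per‑region data volume that keeps the penalty below a target, then convert that volume into a bandwidth requirement. The snippet below reuses the illustrative closed‑form penalty from the earlier sketch; all constants are assumptions, not figures from the paper.

```python
def min_bandwidth_mbps(target_penalty, expected_aoi_ms, update_period_s=0.1,
                       alpha=0.01, beta=1e-6, full_volume=1e6):
    """Invert the stand-in penalty P = alpha*AoI + beta*max(0, full_volume - volume)
    to find the smallest volume (and hence bandwidth) that meets a penalty target."""
    residual = target_penalty - alpha * expected_aoi_ms
    if residual < 0:
        return float("inf")  # target unreachable at this AoI, no matter the volume
    needed_bits = max(0.0, full_volume - residual / beta)
    return needed_bits / update_period_s / 1e6  # bits per update period -> Mbps


# e.g. tolerate penalty 0.6 at an expected AoI of 30 ms, with 10 Hz feature updates
print(min_bandwidth_mbps(target_penalty=0.6, expected_aoi_ms=30))  # -> 7.0 Mbps
```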

Limitations & Future Work

  • Empirical penalty model: The function P(AoI, volume) is fitted on a specific dataset; its transferability to other sensor modalities (radar, thermal) or different urban layouts needs further validation.
  • Assumed perfect scheduling granularity: The current implementation assumes slot‑level decisions; real‑world MAC layers (e.g., LTE‑V, C‑V2X) may impose coarser timing constraints.
  • Static compute budget: The study treats on‑device compute capacity as fixed; future work could explore dynamic offloading to edge servers or adaptive compression schemes.
  • Security & privacy: The paper does not address authentication or encryption overhead, which could affect AoI and bandwidth budgets in production deployments.

Overall, the TAMP framework offers a pragmatic, theory‑backed approach for developers who need to squeeze the most perception value out of limited communication resources in collaborative, multi‑region sensing systems.

Authors

  • Mengmeng Zhu
  • Yuxuan Sun
  • Yukuan Jia
  • Wei Chen
  • Bo Ai
  • Sheng Zhou

Paper Information

  • arXiv ID: 2601.04542v1
  • Categories: cs.LG, cs.DC
  • Published: January 8, 2026