[Paper] Requirements Debt in AI-Enabled Perception Systems Development: An Industrial RE4AI Perspective

Published: April 30, 2026 at 09:11 AM EDT

Source: arXiv - 2604.27825v1

Overview

The paper investigates Requirements Debt (ReD)—the hidden cost that builds up when the requirements of AI‑enabled perception systems in cars are not kept up‑to‑date, documented, or verified. By linking technical‑debt theory with requirements engineering for AI (RE4AI), the authors reveal how rapidly evolving functional and non‑functional demands can erode safety, auditability, and certification readiness in modern automotive perception stacks.

Key Contributions

  • Conceptualisation of Requirements Debt for AI perception – defines ReD as a distinct subtype of technical debt specific to AI‑driven automotive systems.
  • Empirical grounding – 16 semi‑structured interviews across 13 automotive firms and 3 research institutes, analyzed via thematic analysis.
  • Identification of ReD mechanisms – maps how evolving functional requirements (e.g., algorithm updates, sensor‑fusion changes) and non‑functional requirements (e.g., safety, cybersecurity, transparency) generate concrete debt patterns such as semantic drift, validation backlogs, assurance lag, and compliance misalignment.
  • Propagation model – shows how ReD spreads across data, model, and system artefacts, affecting audit trails, reliability, and certification readiness.
  • Practical checklist – provides a set of observable “symptoms” (e.g., growing test‑suite gaps, undocumented model version changes) that practitioners can use to detect early signs of ReD.
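
The paper's checklist of observable symptoms lends itself to automation. The sketch below shows, under assumed names and data shapes (none of which are taken from the paper), how two of the listed symptoms could be turned into simple probes over project metadata:

```python
from dataclasses import dataclass

# Illustrative probes for two ReD symptoms named in the paper:
# growing test-suite gaps and undocumented model version changes.
# The snapshot fields and symptom keys are assumptions for this sketch.

@dataclass
class ProjectSnapshot:
    requirements: set          # requirement IDs currently tracked
    tested_requirements: set   # requirement IDs covered by passing tests
    model_versions: list       # model versions that have been deployed
    documented_versions: set   # versions with a provenance record

def red_symptoms(snap: ProjectSnapshot) -> dict:
    """Flag early Requirements Debt symptoms from project metadata."""
    return {
        "test_suite_gap": bool(snap.requirements - snap.tested_requirements),
        "undocumented_model_change": any(
            v not in snap.documented_versions for v in snap.model_versions
        ),
    }

snap = ProjectSnapshot(
    requirements={"REQ-001", "REQ-002"},
    tested_requirements={"REQ-001"},
    model_versions=["v1.0", "v1.1"],
    documented_versions={"v1.0"},
)
print(red_symptoms(snap))  # flags both symptoms for this snapshot
```

A real deployment would feed such probes from the requirements tool, test reports, and model registry rather than hand-built snapshots.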

Methodology

  1. Interview design – 16 semi‑structured interviews with senior engineers, safety analysts, and AI researchers from a mix of OEMs, Tier‑1 suppliers, and academic labs.
  2. Sampling – purposive sampling ensured coverage of different vehicle platforms, perception modalities (camera, lidar, radar), and maturity levels of AI integration.
  3. Thematic analysis – transcripts were coded inductively to surface recurring patterns, then grouped into higher‑level themes representing functional and non‑functional requirement evolution.
  4. Triangulation – findings were cross‑checked with internal documentation (e.g., change‑request logs, safety case updates) provided by participants to validate the debt mechanisms.

Results & Findings

| Area | Main ReD Mechanism | Concrete Effect |
|---|---|---|
| Functional requirements | Algorithm updates & sensor‑fusion changes | Semantic drift – the meaning of a requirement (e.g., “detect pedestrians within 30 m”) diverges from the model’s actual behavior, leading to hidden bugs. |
| | Real‑time constraints & architectural tweaks | Validation backlog – verification activities cannot keep pace, creating a queue of untested changes. |
| | Rapid iteration cycles | Integration debt – mismatched interfaces between perception modules and downstream ADAS functions. |
| Non‑functional requirements | Safety & cybersecurity standards evolution | Assurance lag – safety cases become outdated, forcing costly re‑certification later. |
| | Transparency & trustworthiness expectations | Transparency debt – insufficient documentation of model provenance, making audits and explainability analyses difficult. |
| | Scalability & reliability pressures | Reliability debt – performance regressions in edge cases remain undiscovered until field failures occur. |

The study shows these mechanisms are interdependent: a lag in functional validation often amplifies non‑functional assurance gaps, and vice‑versa. Over time, the accumulated debt compromises the system’s auditability and readiness for safety certification (e.g., ISO 26262, UNECE R155).
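
The propagation idea can be made concrete with a toy dependency graph. The artifacts and edges below are an assumed example, not taken from the paper; the point is only that debt introduced in one artifact reaches everything transitively downstream of it:

```python
# Toy sketch of ReD propagation across artifacts. The graph maps each
# downstream artifact to the upstream artifacts it depends on; the
# artifact names and edges here are illustrative assumptions.

DEPENDS_ON = {
    "model": ["data"],
    "perception_stack": ["model"],
    "safety_case": ["perception_stack"],
    "audit_trail": ["perception_stack", "safety_case"],
}

def affected_by(debt_source: str) -> set:
    """Return all artifacts transitively downstream of a debt source."""
    affected, frontier = set(), {debt_source}
    while frontier:
        frontier = {
            art for art, ups in DEPENDS_ON.items()
            if any(u in frontier for u in ups) and art not in affected
        }
        affected |= frontier
    return affected

print(sorted(affected_by("data")))
# a stale data requirement ultimately touches the safety case and audit trail
```

In this toy model, outdated requirements on training data contaminate the model, the perception stack built on it, and finally the safety case and audit trail, mirroring the certification-readiness risk the study describes.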

Practical Implications

  • Continuous Requirements Traceability – Adopt tooling that automatically links requirement IDs to data sets, model versions, and test cases. This reduces semantic drift and makes audit trails explicit.
  • Debt‑aware Sprint Planning – Treat “requirements debt tickets” like technical‑debt stories; allocate dedicated capacity each sprint to close validation backlogs and update safety cases.
  • Automated Compliance Checks – Integrate standards‑checking scripts (e.g., for ISO 26262 safety goals) into CI pipelines so that any change that violates a non‑functional requirement raises an immediate flag.
  • Model‑Centric Documentation – Store model provenance (training data provenance, hyper‑parameters, evaluation metrics) in a version‑controlled registry that can be queried during certification audits.
  • Risk‑Based Testing Prioritization – Use the identified ReD symptoms to prioritize test‑case generation for high‑impact perception scenarios (e.g., night‑time pedestrian detection).
  • Organizational Alignment – Encourage cross‑team “requirements debt reviews” where safety engineers, data scientists, and system architects jointly assess the health of the requirement‑artifact traceability graph.
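
The first and third recommendations can be combined into a CI-style gate. The trace-matrix format, requirement IDs, and rule below are assumptions made for illustration, not an implementation described in the paper:

```python
# Hypothetical CI gate: fail the pipeline when a requirement loses one of
# its links to a dataset, model version, or test case. The matrix shape
# and requirement IDs are invented for this sketch.

TRACE_MATRIX = {
    # requirement ID -> linked artifacts
    "REQ-PED-30M": {"dataset": "ped_v3", "model": "v2.4", "test": "test_ped_30m"},
    "REQ-CYBER-01": {"dataset": "fuzz_v1", "model": "v2.4", "test": None},
}

def ci_gate(matrix: dict) -> list:
    """Return traceability violations that should fail the build."""
    violations = []
    for req, links in matrix.items():
        for kind, artifact in links.items():
            if artifact is None:
                violations.append(f"{req}: missing {kind} link")
    return violations

issues = ci_gate(TRACE_MATRIX)
print("\n".join(issues) if issues else "traceability OK")
```

Wired into a pipeline, a non-empty result would raise the immediate flag the authors call for, turning semantic drift and transparency debt into a build failure rather than a certification-time surprise.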

Limitations & Future Work

  • Sample bias – The interview pool, while diverse, is skewed toward large European OEMs and research institutes; startups or non‑European contexts may exhibit different debt patterns.
  • Self‑reported data – Findings rely on participants’ recollection and willingness to disclose internal challenges, which could under‑represent the most severe debt cases.
  • Tooling validation – The paper proposes a conceptual debt‑tracking checklist but does not evaluate concrete tooling implementations in an industrial setting.
  • Future directions – The authors suggest building automated traceability pipelines, quantifying debt impact on certification timelines, and extending the study to other AI domains (e.g., predictive maintenance, driver monitoring).

Bottom line for developers: As AI becomes a core component of automotive perception, the requirements you once wrote and left alone now behave like living code. Ignoring the debt that accrues when those requirements evolve can stall certification, inflate maintenance costs, and jeopardize safety. Embedding continuous traceability, debt‑aware planning, and automated compliance into your development workflow is no longer optional—it’s a prerequisite for delivering trustworthy, road‑ready AI systems.

Authors

  • Hina Saeeda
  • Soniya Abraham

Paper Information

  • arXiv ID: 2604.27825v1
  • Categories: cs.SE
  • Published: April 30, 2026
  • PDF: Download PDF