[Paper] An Exploratory Pilot Survey on Technical Quality Control Practices in Agile R&D Projects

Published: January 10, 2026 at 04:24 PM EST
4 min read
Source: arXiv - 2601.06689v1

Overview

This pilot survey investigates how agile R&D teams—working under Scrum—actually handle technical quality control. By polling software professionals at science‑and‑technology institutions in Manaus, Brazil, the authors expose the gap between the prescribed quality practices (e.g., automated testing, CI) and their real‑world execution in high‑uncertainty, experimental projects.

Key Contributions

  • Empirical baseline for technical‑quality activities in agile R&D settings, a domain that has received little systematic study.
  • Mixed‑methods data set (quantitative questionnaire + open‑ended comments) that captures both what teams think they do and the nuances behind their answers.
  • Identification of practice inconsistencies: automated testing, code review, and continuous integration are widely known but applied unevenly across sprints.
  • Highlight of metric blind spots: teams rarely monitor technical‑quality metrics (e.g., defect density, test coverage) or assess technical debt from a business/value perspective.
  • Contextual insight into how regional innovation ecosystems (the Manaus STI cluster) shape quality‑control decisions.

Methodology

  1. Target population – software engineers, QA specialists, and R&D managers from 12 science‑and‑technology institutions in Manaus.
  2. Instrument – a structured questionnaire (≈30 items) covering:
    • Adoption of Scrum artifacts and ceremonies.
    • Use of specific quality practices (unit tests, code reviews, CI pipelines).
    • Tracking of technical‑quality metrics and technical‑debt indicators.
    • Perceived challenges and benefits.
  3. Data collection – online survey distributed via institutional mailing lists; 48 complete responses were received (≈15 % response rate).
  4. Analysis – descriptive statistics for closed‑ended items; thematic coding of free‑text answers to surface recurring pain points and rationales.
  5. Validity checks – pilot testing the questionnaire with a small internal group, followed by minor wording adjustments before the main rollout.
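The analysis step above (descriptive statistics for closed-ended items) can be sketched as follows. The response rows and item names here are hypothetical illustrations for a survey of this shape, not the authors' actual dataset.

```python
from collections import Counter

# Hypothetical closed-ended responses, one dict per respondent; the real
# survey had ~30 items and 48 complete responses.
responses = [
    {"runs_tests_each_sprint": "yes", "formal_code_review": "no"},
    {"runs_tests_each_sprint": "no",  "formal_code_review": "no"},
    {"runs_tests_each_sprint": "yes", "formal_code_review": "yes"},
]

def item_distribution(rows, item):
    """Descriptive statistics for one closed-ended item: share per answer."""
    counts = Counter(row[item] for row in rows)
    total = sum(counts.values())
    return {answer: round(n / total, 2) for answer, n in counts.items()}

print(item_distribution(responses, "runs_tests_each_sprint"))
# → {'yes': 0.67, 'no': 0.33}
```

Free-text answers would instead go through thematic coding, which is a manual, interpretive step rather than something this kind of script can replace.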

Results & Findings

| Area | What the data show | Interpretation |
| --- | --- | --- |
| Automated testing | 78 % acknowledge it, but only 42 % run tests every sprint. | Teams value testing but lack consistent integration into the sprint cadence. |
| Code review | 71 % claim to perform reviews, yet 35 % do it ad‑hoc rather than as a formal gate. | Cultural or time‑pressure factors undermine systematic review adoption. |
| Continuous Integration (CI) | 65 % have a CI server, but 28 % trigger builds only before releases. | CI is often treated as a “release‑only” tool, missing its potential for early feedback. |
| Metric monitoring | <30 % track defect density or test coverage; <15 % maintain a technical‑debt register. | Quantitative quality signals are largely absent, making debt invisible to stakeholders. |
| Business‑oriented debt assessment | Only 9 % link technical debt to cost or value impact. | Decision‑makers lack a common language to prioritize refactoring versus feature work. |
| Challenges reported | High experimental uncertainty, pressure to deliver prototypes, limited time for non‑functional work. | The R&D context pushes functional delivery to the forefront, relegating quality activities to “nice‑to‑have.” |

Overall, the survey paints a picture of partial compliance: teams know the “right” practices but struggle to embed them consistently due to the exploratory nature of R&D work.

Practical Implications

  • Tooling pipelines: automate gate steps (e.g., require a passing test suite before a pull request can be merged) to reduce reliance on manual discipline.
  • Metric dashboards: lightweight dashboards (e.g., test‑coverage badge, simple debt‑ratio chart) can make quality signals visible to product owners, aligning technical debt with business priorities.
  • Sprint planning tweaks: allocate a fixed “quality buffer” (e.g., 10 % of story points) each sprint for refactoring and debt repayment; treat it as a first‑class deliverable.
  • Training & culture: emphasize the business cost of poor quality (maintenance effort, rework) in R&D retrospectives to shift perception from “nice‑to‑have” to “must‑have.”
  • Tailored Scrum ceremonies: introduce a brief “quality check” at the end of each daily stand‑up or sprint review to surface technical concerns early.
  • Regional ecosystem support: for innovation clusters similar to Manaus, local funding bodies could incentivize quality‑control maturity (e.g., grant bonuses for documented CI/CD pipelines).
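The gate and dashboard ideas above can be sketched as a small script. The coverage threshold, metric names, and numbers below are illustrative assumptions, not values from the paper; real teams would wire such checks into their CI server or branch-protection rules.

```python
# Minimal sketch of a merge gate plus a technical-debt signal, assuming the
# coverage and effort figures are collected elsewhere (values illustrative).

COVERAGE_THRESHOLD = 0.80  # assumed team policy, not from the paper

def merge_gate(tests_passed: bool, coverage: float) -> bool:
    """Block a merge unless the test suite passes and coverage meets the bar."""
    return tests_passed and coverage >= COVERAGE_THRESHOLD

def debt_ratio(remediation_hours: float, development_hours: float) -> float:
    """Technical-debt ratio: estimated fix effort relative to build effort,
    a simple number a product owner can weigh against feature work."""
    return remediation_hours / development_hours

print(merge_gate(tests_passed=True, coverage=0.84))  # → True (gate opens)
print(round(debt_ratio(120, 1500), 3))               # → 0.08
```

Surfacing `debt_ratio` on a dashboard alongside coverage gives stakeholders the business-oriented debt view that only 9 % of surveyed teams currently have.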

Developers and tech leads can use these findings as a checklist for self‑audit:

  • Do we run automated tests every sprint?
  • Is code review a formal gate?
  • Do we surface a technical‑debt metric to product owners?

Answering “no” signals an opportunity for quick, high‑impact improvement.
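The checklist above can even be kept as a tiny script in the repository; the three flags mirror the questions and would be filled in by each team (the values here are illustrative).

```python
# Self-audit sketch: each flag answers one checklist question above.
audit = {
    "tests_every_sprint": True,
    "review_is_formal_gate": False,
    "debt_metric_visible_to_po": False,
}

# Every False entry is a quick, high-impact improvement opportunity.
gaps = [item for item, ok in audit.items() if not ok]
print(gaps)  # → ['review_is_formal_gate', 'debt_metric_visible_to_po']
```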

Limitations & Future Work

  • Sample size & geography – 48 respondents from a single Brazilian region limit generalizability; practices may differ in larger, more mature tech hubs.
  • Self‑reported data – Responses may suffer from social desirability bias; actual practice adherence could be lower than reported.
  • Cross‑sectional design – The snapshot does not capture how practices evolve over the lifecycle of an R&D project.

Future research could expand the survey to multiple countries, incorporate longitudinal case studies, and experiment with interventions (e.g., introducing automated quality dashboards) to measure causal impact on technical debt and delivery speed.

Authors

  • Mateus Costa Lucena

Paper Information

  • arXiv ID: 2601.06689v1
  • Categories: cs.SE
  • Published: January 10, 2026