[Paper] An Empirical Evaluation of Code Smell Detection in Angular Applications

Published: April 30, 2026 at 10:08 AM EDT
4 min read
Source: arXiv

Overview

The paper presents the first systematic catalog of Angular‑specific code smells gathered from community‑driven (“grey”) literature, and it validates an open‑source static analysis tool that can automatically spot these smells in real‑world projects. By showing that the detector reaches > 0.88 accuracy and F1‑scores up to 1.00, the authors demonstrate that reliable, framework‑aware quality checks are now feasible for large‑scale Angular applications.

Key Contributions

  • Catalog of 11 Angular code smells derived from a comprehensive grey‑literature review (forums, blogs, GitHub issues, conference talks).
  • Cross‑framework analysis showing that 6 of the smells also appear in React, highlighting universal front‑end design pitfalls.
  • Static analysis prototype (an ESLint‑compatible plugin) that automatically detects the cataloged smells.
  • Empirical evaluation on a manually validated dataset (≈ 200 open‑source Angular repos) with precision and recall ≥ 0.88 for every smell.
  • Open‑source release of the smell definitions and detection rules, enabling immediate adoption and further research.
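A catalog like this can be modeled as a small typed record. The sketch below is illustrative only: the field names, scope labels, and React‑overlap flags are assumptions for demonstration, not the paper's actual schema, and only the three smells named later in this article are shown.

```typescript
// Hypothetical model of a catalogued smell entry; field names and flag
// values are illustrative, not taken from the paper's actual data.
type Scope = "component" | "template" | "module";

interface SmellEntry {
  name: string;
  scope: Scope;
  alsoInReact: boolean; // cross-framework overlap (6 of 11 in the paper)
}

const catalog: SmellEntry[] = [
  { name: "Component Overloading", scope: "component", alsoInReact: true },
  { name: "Duplicated Logic", scope: "component", alsoInReact: true },
  { name: "Inefficient Template Bindings", scope: "template", alsoInReact: false },
];

// Count the entries flagged as shared with React.
const shared = catalog.filter(s => s.alsoInReact).length;
```

A structured catalog like this makes the cross‑framework analysis a simple filter rather than a manual comparison.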

Methodology

  1. Grey‑literature mining – The authors searched Stack Overflow, Reddit, Angular community blogs, and conference talks for practitioner‑reported “bad practices”. Each candidate was screened, grouped, and refined into a concrete smell definition.
  2. Smell taxonomy – The 11 smells were classified by their technical nature (e.g., component‑level, template‑level, module‑level).
  3. Tool implementation – Using the Angular Language Service and ESLint AST APIs, the team encoded each smell as a rule that inspects component classes, templates, and module metadata.
  4. Dataset construction – 200+ Angular repositories (spanning versions 9–15) were sampled, and 1,200 code fragments were manually labeled as “smelly” or “clean”.
  5. Evaluation metrics – Standard IR measures (precision, recall, F1‑score, accuracy) were computed per smell and aggregated across the whole suite.
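The metrics in step 5 are the standard IR formulas over a per‑smell confusion matrix. A minimal sketch (the example counts are illustrative, not the paper's actual data):

```typescript
// Per-smell confusion counts: true/false positives and negatives.
interface Confusion { tp: number; fp: number; fn: number; tn: number; }

function precision({ tp, fp }: Confusion): number {
  return tp / (tp + fp);
}
function recall({ tp, fn }: Confusion): number {
  return tp / (tp + fn);
}
function f1(c: Confusion): number {
  const p = precision(c), r = recall(c);
  return (2 * p * r) / (p + r); // harmonic mean of precision and recall
}
function accuracy({ tp, fp, fn, tn }: Confusion): number {
  return (tp + tn) / (tp + fp + fn + tn);
}

// Illustrative counts for one smell (not from the paper's dataset).
const c: Confusion = { tp: 90, fp: 10, fn: 10, tn: 90 };
```

Computing the four measures per smell and then aggregating across the suite yields the ranges reported below.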

Results & Findings

Metric      Range
Accuracy    0.88 – 0.96
Precision   0.89 – 1.00
Recall      0.88 – 1.00
F1‑Score    0.89 – 1.00
  • High detection reliability across all smells, with the toughest smell (“Component Overloading”) still achieving an F1 of 0.89.
  • Most prevalent issues in the wild:
    • Component Overloading – a single component handling too many responsibilities.
    • Duplicated Logic – identical code snippets spread across multiple components/services.
    • Inefficient Template Bindings – bindings that trigger unnecessary change‑detection cycles.
  • Cross‑framework overlap confirms that many front‑end quality problems are framework‑agnostic, but Angular also exhibits unique patterns (e.g., heavy reliance on NgModules).
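The "Inefficient Template Bindings" smell typically arises when a template calls a method, which Angular re‑evaluates on every change‑detection pass; the common fix is to precompute the value (or use a pure pipe). The principle can be sketched in plain TypeScript, stripped of Angular specifics, with a call counter standing in for change‑detection cost:

```typescript
// Counts how often the expensive computation actually runs.
let expensiveCalls = 0;

function expensiveFormat(items: number[]): string {
  expensiveCalls++;
  return items.map(n => n.toFixed(2)).join(", ");
}

const items = [1, 2, 3];

// Smelly: re-evaluated on every render pass, like {{ format(items) }}
// in a template — 5 passes means 5 recomputations.
for (let pass = 0; pass < 5; pass++) expensiveFormat(items);

// Clean: compute once and bind the cached value, as a pure pipe or a
// precomputed property would — 5 passes, 1 recomputation.
const cached = expensiveFormat(items);
let renders = 0;
for (let pass = 0; pass < 5; pass++) renders += cached.length;
```

In a real component, the cached value would be recomputed only when its inputs change, which is exactly what Angular's pure pipes guarantee.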

Practical Implications

  • Integrate into CI pipelines – The ESLint‑based detector can be added to GitHub Actions, GitLab CI, or Azure DevOps to catch smells before they ship.
  • Refactoring guidance – Each rule includes a short “why it matters” note and a suggested remediation, turning raw warnings into actionable tickets.
  • Performance gains – Fixing “Inefficient Template Bindings” often reduces Angular’s change‑detection workload, leading to measurable UI latency improvements.
  • Team onboarding – New developers can use the smell catalog as a style guide, accelerating the learning curve for large Angular codebases.
  • Tool ecosystem – Because the detector is built on ESLint, it can coexist with existing lint rules (e.g., @angular-eslint), providing a unified linting experience.
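Coexistence with existing lint rules could look like the following config sketch. The plugin and rule names below are hypothetical placeholders; the actual identifiers come from the authors' open‑source release.

```javascript
// .eslintrc.cjs — sketch only; "angular-smells" and its rule names are
// hypothetical placeholders, not the released plugin's real identifiers.
module.exports = {
  extends: ["plugin:@angular-eslint/recommended"],
  plugins: ["angular-smells"],
  rules: {
    "angular-smells/component-overloading": "warn",
    "angular-smells/duplicated-logic": "warn",
    "angular-smells/inefficient-template-bindings": "error",
  },
};
```

Because ESLint merges plugins into a single run, the smell rules would surface in the same report (and the same CI gate) as the rest of the lint suite.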

Limitations & Future Work

  • Dataset bias – The evaluation set consists mainly of open‑source projects; enterprise codebases with stricter architectural constraints may exhibit different smell distributions.
  • Version coverage – While the tool supports Angular 9–15, upcoming Ivy‑only features (e.g., standalone components) could introduce new smell categories not captured here.
  • Dynamic analysis missing – Some smells (e.g., runtime performance bottlenecks) may only be observable at execution time; combining static and dynamic analysis is a promising direction.
  • User study needed – The paper does not assess developer acceptance or the impact of automated fixes on productivity; future work could involve controlled experiments with development teams.

If you maintain an Angular front‑end, consider adding the authors’ ESLint plugin to your linting stack today. Not only will it surface hidden design debt, but it also gives you a concrete, data‑backed roadmap for incremental refactoring.

Authors

  • Maykon Nunes
  • Emanuel Coutinho
  • Carla Bezerra
  • Ivan Machado

Paper Information

  • arXiv ID: 2604.27893v1
  • Categories: cs.SE
  • Published: April 30, 2026
  • PDF: Download PDF