[Paper] Resume-ing Control: (Mis)Perceptions of Agency Around GenAI Use in Recruiting Workflows

Published: April 29, 2026

Source: arXiv - 2604.26851v1

Overview

The paper Resume‑ing Control dives into how recruiting professionals experience agency when they incorporate generative AI (genAI) tools into hiring pipelines. By interviewing 22 recruiters, the authors reveal a hidden tension: while recruiters feel they retain ultimate decision‑making power, genAI quietly reshapes the very data and cues they rely on—raising concerns about deskilling, oversight, and the true value of AI‑driven efficiency.

Key Contributions

  • Empirical insight into recruiters’ perceived control and agency when using genAI in day‑to‑day hiring tasks.
  • Identification of an “invisible architect” role for genAI that influences job‑description creation, candidate summarization, and interview‑performance evaluation.
  • Evidence of adoption pressure from leadership, competitive applicant use of AI, and personal productivity goals, often overriding individual recruiters’ choice.
  • Documentation of marginal efficiency gains juxtaposed with notable recruiter deskilling and reduced oversight capability.
  • Recommendations for responsible, “perceptible” AI integration that preserves human oversight in high‑stakes hiring decisions.

Methodology

The researchers conducted semi‑structured interviews with 22 recruiting professionals from a mix of industries and company sizes. Interview topics covered:

  1. Typical recruiting workflow
  2. Points where genAI tools are introduced
  3. Perceived impact on decision‑making authority
  4. Overall satisfaction with AI‑augmented processes

Transcripts were coded using thematic analysis, allowing patterns around control, adoption pressure, and perceived efficiency to emerge. The qualitative approach keeps the findings grounded in real‑world recruiter experiences rather than abstract metrics.

Results & Findings

| Finding | What It Means |
| --- | --- |
| Recruiters claim final authority but rely on AI‑generated job descriptions, candidate summaries, and interview scorecards. | AI is shaping the inputs to human decisions, subtly steering outcomes without recruiters realizing it. |
| Adoption is often top‑down (executive mandates, fear of being out‑competed by AI‑savvy applicants). | Individual recruiters have limited say in whether to use genAI, reducing perceived autonomy. |
| Efficiency gains are modest (≈10–15 % time saved on routine tasks). | The promised productivity boost does not offset the hidden cost of skill erosion. |
| Deskilling observed: recruiters feel less confident evaluating raw resumes or conducting unbiased interviews. | Over‑reliance on AI may erode critical hiring expertise, jeopardizing future oversight. |
| Perceived risk of “black‑box” influence: AI suggestions are taken as facts, limiting critical questioning. | Transparency and explainability become essential to maintain trustworthy hiring decisions. |

Practical Implications

  • Tool Designers: Build transparent genAI interfaces that surface provenance (e.g., “This job description was auto‑generated from X keywords”) and allow easy editing, so recruiters stay aware of AI’s contribution.
  • Product Managers: Prioritize features that augment rather than replace recruiter judgment—e.g., suggestion panels, confidence scores, and audit logs.
  • HR Tech Vendors: Offer training modules that teach recruiters how to critically evaluate AI outputs, preserving their expertise and preventing deskilling.
  • Engineering Teams: Implement human‑in‑the‑loop checkpoints where AI‑generated content must be approved or annotated before it reaches downstream stages.
  • Organizational Leaders: Recognize that mandating AI adoption without addressing recruiter agency can backfire; align AI rollout with clear governance policies and measurable ROI beyond superficial time savings.

Limitations & Future Work

  • Sample Size & Diversity: The study focuses on 22 recruiters, primarily from North American firms; broader cross‑regional studies could uncover cultural or regulatory variations.
  • Tool Specificity: Interviews did not differentiate between distinct genAI products (e.g., large‑language‑model chatbots vs. specialized résumé parsers), limiting granularity of insights.
  • Longitudinal Impact: The research captures a snapshot in time; future work should track how recruiter skill levels and decision quality evolve with prolonged AI exposure.
  • Quantitative Validation: Pairing qualitative findings with performance metrics (e.g., hiring quality, turnover rates) would strengthen claims about efficiency vs. deskilling trade‑offs.

Resume‑ing Control shines a light on the subtle ways genAI can re‑architect hiring workflows while leaving recruiters feeling both in charge and constrained. For developers, product teams, and tech leaders building the next generation of AI‑powered HR tools, the takeaway is clear: design for perceptibility, human agency, and transparent oversight—or risk turning powerful AI assistants into invisible decision‑makers that erode the very expertise they were meant to amplify.

Authors

  • Sajel Surati
  • Rosanna Bellini
  • Emily Black

Paper Information

  • arXiv ID: 2604.26851v1
  • Categories: cs.CY, cs.AI
  • Published: April 29, 2026