[Paper] Institutionalizing Best Practices in Research Computing: A Framework and Case Study for Improving User Onboarding

Published: April 23, 2026 at 01:47 PM EDT
4 min read
Source: arXiv - 2604.21898v1

Overview

Onboarding new researchers to high‑performance computing (HPC) and other research‑computing services is a chronic pain point at many universities and national labs. Chaturvedi et al. propose a repeatable framework that turns ad‑hoc “how‑to” guides into a structured, institutionalized onboarding process. They validate the approach with a real‑world deployment at Washington University’s Research Infrastructure Services, showing measurable improvements in user satisfaction and support efficiency.

Key Contributions

  • A reusable onboarding framework that maps best‑practice activities (documentation, training, mentorship, feedback loops) onto the lifecycle of a new user.
  • A concrete implementation guide (roles, artifacts, timelines) that can be adopted by any research‑computing organization regardless of size or platform.
  • Empirical evaluation through a case study at Washington University, including quantitative metrics (ticket volume, time‑to‑first‑run) and qualitative user feedback.
  • Open‑source tooling (templates, checklists, and a lightweight dashboard) released under a permissive license to encourage community adoption.
  • A set of design principles (simplicity, scalability, and continuous improvement) that bridge the gap between academic documentation culture and industry‑grade service delivery.

Methodology

  1. Problem Scoping – Interviews and surveys with faculty, graduate students, and support staff identified the most common onboarding bottlenecks (e.g., account provisioning, environment setup, software discovery).
  2. Framework Design – The authors distilled the onboarding journey into four phases: Pre‑Arrival, First‑Login, Skill‑Building, and Ongoing Support. For each phase they defined required artifacts (e.g., welcome email template, “quick‑start” notebook) and responsible roles (e.g., System Admin, Domain Expert, Mentor).
  3. Pilot Implementation – The framework was rolled out in a controlled pilot covering ~120 new users over a semester. Automation scripts (Ansible playbooks) handled account creation, while a lightweight web portal displayed personalized checklists and training resources.
  4. Data Collection – Metrics collected included: number of support tickets per user, average time from account creation to first successful job, and post‑onboarding survey scores.
  5. Analysis – Statistical comparison against a baseline cohort (previous semester) quantified the impact, and thematic analysis of open‑ended survey responses highlighted usability improvements.
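The data-collection step above can be sketched in code. The snippet below is a minimal illustration, not the authors' actual pipeline: the per-user record fields and the sample values are assumptions, but the three metrics computed (tickets per user, hours from account creation to first successful job, and survey score) are the ones the paper describes.

```python
from datetime import datetime
from statistics import mean

# Hypothetical per-user event records; field names and values are
# illustrative, not taken from the paper's dataset.
users = [
    {"account_created": datetime(2026, 1, 10, 9, 0),
     "first_successful_job": datetime(2026, 1, 10, 15, 30),
     "tickets": 2, "survey_score": 4},
    {"account_created": datetime(2026, 1, 11, 8, 0),
     "first_successful_job": datetime(2026, 1, 11, 14, 0),
     "tickets": 3, "survey_score": 5},
]

def onboarding_metrics(cohort):
    """Compute the three quantitative metrics described in the paper."""
    hours_to_first_job = [
        (u["first_successful_job"] - u["account_created"]).total_seconds() / 3600
        for u in cohort
    ]
    return {
        "avg_tickets_per_user": mean(u["tickets"] for u in cohort),
        "avg_hours_to_first_job": mean(hours_to_first_job),
        "avg_survey_score": mean(u["survey_score"] for u in cohort),
    }

print(onboarding_metrics(users))
```

Computing the same metrics for the baseline cohort and the pilot cohort is then enough to produce the before/after comparison reported in the results.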

Results & Findings

| Metric | Baseline (pre‑framework) | Post‑implementation |
| --- | --- | --- |
| Avg. tickets per new user | 4.2 | 2.1 (‑50%) |
| Time to first successful job (hrs) | 12.8 | 6.3 (‑51%) |
| Onboarding satisfaction (1‑5) | 3.1 | 4.4 (+42%) |
| Mentor‑to‑user ratio | 1:30 | 1:12 (more personalized) |
  • Reduced support load: The halved ticket count freed staff to focus on advanced troubleshooting and service enhancements.
  • Faster time‑to‑productivity: Researchers could run their first analysis within a workday, accelerating project timelines.
  • Higher perceived quality: Survey comments repeatedly mentioned “clear next steps” and “helpful checklists” as game‑changers.
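The percentage changes in the table follow directly from the baseline and post-implementation values. A quick check (values from the table above; the helper function is just illustrative arithmetic):

```python
def pct_change(baseline, post):
    """Relative change from baseline, rounded to the nearest whole percent."""
    return round((post - baseline) / baseline * 100)

# (baseline, post-implementation) pairs from the results table
results = {
    "tickets_per_user": (4.2, 2.1),
    "hours_to_first_job": (12.8, 6.3),
    "satisfaction": (3.1, 4.4),
}
for name, (before, after) in results.items():
    print(f"{name}: {pct_change(before, after):+d}%")
```

This reproduces the ‑50%, ‑51%, and +42% figures reported in the table.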

The case study also uncovered a “feedback loop” effect: as users completed onboarding tasks, they contributed back improvements to documentation, further reducing friction for subsequent cohorts.

Practical Implications

  • For HPC administrators: Adopt the framework’s checklist and role matrix to formalize onboarding SOPs, reducing reliance on tribal knowledge.
  • For DevOps teams: Leverage the provided Ansible playbooks, or adapt them to Terraform/Cloud‑Init, to automate account creation and environment setup.
  • For research groups: Assign a “technical mentor” early in the project lifecycle; the framework’s low‑overhead mentoring model is easy to fold into grant‑funded project budgets.
  • For software vendors: The structured onboarding pipeline offers a natural integration point for SaaS tools (e.g., JupyterHub, CI/CD pipelines) that can be auto‑registered as part of the user’s first‑login checklist.
  • For institutions: The measurable ROI (ticket reduction, faster research output) provides a compelling business case for investing in onboarding infrastructure rather than treating it as an afterthought.
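The first‑login checklist and role matrix lend themselves to an "onboarding as code" treatment, which the paper names as a future direction. The sketch below is a minimal illustration under assumptions: the phase names follow the paper's four‑phase model, but the task entries and the data layout are hypothetical.

```python
# Minimal "onboarding as code" sketch: the framework's phases expressed
# as data, from which a personalized checklist can be rendered.
# Phase names follow the paper; individual tasks are illustrative.
PHASES = {
    "Pre-Arrival": ["Send welcome email", "Provision cluster account"],
    "First-Login": ["Verify SSH access", "Run quick-start notebook"],
    "Skill-Building": ["Complete intro HPC training", "Meet assigned mentor"],
    "Ongoing Support": ["Join user forum", "Schedule quarterly check-in"],
}

def render_checklist(user, completed=()):
    """Render a per-user checklist with completion markers."""
    lines = [f"Onboarding checklist for {user}"]
    for phase, tasks in PHASES.items():
        lines.append(f"{phase}:")
        for task in tasks:
            mark = "x" if task in completed else " "
            lines.append(f"  [{mark}] {task}")
    return "\n".join(lines)

print(render_checklist("new_researcher", completed={"Send welcome email"}))
```

Because the checklist is plain data, the same structure can drive a web portal, feed a dashboard, or be versioned and improved by users themselves, mirroring the feedback‑loop effect the case study observed.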

Limitations & Future Work

  • Scope limited to a single institution: While the framework is designed to be generic, the validation was performed at one university; cross‑institutional studies are needed to confirm scalability.
  • Focus on Linux‑based HPC: The current artifacts assume a Unix‑like environment; extending the model to cloud‑native or Windows‑based research platforms will require additional adapters.
  • Mentor availability: The improved mentor‑to‑user ratio may be hard to sustain in larger organizations without dedicated onboarding staff.
  • Future directions include: (1) integrating AI‑driven chat assistants for on‑demand troubleshooting, (2) expanding the framework to cover data‑management onboarding, and (3) open‑sourcing a full “onboarding as code” library to enable community contributions.

Authors

  • Ayush Chaturvedi
  • Rob Pokorney
  • Elyn Fritz-Waters
  • Charlotte Rouse
  • Gary Bax
  • Daryl Spencer
  • Craig Pohl

Paper Information

  • arXiv ID: 2604.21898v1
  • Categories: cs.OH, cs.SE
  • Published: April 23, 2026
  • PDF: Download PDF
