[Paper] 'Can you feel the vibes?': An exploration of novice programmer engagement with vibe coding

Published: December 2, 2025 at 08:32 AM EST
3 min read
Source: arXiv

Overview

The paper investigates “vibe coding,” a new way of building software by feeding natural‑language prompts to generative AI instead of writing code line‑by‑line. By running a one‑day hackathon with 31 undergraduate novices from both technical and non‑technical majors, the authors explore how this approach affects creativity, collaboration, and learning in a low‑stakes setting.

Key Contributions

  • Empirical snapshot of novice engagement with AI‑driven, prompt‑based development in a real‑time hackathon.
  • Identification of workflow patterns: teams combined multiple AI tools in pipelines, using human judgment to stitch together and refine outputs.
  • Evidence that vibe coding lowers entry barriers, enabling rapid prototyping and cross‑disciplinary teamwork.
  • Insights into learning outcomes, notably the emergence of prompt‑engineering skills and confidence gains despite limited exposure to traditional software‑engineering practices.
  • Design recommendations for future educational events that leverage vibe coding while mitigating pitfalls such as premature idea convergence and uneven code quality.

Methodology

The researchers organized a 9‑hour hackathon at a Brazilian public university. Participants (31 undergraduates from computing and non‑computing fields) formed nine mixed‑experience teams. Data collection combined three methods:

  1. Direct observation of team activities and tool usage.
  2. Exit survey capturing self‑reported confidence, perceived learning, and satisfaction.
  3. Semi‑structured interviews conducted after the event to dig deeper into participants’ experiences, challenges, and reflections.

The mixed‑methods approach allowed the authors to triangulate quantitative survey results with qualitative narratives, producing a holistic view of how novices interact with vibe‑coding tools under time pressure.

Results & Findings

  • Rapid prototyping: All teams produced a functional demo within the 9‑hour window, demonstrating that natural‑language prompts can accelerate early‑stage development.
  • Prompt‑engineering emergence: Participants quickly learned to craft and iterate prompts, treating prompt design as a core skill rather than an afterthought.
  • Cross‑disciplinary collaboration: Non‑technical members contributed domain knowledge and UI ideas, while technical members focused on prompt refinement and debugging.
  • Workflow sophistication: Teams built pipelines that chained several AI services (code generators, test generators, UI designers) and manually edited outputs where needed; a sketch of this pattern follows this list.
  • Quality trade‑offs: The generated code often required substantial post‑processing; teams reported “premature convergence” on ideas, limiting exploration of alternatives.
  • Learning impact: Survey results showed a statistically significant boost in confidence to experiment with AI‑assisted coding, though participants acknowledged limited exposure to formal software‑engineering practices (e.g., version control, testing).
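
The chained-tool workflow is easiest to picture as a small pipeline with a human pass interleaved between AI stages. The Python sketch below is illustrative only: the stage functions, the Artifact container, and the example prompt are hypothetical stand-ins for the external AI services the teams actually used, stubbed out so the example runs.

```python
# A minimal sketch of the chained-tool pattern; the stage functions are
# hypothetical stubs standing in for real AI services.

from dataclasses import dataclass, field

@dataclass
class Artifact:
    """Carries the team's work product between pipeline stages."""
    prompt: str
    code: str = ""
    tests: str = ""
    notes: list = field(default_factory=list)

def generate_code(a: Artifact) -> Artifact:
    # Stand-in for a call to a code-generation service.
    a.code = f"# code drafted from prompt: {a.prompt!r}"
    return a

def generate_tests(a: Artifact) -> Artifact:
    # Stand-in for a call to a test-generation service.
    a.tests = "# tests drafted against the generated code"
    return a

def human_review(a: Artifact) -> Artifact:
    # The pattern the paper highlights: human judgment stitches stages together.
    a.notes.append("human pass: trimmed output, rephrased prompt for next stage")
    return a

def run_pipeline(prompt: str) -> Artifact:
    artifact = Artifact(prompt=prompt)
    for stage in (generate_code, human_review, generate_tests, human_review):
        artifact = stage(artifact)
    return artifact

if __name__ == "__main__":
    result = run_pipeline("build a habit-tracker web form")
    print(result.code, result.tests, *result.notes, sep="\n")
```

The point of the shape is the interleaving: each AI stage is followed by a human review pass rather than feeding one model's output straight into the next.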

Practical Implications

  • Low‑cost onboarding: Organizations can run short, inclusive hackathons to introduce developers, designers, and domain experts to AI‑assisted coding without demanding deep prior programming knowledge.
  • Prompt‑engineering curricula: Educational programs should treat prompt design as a teachable skill, integrating it alongside traditional coding modules.
  • Hybrid pipelines: Teams can adopt the “human‑in‑the‑loop” model demonstrated in the study, using AI for scaffolding while developers focus on validation, security, and integration.
  • Rapid MVP creation: Start‑ups and product teams can leverage vibe coding for quick proof‑of‑concepts, especially when time‑to‑market is critical.
  • Tool‑agnostic best practices: The findings suggest that scaffolding (e.g., checklists for code review, explicit divergence prompts) can mitigate the tendency to settle on the first AI suggestion, leading to higher‑quality outcomes; a minimal scaffold sketch follows this list.
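
To make the scaffolding idea concrete, here is a minimal sketch of an explicit-divergence step plus a review-checklist gate. The checklist items, the diverge helper, and the reframing angles are all assumptions for illustration; the paper recommends the practice, not this particular code.

```python
# A hedged sketch of divergence prompts plus a review checklist; all names
# and checklist items are illustrative, not taken from the paper.

REVIEW_CHECKLIST = (
    "inputs are validated",
    "no secrets are hard-coded",
    "generated tests actually run",
)

def diverge(base_prompt: str, k: int = 3) -> list:
    """Force k reframings of one idea before any code is generated,
    countering the urge to accept the first AI suggestion."""
    angles = ("as a CLI tool", "as a web form", "as a spreadsheet plugin")
    return [f"{base_prompt}, {angle}" for angle in angles[:k]]

def passes_review(answers: dict) -> bool:
    """Gate acceptance on a human ticking every checklist item."""
    return all(answers.get(item, False) for item in REVIEW_CHECKLIST)

if __name__ == "__main__":
    for candidate in diverge("build a habit tracker"):
        print("candidate prompt:", candidate)
    print("review passed:", passes_review({item: True for item in REVIEW_CHECKLIST}))
```

The two helpers target the study's two observed pitfalls directly: diverge pushes back against premature idea convergence, and passes_review against uneven code quality.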

Limitations & Future Work

  • Sample size & context: The study involved a single 9‑hour event with 31 participants from one university, limiting generalizability across cultures, skill levels, or longer‑term projects.
  • Short‑term assessment: Learning gains were measured immediately after the hackathon; longitudinal studies are needed to see if skills persist.
  • Tool diversity: While teams used multiple AI services, the research did not systematically compare the impact of specific tools or model versions.
  • Future directions: The authors propose larger‑scale, multi‑session studies, integration of formal software‑engineering practices into vibe‑coding curricula, and experiments with scaffolding techniques that explicitly encourage divergent ideation and rigorous output validation.

Authors

  • Kiev Gama
  • Filipe Calegario
  • Victoria Jackson
  • Alexander Nolte
  • Luiz Augusto Morais
  • Vinicius Garcia

Paper Information

  • arXiv ID: 2512.02750v1
  • Categories: cs.SE, cs.HC
  • Published: December 2, 2025