Why 'just prompt better' doesn't work
Source: Hacker News
Earlier this week we published “Coding assistants are solving the wrong problem”, which made it to the Hacker News front page and drew responses from developers across industries and roles.
We learned a lot from the 40+ survey responses that poured in, as well as the heated debate on how coding assistants impact software development. The latter deserves its own article—a separate post curating best practices in coding‑agent setup is in the works.
For this follow‑up article, however, we will focus on the main finding that connected the dots for us: why AI adoption, on average, increases time spent on review, rework, and re‑alignment by roughly as much as it saves in development time (Atlassian, 2025). We will use firsthand accounts from commentators to illustrate friction points along the entire product lifecycle.
Finding 1: Communication friction is the distinguishing pain point
We asked developers:
- When do you discover that the code doesn’t work the way people think it does?
- Who needs to know when you find a technical constraint?
Left: A third of technical constraints were already in a product conversation. Right: 70 % of those constraints need to reach people who don’t regularly interact with the codebase.
Taken together, these charts reveal the deeply cross‑functional nature of the problem we are dealing with.
Observation 1 – Constraints discovered during planning
One‑third of constraints are discovered during planning sessions (sprint planning, product‑engineering syncs, 1:1s with PMs). That sounds good—issues can be addressed early, right? Not so fast.
The issue comes down to context. Design and engineering must make many small decisions based on newly discovered product constraints, but communicating those decisions to stakeholders is hard, time‑consuming, and often ineffective.
Why surfacing constraints is challenging
Commentators point to two general categories of challenges:
- Articulating complex technical dependencies on the fly
- Translating those dependencies into business impact that will move the needle on product‑scoping decisions
“A senior engineer might recall that a ‘simple filter’ touches three microservices. A mid‑level one won’t — not because they lack skill, but because the cognitive load is unreasonable.”
“I can push back, sometimes it works, sometimes they have political reasons to disregard technical problems, which means it will always be my problem.”
When 70 % of technical constraints need to reach people who don’t regularly interact with the codebase, known issues often go unresolved until the pain becomes evident further down the line.
Observation 2 – Constraints discovered during implementation
What about the 50 % of constraints that were discovered only during implementation? Why are they so numerous, and how are they addressed?
- Product meetings often have a “hand‑wavy” approach to details. The small, out‑of‑place assumptions are what slow down a project the most.
- You only discover these issues once you start coding by hand. As you work through variables and function calls, you suddenly remember that a process elsewhere has changed.
- Since product meetings cover feature specifications in broad strokes, it is only after some technical scaffolding is in place that the exact clash between engineering reality and product requirements becomes apparent. The devil, in other words, is in the detail.
Respondents’ diagnosis of why downstream realignment is expensive
- “Product [is] not always available to answer questions during implementation, as we always discover issues.”
- “[Difficulty] confirming that ideas and goals are communicated successfully and everyone involved understands them—especially at the stage where we confirm that everything is completed (QA, UAT, etc.).”
These frustrations are further aggravated by fragmented documentation of the decisions made when constraints are discovered:
| Communication method | Share of respondents |
|---|---|
| Share constraints via copy‑paste to Slack | 52 % |
| Mention verbally — no written record | 25 % |
| No persistent artifact | 35 % |

(Multiple answers were allowed, so percentages do not sum to 100.)
In summary, the crux of developer frustration lies in communication friction. Developers intuit problems that might arise but either struggle to get that across to decision‑makers or lack the evidence to make their case stick. Either way, they know the cost will surface downstream as repeated conversations and rework.
The message is loud and clear: what is needed is not another code‑analysis or documentation tool, but ammunition to drive cross‑functional alignment upstream.
Finding 2: The problem is not that AI can’t write good code — it’s that it can’t refuse to write bad code
The impetus behind our initiative was a dawning realization that AI adoption will worsen the problems laid out above. By combing through user comments we clarified the exact mechanics by which coding assistants amplify the cost of misalignment.
We start with an insightful comment that highlights the limitations of relying on AI for software development:
“Us is when we build new software to improve the business. The tasks are never really well defined. Sometimes developers come up with a better way to do the business process than what was planned for. AI can write the code, but it doesn’t refuse to write the code without first being told why it wouldn’t be a better idea to do X first.” — Quothling
Coding agents are designed to be accommodating. They don’t push back against prompts because they lack both the authority and the broader context to do so. At most, they will ask for clarification on what was specified; they won’t say, “Wait, have you considered doing X instead?” A human developer would raise such a flag. An LLM produces plausible output and moves on.
This trait can be useful for a virtual assistant, but it makes for a poor engineering teammate. The willingness to engage in productive conflict is a core part of good engineering—it broadens the search in the design space of ideas.
“Just Prompt Better!” – Why It Falls Short
Several commentators have responded with “just prompt better!”
- An LLM will do exactly what you ask it to do. If you tell it to ask questions, poke holes in your requirements, and not jump straight to code, it will comply.
- However, our survey shows that 50 % of developers discover constraints during implementation. The “just prompt better” stance assumes the prompter already knows the exact ways product and technical constraints might conflict—yet we have repeatedly found that many constraints surface only iteratively through cross‑functional dialogue.
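To make the “prompt better” advice concrete: it amounts to prepending critic‑style instructions to every request so the model questions the task before generating code. Here is a minimal sketch of that idea; the prompt wording and the `build_review_prompt` helper are hypothetical illustrations, not taken from any commenter or tool.

```python
# Hypothetical sketch of a "critic-first" prompt: instruct the model to
# surface assumptions and alternatives before it writes any code.
# The wording below is illustrative, not from the survey or a real product.

CRITIC_SYSTEM_PROMPT = """\
Before writing any code:
1. List the assumptions in the request that could be wrong.
2. Ask clarifying questions about external systems the change might touch.
3. Propose at least one alternative approach and its trade-offs.
Only produce code after these points are addressed."""


def build_review_prompt(task_description: str) -> str:
    """Combine the critic instructions with a concrete task description."""
    return f"{CRITIC_SYSTEM_PROMPT}\n\nTask: {task_description}"


prompt = build_review_prompt("Add a 'simple filter' to the orders page.")
```

Even with such a prompt, the model can only interrogate what is described to it. Constraints that live in another microservice, a PM’s mental model, or a Slack thread remain invisible, which is exactly the gap the survey responses point to.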
“Humans can imagine scenarios where a process might break. Claude (and other LLMs) can do the same only when the breakage originates inside the described process and you explicitly specify it. They cannot infer future issues arising from an external process unless you describe that external process in detail.” — adithyassekhar
Lessons from Real‑World Experiments
The Cursor team recently experimented with long‑running autonomous coding agents. Their initial attempts failed because agents interpreted instructions literally and “went down obscure paths” — Cursor blog. Human intervention during the planning phase was required to inject the holistic understanding necessary for agents to behave as expected.
The problem is even more pronounced in enterprise environments, where:
- Business requirements are ill‑specified yet precision is critical.
- The full specification is not contained in a single document; it is scattered across the codebase, the product manager’s mental model, the marketer’s promises, and multiple Slack threads.
Automating code generation merely shifts the exchange of context downstream—away from the decision‑making locus—so the crucial discovery phase is bypassed.
The Core Tension
In short, AI speeds up implementation but bypasses the process through which constraints are discovered, limiting the product context that the model needs to produce good results. It’s a classic chicken‑or‑egg problem.
Squeezing the Juice Out of the Product Meeting
Our user research highlights a central tension:
- Technical constraints often require cross‑functional alignment, yet communicating them during stakeholder meetings is hard because of context gaps and cognitive load.
- Code generation cannibalizes the implementation phase, where additional constraints were previously uncovered, shifting the discovery burden to code review—where it’s even harder and more expensive to resolve.
A Way Forward
The context problem must be tackled at its inception—during product meetings, where cross‑functional participants can surface ideas without incurring rework costs. If AI handles the implementation, the planning phase must absorb the discovery work that manual implementation used to provide.
Achieving this will not be easy. We face a uniquely interdisciplinary challenge that touches both:
- Human aspects – creating artifacts that non‑technical team members can easily digest to drive alignment.
- Technical aspects – performing counterfactual analysis to surface potential gaps.
Nevertheless, we are optimistic. As code‑generation models improve, we can bootstrap them for the inverse problem: surfacing constraints. Our role is to build tooling that keeps human developers—who excel at creative problem‑solving—in control.
If this mission resonates with you, please drop by and share suggestions or feedback at our Google Group. We would especially appreciate help in architecting our agent harness!