Weekly #51-2025: AI Coding Agents and Engineering Culture, 0.1x Engineers
Source: Dev.to
How Good Engineers Write Bad Code at Big Companies
Why does sloppy code emerge from teams packed with strong engineers? It’s structural: short tenures, frequent reorgs, and internal mobility mean most changes are made by people who are “beginners” in that codebase or language. A few “old hands” carry deep knowledge, but their review bandwidth is limited and informal. The typical engineer is competent but racing against deadlines in unfamiliar systems, so hacky fixes get merged, receive light review, and ossify.
Compound Engineering: When Agents Write All Your Code
What happens when 100% of your code is written by AI agents? The workflow follows a four‑step loop (Plan, Work, Assess, Compound) that turns agent output into a learning system. The twist isn’t automation alone; it’s compounding. Each bug, test failure, and design insight gets documented and reused by future agents, so every feature makes the next one easier to build.
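A minimal sketch of the "Compound" step under loose assumptions: a plain-text `lessons.md` store and a caller that runs the agent are hypothetical here, since the post describes the loop rather than its tooling.

```python
# Sketch of compounding lessons across agent runs (hypothetical storage layout).
from pathlib import Path

LESSONS = Path("lessons.md")  # assumed plain-text store of accumulated lessons

def build_prompt(task: str) -> str:
    """Plan/Work: prepend every previously recorded lesson to the new task."""
    prior = LESSONS.read_text() if LESSONS.exists() else ""
    return f"Known pitfalls and conventions:\n{prior}\n\nTask:\n{task}"

def record_lesson(lesson: str) -> None:
    """Assess/Compound: append anything learned (bug, failed test, design insight)
    so the next agent run starts with it."""
    with LESSONS.open("a") as f:
        f.write(f"- {lesson}\n")
```

The design point is simply that the loop closes: whatever `record_lesson` captures today is injected by `build_prompt` tomorrow, which is what makes each feature cheaper than the last.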
Why Write Engineering Blogs: Career Signal, Community, and Clear Thinking
Why do many engineers keep blogging after the initial hype fades? Some started to build visibility or share a product; others simply wanted to teach, document hard‑won lessons, or make sense of complex systems. A recurring theme is permanence and impact: structured writing creates a public artifact that outlives ephemeral chat, helps others solve real problems, and quietly advocates for its author.
Working with Q: A Defensive Protocol for Coding Agents
How should an AI coding agent think when mistakes compound and can brick a project? A GitHub Gist lays out a clear, testable protocol for “defensive epistemology” in software work:
- Make explicit predictions before every action.
- Compare outcomes after execution.
- Stop to update the model whenever reality surprises you.
The core rule is blunt: reality doesn’t care about your mental model; all failures live in that gap. By writing DOING/EXPECT/IF YES/IF NO before tool calls and RESULT/MATCHES/THEREFORE after, agents and humans expose reasoning, catch wrong assumptions early, and prevent cascading errors.
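A minimal sketch of that before/after discipline, assuming hypothetical `run_tool` and `matches` callables supplied by the caller; the Gist states the protocol as prose, not code, so the structure below is illustrative only.

```python
# Illustrative wrapper that forces a prediction before a tool call and a
# comparison after it, mirroring the DOING/EXPECT/IF YES/IF NO and
# RESULT/MATCHES/THEREFORE fields described above.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Prediction:
    doing: str    # DOING: the action about to be taken
    expect: str   # EXPECT: what should be observed if the mental model is right
    if_yes: str   # IF YES: the next step when the expectation holds
    if_no: str    # IF NO: what to revisit when it does not

def checked_call(pred: Prediction,
                 run_tool: Callable[[], Any],
                 matches: Callable[[Any], bool]) -> Any:
    """Log the prediction, execute the tool, then compare outcome to expectation."""
    print(f"DOING: {pred.doing}")
    print(f"EXPECT: {pred.expect}")
    result = run_tool()
    ok = matches(result)
    print(f"RESULT: {result}")
    print(f"MATCHES: {'yes' if ok else 'no'}")
    # THEREFORE: either proceed, or stop and update the model before acting again.
    print(f"THEREFORE: {pred.if_yes if ok else pred.if_no}")
    if not ok:
        raise RuntimeError("Reality diverged from the model; stop and re-plan.")
    return result
```

The stop-on-surprise behavior is the point: a mismatch halts the loop rather than letting a wrong assumption cascade into the next action.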
The Rise of the 0.1x Engineer: Curators in the Age of Coding Assistants
Are “10x engineers” still the goal when AI can spray code into any codebase? A recent post argues the real leverage now comes from “0.1x engineers” — the people who resist prompting first and instead set patterns, prune cruft, and keep systems coherent. As coding assistants make it trivial to add code, they also make it trivial to add bloat, spaghetti structure, and LLM leftovers.