What Claude Code Actually Chooses
Source: Hacker News
We pointed Claude Code at 2,430 real repositories (no tool names in any prompt, open‑ended questions only).
- Models: Sonnet 4.5, Opus 4.5, Opus 4.6
- Project types: 4
- Tool categories: 20
- Extraction rate: 85.3 % (2,073 parseable picks)
- Model agreement: 90 %
- Within‑ecosystem picks: 18 of 20 categories
Key finding: Claude Code builds custom solutions far more often than it recommends off‑the‑shelf tools. “Custom/DIY” appears in 12 of the 20 categories and accounts for 252 total picks—more than any single tool.
Examples
- “Add feature flags” → builds a config system with environment variables and percentage‑based rollout (instead of LaunchDarkly).
- “Add auth” in Python → writes JWT + bcrypt from scratch.
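The feature-flag pattern described above — environment variables plus percentage-based rollout — is simple enough to see why a model might prefer it to a hosted service. A minimal sketch (the `FLAG_<NAME>` env-var convention and function names are illustrative, not taken from the benchmark output):

```python
import hashlib
import os

def flag_enabled(flag: str, user_id: str) -> bool:
    """Percentage-based rollout: FLAG_<NAME> holds a 0-100 rollout percentage.

    A stable hash of (flag, user_id) buckets each user into 0-99, so a given
    user keeps getting the same answer as the percentage ramps up.
    """
    pct = int(os.environ.get(f"FLAG_{flag.upper()}", "0"))
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < pct
```

Setting `FLAG_NEW_CHECKOUT=25` would then enable the flag for a stable 25 % of users, with no external dependency.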
When a tool is chosen, the choice is decisive: GitHub Actions (94 %), Stripe (91 %), shadcn/ui (90 %).
Update: Sonnet 4.6 was released on Feb 17 2026; the benchmark will be rerun against it.
Headline Findings
Build vs Buy
- In 12 of 20 categories Claude Code prefers custom implementations.
- Total Custom/DIY picks: 252 (feature flags via config files + env vars, Python auth via JWT + passlib, caching via in‑memory TTL wrappers).
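An "in-memory TTL wrapper" of the kind counted above fits in a dozen lines, which helps explain the DIY preference. A sketch of what such a decorator typically looks like (names and the 60-second default are assumptions, not from the dataset):

```python
import time
from functools import wraps

def ttl_cache(seconds: float = 60.0):
    """Memoize a function's results in-process, expiring entries after `seconds`."""
    def decorator(fn):
        store = {}  # maps args tuple -> (expiry timestamp, cached value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:   # fresh entry: skip the real call
                return hit[1]
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator
```

Unlike Redis, this shares nothing across processes — a trade-off a custom pick implicitly accepts.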
| Category | Custom/DIY share |
|---|---|
| Feature Flags | 69 % |
| Authentication (Python) | 100 % |
| Authentication (overall) | 48 % |
| Observability | 22 % |
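The 100 % custom figure for Python authentication means hand-rolled token signing and password hashing rather than an auth library. A stdlib-only sketch of what that pattern generally looks like (HMAC-signed JWT-style tokens; `pbkdf2_hmac` standing in for bcrypt/passlib; all names and the secret are illustrative):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"change-me"  # hypothetical app secret; load from config in practice

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(claims: dict, ttl: int = 3600) -> str:
    """Build a JWT-shaped header.payload.signature string with an expiry claim."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({**claims, "exp": int(time.time()) + ttl}).encode())
    sig = _b64(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str):
    """Return the claims if signature and expiry check out, else None."""
    header, payload, sig = token.split(".")
    expected = _b64(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    padded = payload + "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    return claims if claims["exp"] > time.time() else None

def hash_password(password: str, salt: bytes) -> bytes:
    # stdlib stand-in for bcrypt: slow, salted key derivation
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
```

Compact, but it also shows what DIY auth forgoes: key rotation, algorithm agility, and the hardening a maintained library provides.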
The Default Stack
When Claude Code does recommend a tool, the defaults are heavily JavaScript‑centric.
| Rank | Tool | Pick rate |
|---|---|---|
| 1 | GitHub Actions | 93.8 % (152/162) |
| 2 | Stripe | 91.4 % (64/70) |
| 3 | shadcn/ui | 90.1 % (64/71) |
| 4 | (JS ecosystem) | 100 % (86/86) |
| 5 | (unspecified) | 68.4 % (52/76) |
| 6 | Zustand – State Management | 64.8 % (57/88) |
| 7 | Sentry – Observability | 63.1 % (101/160) |
| 8 | — | 62.7 % (64/102) |
| 9 | — | 59.1 % (101/171) |
| 10 | — | 58.4 % (73/125) |
Against the Grain
| Area | Primary picks | Notable mentions |
|---|---|---|
| State Management | 0 primary | 23 mentions; Zustand chosen 57 times |
| API Layer | None | Framework‑native routing preferred |
| Testing | 4 % primary | 31 alternative picks; known tools not chosen |
| Package Manager | 1 primary | 51 alternative picks; still well‑known |
The Recency Gradient
Newer models gravitate toward newer tools. Percentages are shown within each ecosystem.
- JS ORM – Sonnet 4.5: 79 % Prisma → Opus 4.6: 0 % Prisma (replaced by Drizzle at 100 %).
- Python Jobs – Sonnet 4.5: 100 % Celery → Opus 4.6: 0 % Celery (replaced by FastAPI BackgroundTasks at 44 %; rest Custom/DIY).
- Python Caching – Sonnet 4.5: 93 % Redis → Opus 4.6: 29 % (Custom/DIY rises to 50 %).
The Deployment Split
JS Frontend (Next.js + React SPA)
- 86 / 86 deployment picks → Vercel (primary, zero‑config).
Python Backend (FastAPI)
- Expected: the big cloud providers (AWS, GCP, Azure). Actual: Railway, chosen 82 % of the time.
Frequently recommended as alternatives
- Netlify (67 alt)
- Cloudflare Pages (30 alt)
- GitHub Pages (26 alt)
- DigitalOcean (7 alt)
Mentioned but never recommended (0 alt picks)
- AWS Amplify (24 mentions)
- Firebase Hosting (7 mentions)
- AWS App Runner (5 mentions)
Truly invisible (rarely mentioned)
- AWS (EC2/ECS)
- Google Cloud
- Azure
- Heroku
Example query (Next.js SaaS, Opus 4.5): “Where should I deploy this?”
- Vercel – recommended with install commands and reasoning.
- Netlify – offered as a comparable alternative.
- AWS Amplify – noted for existing AWS ecosystems.
Where Models Disagree
All three models agree in 18 of 20 categories within each ecosystem. The remaining categories show genuine shifts.
| Category | Sonnet 4.5 | Opus 4.5 | Opus 4.6 |
|---|---|---|---|
| ORM (JS) | Prisma 79 % | Drizzle 60 % | Drizzle 100 % |
| Jobs (JS) | BullMQ 50 % | BullMQ 56 % | Inngest 50 % |
| Jobs (Python) | Celery 100 % | FastAPI BgTasks 38 % | FastAPI BgTasks 44 % |
| Caching (Cross‑language) | Redis 71 % | Redis 31 % | Custom/DIY 32 % |
| Real‑time (Cross‑language) | SSE 23 % | Custom/DIY 19 % | Custom/DIY 20 % |
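The drift from SSE toward Custom/DIY for real-time is less surprising once you see how little the SSE wire format demands — a hand-rolled implementation is mostly string formatting. A sketch of an event serializer following the EventSource format (function and parameter names are illustrative):

```python
def sse_event(data: str, event: str = None, event_id: str = None) -> str:
    """Serialize one Server-Sent Events message: optional `event:`/`id:`
    fields, one `data:` line per line of payload, terminated by a blank line."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    if event_id:
        lines.append(f"id: {event_id}")
    lines.extend(f"data: {line}" for line in data.splitlines() or [""])
    return "\n".join(lines) + "\n\n"
```

Streaming these strings over a kept-open HTTP response is all an SSE endpoint is, which blurs the line between "picked SSE" and "wrote it custom".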
Dig into the Data
The full dataset includes category deep‑dives, phrasing‑stability analysis, cross‑repo consistency metrics, and market‑implication commentary.