Heaven and Hell: What 'Vibe Coding' Taught Me
Source: Dev.to
Project 1: The Slack App That Humbled Me (Hell → Heaven)
Week 1 – The “Vibe Coding” Disaster
I decided to build a Slack application. The infrastructure was done in two hours. The application code? That’s where hell began.
My approach was simple: describe what I wanted to the AI, copy‑paste the generated code, deploy, and ship. It didn’t work. Error after error. I kept feeding the errors back to the model, got new code, redeployed, and repeated the cycle. After a week of this back‑and‑forth I hadn’t moved forward a single step. I was stuck in what I now call “vibe coding hell”: blindly following AI output without understanding the fundamentals.
Week 2 – The Breakthrough
I stopped, took a breath, and actually read Slack’s official SDK documentation. I learned:
- What features Slack offers
- How the SDK modules work
- The proper workflow for Slack apps
Armed with that knowledge, I returned to the AI, but this time I gave it clear architectural instructions based on my new understanding. The app was finished in three days (including learning time and one complete rewrite after I misinterpreted some terminology). After that, any new feature took minutes to implement.
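To make “understanding the fundamentals” concrete: once you know that Slack apps ultimately talk to Web API methods like `chat.postMessage` with a bearer token, the SDK (and the AI’s generated code) stops being magic. Here’s a minimal stdlib-only sketch that builds, but doesn’t send, such a request; the token and channel are placeholders, and a real app would use Slack’s official SDK as the article describes:

```python
import json
import urllib.request

def build_post_message_request(token: str, channel: str, text: str) -> urllib.request.Request:
    """Build (but don't send) a chat.postMessage request for Slack's Web API."""
    payload = json.dumps({"channel": channel, "text": text}).encode("utf-8")
    return urllib.request.Request(
        "https://slack.com/api/chat.postMessage",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json; charset=utf-8",
        },
        method="POST",
    )

# Placeholder credentials -- in a real app these come from environment/config.
req = build_post_message_request("xoxb-not-a-real-token", "#general", "Hello from my bot")
print(req.full_url)
```

Knowing this underlying shape is what made the AI’s SDK-based code reviewable instead of opaque.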
The Lesson
You can’t outsource understanding to AI. Software design and architectural decisions still come from humans. AI is a powerful assistant, but you need domain knowledge to guide it effectively. The key insight: AI amplifies your capabilities when you provide the right direction.
Project 2: Building a Production‑Ready LLM Platform in 3 Days (Pure Heaven)
I had an idea: build a complete inference platform to host LLM models and fine‑tuned variants.
- Knowledge level: I learned the term “inference” the night before I started.
- Timeline: Three days to production‑ready.
What I Built
- Infrastructure (≈ 1 hour via Terraform): Complete cloud stack
- Frontend Web UI: Full‑featured interface
- Two Backend Inference Services: Hosting different LLM models
- Automated Training Pipeline: End‑to‑end data processing
- Performance Optimization: Reduced latency from 28–30 s to 3–4 s per query (pure software tuning, no hardware upgrades)
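The article doesn’t say which specific tunings produced the 28–30 s → 3–4 s drop, so as one purely illustrative example of a software-only lever: caching repeated queries in front of the model eliminates entire inference calls. A minimal sketch, where `slow_inference` is a hypothetical stand-in for the real model call and the latency numbers are simulated:

```python
import time
from functools import lru_cache

def slow_inference(prompt: str) -> str:
    """Stand-in for an expensive LLM call."""
    time.sleep(0.05)  # simulate model latency
    return f"answer to: {prompt}"

@lru_cache(maxsize=1024)
def cached_inference(prompt: str) -> str:
    # Identical prompts hit the cache and skip the model entirely.
    return slow_inference(prompt)

first = cached_inference("What is SRE?")   # pays full model latency
second = cached_inference("What is SRE?")  # served from the in-memory cache
assert first == second
```

Real inference platforms layer several such levers (batching, streaming, quantization); the point is that none of them require new hardware.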
The Breakthrough
With my SRE background (system architecture, performance optimization, infrastructure patterns), I could guide the AI effectively. I understood the options the model presented and could make informed decisions about:
- Architecture patterns
- Performance trade‑offs
- Infrastructure design
- System integration
AI drove ~ 80 % of the implementation, but I drove 100 % of the architectural decisions. This is the power of combining domain expertise with AI assistance—you become a force multiplier.
What I’m Starting to Realize
The Role Feels Different Now
I’m no longer writing as much code manually. Instead, I spend more time:
- Thinking about architecture and design
- Giving AI clear direction on what I want
- Validating and refining what it generates
- Making trade‑off decisions
It reminds me of the shift from manually configuring servers to writing Infrastructure as Code. The skill set changed, but the value didn’t diminish.
My Domain Knowledge Became More Important
The Slack app failed because I tried to skip learning the fundamentals.
The LLM platform succeeded because my SRE background gave me the mental models to guide AI effectively.
AI doesn’t replace what you know—it multiplies it.
I’m Becoming a “Conductor” More Than a “Coder”
I’m spending less time on implementation details and more on:
- Designing the overall system
- Choosing the right approaches
- Ensuring quality and performance
- Making sure pieces fit together
Like an orchestral conductor, I don’t play every instrument, but I ensure everything works together harmoniously.
The Work Is Shifting, Not Disappearing
Engineering work is moving toward:
- Higher‑level problem solving
- Architecture and design decisions
- System integration and orchestration
- Performance, security, and quality validation
The grunt work of writing boilerplate code? AI handles a lot of that now.
My Personal Takeaway
I’m an SRE who can now ship full‑stack applications in days—not because I became a better programmer, but because I learned to combine my domain expertise with AI capabilities.
I went from one week stuck in vibe‑coding hell to shipping a production LLM platform in three days. What changed wasn’t the AI; it was how I directed it. AI doesn’t replace expertise; it amplifies it. My SRE background in system architecture and performance optimization became more valuable, not less, when paired with AI’s implementation power.
What excites me: I can now build things that were completely outside my skill set just months ago.
What I learned: Domain knowledge + AI assistance = force multiplier. Skip the fundamentals, and you’re just spinning your wheels.
What’s Next?
I’m planning to build a full‑stack application in Rust—a language I’ve never learned. This will test whether the principles I’ve discovered apply across domains. Stay tuned.
Follow me for more cloud architecture insights, SRE war stories, and practical lessons on thriving in the AI era.
Previous article: AWS SRE’s First Day with GCP: 7 Surprising Differences