Rebuilding an Old Website Using Kiro (and What Went Wrong)
Source: Dev.to
I rebuilt an old website I hadn’t touched in 7 years using Kiro and technologies I had never tried. I went from implementing auth in under an hour, to spending six days figuring out a simple CRUD with file upload, to a very productive four-day stretch in which I finished the remaining CMS and site features from scratch without a framework, to finally spending another three days painfully cleaning up AI‑generated garbage just to make sure everything actually ran in production.
You’ve probably seen plenty of similar posts about AI coding assistants already, but here’s my experience anyway.
Learn the concept first when you know nothing about what you’re building
If you’re trying something new, take time to actually learn the concept. In my case, I spent days stuck on a simple CRUD because I was totally unfamiliar with how Cloudflare R2, Workers, and Cloudflare Image Resizing work, so I couldn’t direct the AI to do what I wanted properly. After reading the docs and understanding the concepts, I could finally make sense of the mess I had produced, clean it up, and get it working.
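Once the concepts clicked, the moving parts turned out to be simple: a Worker receives the upload, writes the bytes to an R2 bucket binding, and serves them back on read. Below is a minimal sketch of that kind of handler. The `BUCKET` binding name, the `/files/` route, and the small `Bucket` interface are illustrative assumptions standing in for Cloudflare’s real `R2Bucket` type, not the actual site’s code.

```typescript
// Sketch of an R2-backed upload/download endpoint in a Cloudflare Worker.
// The Bucket interface below is a structural stand-in for the R2Bucket binding
// (in a real project it comes from @cloudflare/workers-types).

interface StoredObject {
  body: ArrayBuffer;
  httpMetadata?: { contentType?: string };
}

interface Bucket {
  put(
    key: string,
    value: ArrayBuffer,
    opts?: { httpMetadata?: { contentType?: string } },
  ): Promise<unknown>;
  get(key: string): Promise<StoredObject | null>;
}

interface Env {
  BUCKET: Bucket; // illustrative binding name
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    if (!url.pathname.startsWith("/files/")) {
      return new Response("not found", { status: 404 });
    }
    const key = url.pathname.slice("/files/".length);
    if (!key) return new Response("missing key", { status: 400 });

    if (request.method === "PUT") {
      // Store the raw bytes and remember the content type for later reads.
      await env.BUCKET.put(key, await request.arrayBuffer(), {
        httpMetadata: {
          contentType:
            request.headers.get("content-type") ?? "application/octet-stream",
        },
      });
      return new Response("stored " + key, { status: 201 });
    }

    if (request.method === "GET") {
      const obj = await env.BUCKET.get(key);
      if (!obj) return new Response("not found", { status: 404 });
      return new Response(obj.body, {
        headers: {
          "content-type":
            obj.httpMetadata?.contentType ?? "application/octet-stream",
        },
      });
    }

    return new Response("method not allowed", { status: 405 });
  },
};

// In a deployed Worker this object would be the module's default export.
```

Resizing then happens outside the Worker entirely, via Cloudflare Image Resizing on the delivery path, which was exactly the part that wasn’t obvious before reading the docs.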
Spec‑driven development is powerful, but…
I found many cases where I was already very clear on what I wanted to do, and “vibe coding” without writing any spec was much more productive. For dev teams, this means requirement specs written for AI can’t replace specs written by a PM or QA, at least not for the moment.
Keep specs small
When using spec‑driven development (Kiro style), I can only stay productive with at most four requirements. Two to three requirements per spec works much better. More than that, I lose the cognitive capacity needed to read the requirements, design the solution, and understand the generated code. Split your spec. Period.
Steering files are very important
The best hack I found is simply taking the time to keep steering files updated. Whenever I finished implementing something, updating the steering files helped the AI understand things that had previously been ambiguous or that I had changed my mind about. Before this, I often had to correct the output; with updated steering files, the result was much closer to what I wanted from the start. I didn’t measure it formally, but the time saved was very noticeable.
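Concretely, the updates were short markdown notes. Kiro picks up steering files from the project’s `.kiro/steering/` directory; the file name, the `uploadFile` helper, and the contents below are illustrative, not my actual steering files.

```markdown
<!-- .kiro/steering/uploads.md (illustrative example) -->
# File uploads

- Images live in Cloudflare R2; resizing is done by Cloudflare Image
  Resizing on delivery, never inside the Worker.
- Reuse the existing `uploadFile` helper instead of writing new R2 calls.
- Do not add new security headers or CSS resets without asking first.
```

A few lines like these, refreshed after each finished feature, were enough to stop the AI from re-deciding questions I had already settled.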
AI does silently break things
Even with steering files and Kiro searching and reading the codebase before implementing tasks, it still messed things up. It could reinvent implementations instead of reusing existing code, leading to hidden instability; add conflicting CSS rules that go unnoticed until you actually build and deploy; or inject overly strict security configs that make the app unusable. I also found Kiro has a tendency to over‑engineer things.
To understand what actually happened, I committed to git frequently so I could compare the overall file diff. This was much easier than trying to follow small, scattered diffs shown during task execution.
Checkpointing is underrated
Kiro has a feature I really like: checkpointing. If I was in the middle of implementing something and changed my mind, I could easily restore both the chat and the code to a previous point. It’s basically an undo button for coding. Being able to undo and retry with a different approach is extremely valuable, and honestly, it reminded me why I could take some random club from Serie C to European champions in Football Manager back when I still had a life.
ChatGPT is the steroid booster for Kiro
Maybe it’s because I’m using auto model selection and sometimes get a weaker model, but I found ChatGPT’s reasoning when engineering a solution is often better. Kiro tends to accept and follow whatever you say, quickly jumping into implementation, which makes debugging feel like trial and error. With ChatGPT, the chat‑based interaction gives you time to think things through properly. In the end, the best setup for me was me and ChatGPT co‑bossing Kiro to actually get work done.
Humans beat AI in short‑range sprinting
Did I say vibe coding can be more productive? Sometimes the most productive thing is implementing or fixing the code myself, especially when I’m already very clear on what needs to be written. I found that humans, or at least this human, retain context better than AI on a small project, so I don’t need to keep scanning existing code just to get going. The AI’s only remaining edge is that I simply can’t type as fast as it does.
Before and after
Old site
(screenshot or description of the original site)
Rebuild
(screenshot or description of the rebuilt site)
More importantly, I now have a solid foundation to assist others and scale AI coding‑assistant usage into something more complex.