How I built a browser game portal using AI and what I had to fix myself

Published: January 14, 2026 at 11:09 AM EST
6 min read
Source: Dev.to

Why I Chose a Browser‑Based Vibe‑Coding Tool and React

The vibe‑coding tool I started with lets you quickly create running web applications from an initial prompt:

I want to build a puzzle games website where people 
can play different brain teasers and logic puzzles. 
Users should be able to create accounts, track their scores, 
and compete on leaderboards. 
Use react and vite.

I deliberately chose a tech stack I was already somewhat familiar with so I could judge the quality of the generated code – and it was pretty impressive!

What impressed me during the first experiments

  • The first version was already pretty good, both visually and technically.
  • The tool understood my prompts and executed them nicely.
  • The interface looks like VS Code in the browser and lets me switch between source code and the running app.

First Successes

As mentioned, the game portal was up and running quickly inside the AI coding tool. I could tweak the overall appearance with vague, non‑technical prompts:

Use different colors. It should be bright and friendly.
Also, make sure it looks good on mobile screens.

The colors, game logos, and text the AI generated looked great, as the screenshot below shows.

The game portal landing page

Among the first games the generator added without me specifically asking were a memory game and Minesweeper. Both worked instantly and only needed minor adjustments (colors, button styles, responsive layout) – all done with non‑technical prompts:

Add buttons for restarting the game and starting the 
game of the day. While all games are randomly initialized, 
the game of the day should be the same game for all users.

The AI invented a “seed” concept and implemented a pseudo‑random seed based on the current date. It worked like a charm!
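
For illustration, here is a minimal sketch of how such a date‑based seed can work. This is my own reconstruction, not the generated code; the helper names (dateSeed, mulberry32) are assumptions:

// Hash today's date into an integer seed, then feed it into a small
// deterministic generator so every player gets the same "game of the day".
function dateSeed(date: Date = new Date()): number {
  const key = date.toISOString().slice(0, 10); // e.g. "2026-01-14"
  let hash = 0;
  for (const ch of key) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit string hash
  }
  return hash;
}

// mulberry32: a tiny, widely used 32-bit PRNG; returns values in [0, 1).
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) >>> 0;
    let t = seed;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Same date -> same seed -> same shuffled board for every user.
const dailyRandom = mulberry32(dateSeed());

Regular games can simply use a random seed instead of the date‑derived one.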

Adding variations (different field sizes, etc.) was just a matter of asking the AI to do it. I was truly impressed.

The memory game

These early wins were possible because the games were simple. Things changed once I tried to build more complex games.

Where AI Struggled: Complex Games & Visuals

The AI consistently produced React game components with roughly the following structure:

import React, { useState, useEffect } from "react";
// … types and constants

export function Game({ seed, gridSize: initialSize }: GameProps) {
  // state hooks
  const [isRunning, setIsRunning] = useState(false);
  // …

  // set up grid
  const initializeGame = () => {
    // …
  };

  // side effects
  useEffect(() => {
    // …
  }, []);

  // handlers
  const onNewGame = () => {
    // …
  };

  // JSX
  return (
    <div>
      {/* … */}
    </div>
  );
}

When a game became more complicated, this pattern introduced several problems:

  • Lengthy component – hard to read and understand.
  • Mixed responsibilities – game logic and presentation logic are tangled.
  • Untestable logic – difficult to write isolated unit tests for the core mechanics.
  • Low reusability – the same logic can’t be shared across different games or components.

These issues made the portal harder to maintain, and even small bugs became a pain to fix for both me and the AI.

I learned this the hard way while creating a pipes‑connection game (connect a grid of pipes so water can flow from start to finish).

The pipes game

You can try the Pipes game here: .

Issues with the initial version

  • The pseudo‑random initial solution didn’t generate correctly.
  • Win‑condition detection failed.
  • Connected pipes weren’t highlighted in a different color.
  • SVG outlines for the pipes were missing.

How I Resolved Them

  1. Ask the AI to refactor the component – split the logic into a separate file.
  2. Create a proper TypeScript class (or a set of pure functions) that encapsulates all game mechanics.
  3. Write unit tests for the new class to verify seed generation, win detection, and pipe‑coloring logic.
  4. Keep the React component thin – it now only handles rendering and user interaction, delegating all heavy lifting to the extracted logic module.

This refactor dramatically improved readability, testability, and maintainability, and it also made future extensions (new pipe shapes, difficulty levels, etc.) much easier to implement.
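
To make the separation concrete, here is a sketch of what the extracted module and the thin component can look like. This is my own illustration under assumed names (PipesGame, PipesGameView), not the portal's actual code:

// pipesGame.ts – pure game logic, no React imports, easy to unit test
export class PipesGame {
  private grid: number[][]; // rotation state (0–3) of each pipe tile

  constructor(private size: number, seed: number) {
    this.grid = PipesGame.generateSolvableGrid(size, seed);
  }

  rotateTile(row: number, col: number): void {
    this.grid[row][col] = (this.grid[row][col] + 1) % 4;
  }

  isSolved(): boolean {
    // e.g. flood-fill from the source tile and check that the sink is reached
    return false; // placeholder
  }

  getGrid(): readonly number[][] {
    return this.grid;
  }

  private static generateSolvableGrid(size: number, seed: number): number[][] {
    // deterministic, seed-based generation lives here, away from the UI
    return Array.from({ length: size }, () => Array(size).fill(0));
  }
}

The React component then only renders that state and forwards user input:

// PipesGameView.tsx – thin component: rendering and interaction only
import { useMemo, useState } from "react";
import { PipesGame } from "./pipesGame";

export function PipesGameView({ seed, size = 6 }: { seed: number; size?: number }) {
  const game = useMemo(() => new PipesGame(size, seed), [seed, size]);
  const [, forceRender] = useState(0);

  const handleClick = (row: number, col: number) => {
    game.rotateTile(row, col);
    forceRender((n) => n + 1); // re-render after mutating the game object
  };

  return (
    <div>
      {game.getGrid().map((rowTiles, row) =>
        rowTiles.map((rotation, col) => (
          <button key={`${row}-${col}`} onClick={() => handleClick(row, col)}>
            {rotation}
          </button>
        ))
      )}
      {game.isSolved() && <p>Solved!</p>}
    </div>
  );
}

With the logic isolated like this, a unit test can construct a PipesGame directly and assert on seed generation and win detection without rendering anything.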

Takeaways

  • Start simple – AI generators shine when the target is a small, self‑contained UI component.
  • Don’t let the AI own the whole architecture – extract core logic into reusable, testable modules.
  • Iterate with prompts – non‑technical, high‑level prompts work great for styling and minor tweaks, but for complex behavior you’ll need to guide the AI toward better separation of concerns.
  • Treat generated code as a draft – always review, refactor, and add tests before shipping.

In the next posts I’ll dive deeper into the refactoring process, share the final architecture of the portal, and discuss how I’m using AI to keep the codebase healthy as the project grows. Stay tuned!

Extracting Game Logic into a Class with State and Methods

  • Separate the solution generator into a separate file.
  • Create a reusable component for the game buttons and use it across all games (a minimal sketch follows below).
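
As a sketch of what such a shared component could look like (the name GameButtons and its props are my assumptions, not the actual code):

// GameButtons.tsx – hypothetical shared control bar used by every game
interface GameButtonsProps {
  onNewGame: () => void;
  onDailyGame: () => void;
  disabled?: boolean;
}

export function GameButtons({ onNewGame, onDailyGame, disabled = false }: GameButtonsProps) {
  return (
    <div className="game-buttons">
      <button onClick={onNewGame} disabled={disabled}>
        New game
      </button>
      <button onClick={onDailyGame} disabled={disabled}>
        Game of the day
      </button>
    </div>
  );
}

Each game then just wires its own handlers into these props instead of re‑implementing the buttons.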

Learning Process

  • Started reading and actually understanding the generated code.
  • Switched from the browser‑based code generator to a general‑purpose LLM chat.
  • Asked the chat about typical data structures and algorithmic approaches for the problems I needed to solve.

Implementation

  • Wrote my own implementation of the game logic and the solution generator (see the win‑check sketch after this list).
  • Realized there was no way to avoid actually understanding what I was doing.
  • Used the LLM chat to quickly learn the required data structures and algorithms—no need to read a paper or university slides.
  • Leveraged the LLM to create implementations the way I needed them and to discuss alternative approaches.
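
For the connectivity problem at the heart of the pipes game (can water flow from start to finish?), a breadth‑first search over the pipe openings is a typical approach. The sketch below is my own illustration; the Tile data model and function names are assumptions, not the actual implementation:

// Hypothetical win check: BFS from the source tile, following only
// openings that line up on both sides of a shared edge.
type Direction = "up" | "right" | "down" | "left";

interface Tile {
  openings: Direction[]; // directions this tile connects to after rotation
}

const OPPOSITE: Record<Direction, Direction> = {
  up: "down", right: "left", down: "up", left: "right",
};

const STEP: Record<Direction, [number, number]> = {
  up: [-1, 0], right: [0, 1], down: [1, 0], left: [0, -1],
};

export function isConnected(
  grid: Tile[][],
  source: [number, number],
  sink: [number, number]
): boolean {
  const rows = grid.length;
  const cols = grid[0].length;
  const visited = new Set<string>([source.join(",")]);
  const queue: [number, number][] = [source];

  while (queue.length > 0) {
    const [r, c] = queue.shift()!;
    if (r === sink[0] && c === sink[1]) return true;

    for (const dir of grid[r][c].openings) {
      const [dr, dc] = STEP[dir];
      const nr = r + dr;
      const nc = c + dc;
      if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
      // Water only flows if the neighbour opens back towards us.
      if (!grid[nr][nc].openings.includes(OPPOSITE[dir])) continue;
      const key = `${nr},${nc}`;
      if (!visited.has(key)) {
        visited.add(key);
        queue.push([nr, nc]);
      }
    }
  }
  return false;
}

A generator can reuse the same neighbour logic: lay out a connected solution first, then scramble the tile rotations.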

SVG Rendering

For the SVGs used to render the pipes, I saw no alternative to working closely and iteratively with an LLM chat until the implementation looked the way I wanted.
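
For a rough idea of what rendering a pipe tile as SVG involves, here is a minimal sketch of a straight segment with an outline, rotated in 90° steps; the geometry, colors, and prop names are purely illustrative, not the code I ended up with:

// Hypothetical straight pipe tile on a 100x100 viewBox, rotated in 90° steps.
export function PipeTile({ rotation }: { rotation: 0 | 90 | 180 | 270 }) {
  return (
    <svg viewBox="0 0 100 100" width="48" height="48">
      <g transform={`rotate(${rotation} 50 50)`}>
        {/* pipe body: a thick vertical stroke from top edge to bottom edge */}
        <line x1="50" y1="0" x2="50" y2="100" stroke="#4a90d9" strokeWidth="28" />
        {/* outlines so the pipe stands out against the board background */}
        <line x1="36" y1="0" x2="36" y2="100" stroke="#2c5d8f" strokeWidth="4" />
        <line x1="64" y1="0" x2="64" y2="100" stroke="#2c5d8f" strokeWidth="4" />
      </g>
    </svg>
  );
}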

Key Takeaways and Conclusions

  • Using AI‑assisted coding tools was fun and produced first results quickly. It created a prototype ready for testing with actual users.
  • AI code generators still have limitations.
  • It was critical for me to step in where the AI failed; together we refactored and simplified the code.
  • With another AI tool, I found an implementation that was correct, reliable, and maintainable.

Even in the age of AI code generators, good engineering practices—clean code, test automation, and thoughtful architecture—still matter, perhaps even more than ever.

AI can help me (and probably you) code faster, but only with my engineering judgment can I build maintainable, stable, and secure software.

What Should I Write About Next?

For the next post in this series, I’m considering diving deeper into one of these areas:

  • Security considerations for browser‑based games
    • Cheating, attacks, runaway cloud costs
  • Low‑cost end‑to‑end architecture
    • From AI‑generated React code to a deployable, maintainable production setup
  • Privacy‑compliant user analytics
    • How I measure player behavior without sharing user data with third parties
  • Acquisition
    • How people actually find and start playing the games with a limited budget
Let me know which one you’d find most useful!
