I Let AI Write Most of My Code for a Month. Here’s What Happened.

Published: January 17, 2026 at 01:45 PM EST
9 min read
Source: Dev.to

It Started Innocently

I was staring at a blank editor, trying to build a simple CLI tool—something that should take a few days max. But you know how it goes: stack‑overflow rabbit holes, documentation that reads like legalese, the constant context‑switching between “what am I building?” and “how do I even do this?”

So I did what many of us do now. I fired up an AI assistant.

Prompt: “Write a CLI tool in Golang that accepts a port number and kills any process running on that port.”

A couple of minutes later I had working code. It wasn’t just working—it was polished: error handling, helpful messages, cross‑platform compatibility. The stuff that would have taken me hours to get right.

Cool, right?

The High‑Speed Ride

The obvious: AI makes you feel unstoppable

  • Projects that used to take a week suddenly take two days.
  • The friction just… disappears.
  • Stuck on a gnarly regex? AI handles it.
  • Can’t remember that one library API? AI shows you.
  • That weird bug that’s been driving you crazy? AI suggests three possible fixes.

The momentum is the real drug. You actually finish things. You don’t abandon projects halfway because you hit a wall. You ship something, feel good about it, and move on to the next thing.

For someone who struggles with overwhelm and perfectionism (guilty), this feels revolutionary.

The Downside Appears

A side‑project that turned sour

Two weeks in I shipped a simple tool that pulled data from an API and displayed it nicely. AI had written most of it—API calls, data parsing, error handling, the whole deal.

It worked. I was proud. I moved on.

A week later someone reported a bug: the data wasn’t displaying correctly in a specific case.

I opened the code and stared at it. I realized I had no idea how it worked. I could tell you what it did, but not why it worked that way, nor could I debug it because I didn’t understand the decisions AI had made.

So I did what felt natural—I asked AI to fix the bug. It did. The bug disappeared. But I felt… weird. I’d shipped a fix I didn’t understand, for a project I didn’t understand, to solve a problem I didn’t fully grasp. I was essentially a copy‑paster, but with better branding.

When complexity bites

A month into the experiment I tackled something more complex: multiple files, tricky state management, the kind of thing where small changes have big ripple effects.

Something broke—no clear error, just wrong behavior.

I spent three hours trying to fix it: pasting code into AI, getting suggestions that almost worked but not quite, watching the bug mutate into different bugs, feeling my confidence erode.

The truth about AI:

  • It’s great at writing code that works.
  • It’s okay at fixing code.
  • When you’re deep in the weeds of a complex system you don’t fully understand, AI can’t save you.

Because it doesn’t understand the context either. It doesn’t know the history of decisions you made. It can’t see the system as a whole—it just sees tokens and patterns.

That night, at 2 AM, staring at code I’d technically “written” but didn’t understand, I realized something:

I’d outsourced the thinking, not just the typing.

It’s not just about understanding your code (though that’s huge). It’s about growth.

I looked back at that month of AI‑assisted development and asked myself: “What did I learn?”

The honest answer? Not much.

  • I shipped more projects than usual, sure.
  • But I hadn’t become a better developer.
  • I hadn’t leveled up my problem‑solving.
  • I hadn’t really engaged with the interesting parts of the problems I was solving.

I’d become really good at… prompting, assembling pieces, shipping fast.

But the deep satisfaction that comes from genuinely solving a hard problem? That was gone.

And the scary part: I could feel myself getting dependent. I’d start a project and immediately think, “I’ll just have AI handle that part.” I was avoiding the hard stuff—the stuff that actually makes you grow.

What Works for Me

✅ Good uses of AI

  • Boilerplate: project setup, config files, repetitive patterns
  • Research: “What’s the best way to parse JSON in Go?” or “How does this library work?”
  • Unblocking: when I’m genuinely stuck, AI shows options I hadn’t considered
  • Code review: “Find security issues in this” or “What would you improve?”
  • Tests: generating unit tests, especially for edge cases I hadn’t thought of
  • Documentation: writing READMEs, adding comments (but only after I understand what’s happening)

❌ Bad uses of AI

  • Complex business logic: if I don’t understand the core problem deeply, no amount of AI will save me
  • Security‑sensitive code: AI misses subtle issues, and I need to be able to defend every security decision
  • Performance‑critical paths: AI writes “okay” code, not fast code
  • Architecture decisions: AI doesn’t know my context, constraints, or trade‑offs
  • Production debugging: if I don’t understand the system, I’m helpless when things break

The Rule I Live By

Never ship code you couldn’t debug yourself.

Before committing AI‑generated code I ask myself:

  1. Can I explain what this code does, line by line?
  2. Do I understand why it’s written this way and not some other way?
  3. If this breaks at 3 AM on a Sunday, could I fix it without AI’s help?

If the answer to any of these is “no,” I don’t ship it. Period.

Concrete Example

Recently I built a CLI tool that kills processes running on specific ports. Simple stuff. Here’s how AI helped:

# Prompt to AI
Write a Go CLI that:
- Accepts a port number as an argument
- Finds the PID(s) listening on that port
- Kills those processes (cross‑platform)
- Provides helpful error messages

The AI generated:

package main

import (
    "flag"
    "fmt"
    "os"
    "os/exec"
    "runtime"
    "strconv"
    "strings"
)

func main() {
    port := flag.Int("port", 0, "Port number to free")
    flag.Parse()

    if *port < 1 || *port > 65535 {
        fmt.Println("Please provide a valid port number (1-65535).")
        os.Exit(1)
    }

    pids, err := findPIDs(*port)
    if err != nil {
        fmt.Printf("Error finding PID(s): %v\n", err)
        os.Exit(1)
    }

    if len(pids) == 0 {
        fmt.Printf("No process is listening on port %d.\n", *port)
        return
    }

    for _, pid := range pids {
        if err := killPID(pid); err != nil {
            fmt.Printf("Failed to kill PID %s: %v\n", pid, err)
        } else {
            fmt.Printf("Killed PID %s listening on port %d.\n", pid, *port)
        }
    }
}

// findPIDs returns a slice of PIDs (as strings) listening on the given port.
func findPIDs(port int) ([]string, error) {
    var cmd *exec.Cmd
    portStr := strconv.Itoa(port)

    switch runtime.GOOS {
    case "windows":
        // netstat -ano lists every connection with its owning PID;
        // we filter for the port ourselves below.
        cmd = exec.Command("netstat", "-ano")
    default:
        // lsof -t prints only the PIDs of processes using the port.
        cmd = exec.Command("lsof", "-i", ":"+portStr, "-t")
    }

    out, err := cmd.Output()
    if err != nil {
        // lsof exits non-zero when nothing matches the filter; if there is
        // no stderr output, treat that as "no processes found", not an error.
        if exitErr, ok := err.(*exec.ExitError); ok && len(exitErr.Stderr) == 0 {
            return []string{}, nil
        }
        return nil, err
    }

    lines := strings.Split(strings.TrimSpace(string(out)), "\n")
    seen := make(map[string]bool)
    var pids []string
    for _, line := range lines {
        if runtime.GOOS == "windows" {
            // Typical netstat line:
            //   TCP    0.0.0.0:8080    0.0.0.0:0    LISTENING    1234
            fields := strings.Fields(line)
            if len(fields) < 5 || !strings.HasSuffix(fields[1], ":"+portStr) {
                continue
            }
            pid := fields[len(fields)-1]
            if !seen[pid] {
                seen[pid] = true
                pids = append(pids, pid)
            }
        } else {
            // lsof -t output is one PID per line.
            pid := strings.TrimSpace(line)
            if pid != "" {
                pids = append(pids, pid)
            }
        }
    }
    return pids, nil
}

// killPID attempts to terminate the process with the given PID.
func killPID(pid string) error {
    var cmd *exec.Cmd
    switch runtime.GOOS {
    case "windows":
        cmd = exec.Command("taskkill", "/PID", pid, "/F")
    default:
        cmd = exec.Command("kill", "-9", pid)
    }
    return cmd.Run()
}

I reviewed every line, added a few comments, wrote a small test suite, and only then committed it. That’s the workflow I now follow.
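
The test suite was nothing fancy: a table‑driven test in Go’s standard style. A sketch, written against a hypothetical validPort helper (in the shipped code the range check lives inline in main, so extracting it is my own refactor):

```go
package main

import "testing"

// validPort mirrors the range check from main, extracted so it can be
// tested directly. Hypothetical helper name, not from the generated code.
func validPort(p int) bool {
	return p >= 1 && p <= 65535
}

func TestValidPort(t *testing.T) {
	cases := []struct {
		port int
		want bool
	}{
		{-1, false},
		{0, false},
		{1, true},
		{8080, true},
		{65535, true},
		{65536, false},
	}
	for _, c := range cases {
		if got := validPort(c.port); got != c.want {
			t.Errorf("validPort(%d) = %v, want %v", c.port, got, c.want)
		}
	}
}
```

Writing out the boundary cases (0, 1, 65535, 65536) was the part where I actually learned something; the AI had happily suggested only the happy path.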

TL;DR

  • AI can turbo‑charge boilerplate, research, unblocking, reviews, tests, and docs.
  • It should never replace your understanding of core logic, security, performance, architecture, or production debugging.
  • Adopt a personal rule: Never ship code you can’t debug yourself.

Use the assistant as a partner, not a substitute for thinking. 🚀

What I Learned from the Boilerplate

  • It showed me how to find processes by port (something I hadn’t done before).
  • It suggested cross‑platform considerations I hadn’t thought of.
  • It helped me write helpful error messages.

Here’s What I Did Myself

  1. Decided on the overall structure and flow.
  2. Wrote the core logic (finding the process, killing it safely).
  3. Tested edge cases manually to understand failure modes.
  4. Read every line of code before committing it.
  5. Prepared to explain the entire program if anyone asked.

Result: I built it in two days instead of five, and I actually understood what I built.

When a bug was reported on Windows (naturally), I knew exactly where to look—no guessing, no copy‑pasting errors into an AI hoping for the best.

Should You Use AI When You Code?

The better question is: what’s your goal?

  • Ship a side project fast and don’t care about growth?
    Go for it. Let AI write everything, ship quickly, and learn what happens.

  • Build something serious—something that must last and might break in production at 3 AM?
    You need to understand what you’re building. Use AI as a tool, not a replacement for thinking.

  • Early in your career and trying to grow?
    Be careful. There’s no substitute for struggling through hard problems; that struggle is where learning happens.

  • Experienced and just want to move faster?
    Use AI intelligently. Let it handle the stuff below your skill level and focus on the work that’s at or above it.

My Current Balance (One Month Later)

  • AI writes about 50-60% of my code.
  • I read 100 % of it.
  • I understand 100 % of it.
  • I could debug 100 % of it if I had to.

The speed gains and momentum are still there, and the growth has returned: the satisfaction of solving hard problems and the confidence that comes from truly understanding what I’ve built.

The Takeaway

AI is an incredible tool—maybe the best thing to happen to developer productivity in my lifetime. Like any tool, it can amplify your abilities or become a crutch. The difference isn’t in the tool itself; it’s in how you use it.

  • Use AI. Embrace it to become faster, more productive, and less frustrated.
  • Never ship code you don’t understand. Your future self—debugging production issues at 2 AM—will thank you.

What’s Your Experience?

Have you found a balance that works for you? I’d love to hear your story. Share your thoughts on AI‑assisted coding in the comments below!
