šŸ¦‰ From Broken Models to Living Systems: My Journey Building AI Without a GPU

Published: December 25, 2025 at 11:56 PM EST
3 min read
Source: Dev.to

Introduction

A brief look at the journey so far: early missteps, ongoing experiments, and the lessons learned along the way.

Project #1: Lynqbit — My Favorite Failure

Lynqbit was my first real love: a 90 M‑parameter model that was ambitious, poetic, and a little weird.

  • Failure points
    • System configuration issues
    • No proper training infrastructure
    • No GPU to sustain iteration

Two months of intense work vanished, and the project collapsed. It hurt, but it taught me that failure is a harsh but clear teacher.

Insight #1: Training Should Flow, Not Break

Lynqbit’s death sparked a question:

What if training didn’t depend on one fragile system?
What if data and learning could stream?

That idea guided my next experiments.

Project #2: Barn Owl AI — Short Life, Big Lesson

Barn Owl AI explored streamed training:

  • Concept: cloud‑hosted dataset, sampling‑based training, continuous learning (a minimal sketch follows this list).
  • Reality: the cloud dataset shut down after a few days, bugs remained unfixed, and the project failed.
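
Barn Owl died fast, but the idea behind it is easy to sketch. Below is a minimal, hypothetical version of sampling‑based streamed training in PyTorch: the "remote" stream is simulated with random tensors, and the model and shapes are stand‑ins, not Barn Owl's actual code.

```python
# Hypothetical sketch of sampling-based streamed training.
# The "remote" source is simulated here; in Barn Owl's case it
# would have been a cloud-hosted dataset fetched sample by sample.
import torch
from torch import nn
from torch.utils.data import IterableDataset, DataLoader

class StreamedDataset(IterableDataset):
    """Yields (input, label) pairs one at a time, like a remote stream."""
    def __iter__(self):
        while True:
            x = torch.randn(16)           # stand-in for a fetched sample
            y = (x.sum() > 0).long()      # stand-in label
            yield x, y

model = nn.Linear(16, 2)                  # stand-in model
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loader = DataLoader(StreamedDataset(), batch_size=32)

for step, (x, y) in enumerate(loader):
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step == 200:                       # a stream has no natural end
        break
```

The appeal: no single machine ever has to hold the whole dataset, so a crash costs one batch of progress instead of the whole run.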

Lesson learned: the loss was small, but the insight was huge.

Project #3: Elf Owl AI — My First Real Win

Elf Owl AI was a small, chaotic, ā€œaliveā€ model:

  • 25 M parameters
  • Creative, hallucinatory, with optional grammar and a moody personality

Successes

  • Fully trained and open‑sourced
  • Publicly released (imperfect, but it existed)

Existence matters.

Project #4: Xenoglaux AI (Xeno AI) — The Ongoing Battle

I’m now building Xenoglaux AI, named after a real owl species and scaled up in both size and intelligence.

  • GitHub:
  • Dataset: 75,000+ hand‑crafted and open‑source entries, designed for streamed training
  • Modular evolution: Part 2 of the Owl Series

Training bottleneck

  • ~15 h on a GPU (acceptable)
  • Way too slow on CPU
  • Online TPUs barely cooperate

The hardware limitation, not the model or data, is the current obstacle.

Side Quest: A Game That Learns You

While struggling with Xeno, I built a game with an AI opponent that learns from the player:

  1. Match 1 – AI starts as a literal block.
  2. Data collection – Player moves, positions, and decisions are stored as JSON.
  3. Retraining loop – After each match, the AI loads the last checkpoint, retrains on the new data, and repeats (sketched below).
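
For anyone curious, here is a minimal sketch of that loop, assuming the match logs are a JSON list of `{"state": [...], "action": n}` records; the model, file names, and schema are illustrative, not the game's actual code.

```python
# Hypothetical per-match retraining loop: load the latest checkpoint,
# fine-tune on the newly logged match data, save a new checkpoint.
import glob
import json
import os

import torch
from torch import nn

def latest_checkpoint(folder="checkpoints"):
    ckpts = sorted(glob.glob(os.path.join(folder, "match_*.pt")))
    return ckpts[-1] if ckpts else None

def retrain_after_match(match_log="match_data.json", epochs=3):
    model = nn.Linear(8, 4)               # stand-in for the opponent net
    ckpt = latest_checkpoint()
    if ckpt:                              # resume from the last match
        model.load_state_dict(torch.load(ckpt))

    with open(match_log) as f:            # player moves logged as JSON
        data = json.load(f)
    x = torch.tensor([d["state"] for d in data], dtype=torch.float32)
    y = torch.tensor([d["action"] for d in data])

    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    os.makedirs("checkpoints", exist_ok=True)
    n = len(glob.glob("checkpoints/match_*.pt"))
    torch.save(model.state_dict(), f"checkpoints/match_{n:04d}.pt")
```

Each match adds a little data and a little training, which is why the opponent's skill compounds over hundreds of matches.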

Results (private testing)

  • 20–30 matches → decent player
  • 400–500 matches → unbeatable

This is ā€œearned intelligence,ā€ not scripted behavior.

What I’ve Realized So Far

  • Failure isn’t wasted work; it’s compressed knowledge.
  • Small models can still feel alive.
  • Streaming + incremental learning is underrated.
  • Hardware limits creativity more than ideas do.

If you’re building with limited resources, you’re not alone.

Next Steps (Real Talk)

  1. Rename Strategy for Xeno

    • Keep ā€œXenoglaux AIā€ as the series name.
    • Use model‑specific tags like Xeno-25M, Xeno-40M, Xeno-Lite to avoid confusion.
  2. Stop Full Retraining — Go Incremental

    • Train on small chunks (2k–5k samples).
    • Save checkpoints aggressively.
    • Resume training daily instead of 15‑hour marathons.
    • Think ā€œdrip learning,ā€ not floods (see the combined sketch after this list).
  3. Exploit What You Have (CPU + Time)

    • Use lower precision (fp16/int8 if possible).
    • Fewer epochs, more iterations.
    • Smaller batch sizes + gradient accumulation.
    • Slow ≠ impossible; it just requires discipline.
  4. Publish the Game AI Idea

    • Online learning, self‑adapting opponent, personalized difficulty curve.
    • Worth a standalone post on Dev.to.
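
Steps 2 and 3 combine naturally, so here is one hedged sketch of both: chunked ā€œdripā€ training that checkpoints aggressively and uses small micro‑batches with gradient accumulation to emulate a larger batch on CPU. The model, chunk loader, and file name are all illustrative.

```python
# Hypothetical "drip learning" loop: small chunks, aggressive
# checkpoints, gradient accumulation to emulate a bigger batch.
import os

import torch
from torch import nn

CKPT = "xeno_drip.pt"        # illustrative checkpoint file
ACCUM_STEPS = 8              # effective batch = 64 * ACCUM_STEPS

model = nn.Linear(32, 32)    # stand-in for the real model
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

start_chunk = 0
if os.path.exists(CKPT):     # resume yesterday's run, don't restart
    state = torch.load(CKPT)
    model.load_state_dict(state["model"])
    opt.load_state_dict(state["opt"])
    start_chunk = state["chunk"]

def load_chunk(i, size=2000):
    """Stand-in for loading one 2k-5k sample chunk of the dataset."""
    x = torch.randn(size, 32)
    return x, x              # toy reconstruction target

for chunk_id in range(start_chunk, start_chunk + 3):  # a few chunks per day
    x, y = load_chunk(chunk_id)
    for i in range(0, len(x), 64):                    # small micro-batches
        xb, yb = x[i:i + 64], y[i:i + 64]
        loss = nn.functional.mse_loss(model(xb), yb) / ACCUM_STEPS
        loss.backward()
        if (i // 64 + 1) % ACCUM_STEPS == 0:
            opt.step()
            opt.zero_grad()
    torch.save({"model": model.state_dict(),          # checkpoint every chunk
                "opt": opt.state_dict(),
                "chunk": chunk_id + 1}, CKPT)
```

Kill the process at any point and the next run picks up from the last finished chunk, which is exactly what an overheating laptop needs. (Low precision is left out of the sketch; on CPU, `torch.autocast("cpu", dtype=torch.bfloat16)` is the more reliable option where the hardware supports it.)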

I’m 15, with no GPU, lab, or funding—just an overheating laptop, relentless ideas, and projects that fail loudly. What I’ve learned isn’t how to train an AI; it’s how to stay standing when a favorite project dies. Failure is a redirect, not a stop sign. Small models can feel alive, unfinished work still counts, and every limitation forces creativity.

If a 15‑year‑old with no GPU can keep building, failing, and learning, then perhaps the real system we’re training isn’t the AI… it’s ourselves. šŸ¦‰āœØ
