Building an Autonomous Template Business, kind of.
Source: Dev.to
One of the side projects I’ve been working on recently was a multi‑agent system to automate the creation of ready‑to‑use, feature‑complete templates for sale on Gumroad. It began as an academic exercise to explore agents, multi‑agent orchestration, and LLM API integration. I’ve since pivoted to a more hands‑on approach, but here are a few lessons from my short attempt at building a side project for hybrid income.
The code is available here for anyone who wishes to check it out and maybe build upon it.
Lesson 1 – Spend time on your prompts and configuration documents
Whether it’s the initial prompt you use to generate tasks and ideas, or the various configuration documents like AGENTS.md, it’s critical to spend time upfront tailoring them to your liking. I didn’t invest enough time initially and only later saw the impact of that oversight. Apart from any guardrails provided by your AI tool or built into your system, prompts and configurations are the only points where you can heavily influence the direction the agents take. Be explicit about what you do and what you don’t want—the latter is equally, if not more, important.
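As an illustration, here is the kind of explicit do/don’t structure I mean for a configuration document like AGENTS.md. The headings and rules below are hypothetical examples to show the shape, not my actual config:

```markdown
# AGENTS.md (illustrative excerpt)

## Scope
- Generate exactly one template per run; do not create sibling projects.

## Do
- Use only stable, generally-available releases of every dependency.
- Ask before adding any dependency not already listed in the manifest.

## Don't
- Do not use experimental or canary-only framework features.
- Do not guess at future versions; pin the versions given above.
```

Note that the “Don’t” section is doing as much work as the “Do” section, which matches the point above: stating what you don’t want is equally, if not more, important.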
Lesson 2 – Context management is a pain
This is probably obvious to many who have used AI tools for a while, and there are numerous ways to mitigate it. It becomes especially tricky in a multi‑agent scenario where linear continuity is expected from one hand‑off to the next. Agents need to share enough context to understand the current question or task, but keeping 100% of the context isn’t always feasible or necessary. Compacting context is usually sufficient for acceptable quality, but when a hand‑off must amplify context rather than compress it, a naive approach can be detrimental.
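To make the compacting hand‑off concrete, here is a minimal Python sketch. The message shape and the `summarize` stub are illustrative assumptions, not the actual system’s code; a real system would replace the stub with an LLM summarization call:

```python
# Sketch: compact a conversation before handing it to the next agent.
# Assumption: messages are dicts like {"role": ..., "content": ...}.

def summarize(messages):
    """Stand-in for an LLM summarization call; here it only records
    how many turns were collapsed."""
    return f"[summary of {len(messages)} earlier turns]"

def compact_for_handoff(messages, keep_last=4):
    """Keep the system prompt and the most recent turns verbatim,
    and collapse everything in between into a single summary turn."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    if len(rest) <= keep_last:
        return system + rest
    older, recent = rest[:-keep_last], rest[-keep_last:]
    summary = {"role": "user", "content": summarize(older)}
    return system + [summary] + recent
```

The failure mode described above shows up exactly here: if the next agent needs details that only existed in the collapsed turns (component names, file paths), a naive summary has already thrown them away.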
I first noticed this when I had to introduce chunking for Anthropic API calls because of hard token limits. Starting with naïve chunking led to duplicate components and a messy integration. As I moved to more context‑aware chunking strategies, the output quality improved, consistency across files increased, and the amount of manual QA I needed to perform dropped.
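A rough Python sketch of the difference between the two chunking approaches. The word-based token estimate and the file-boundary heuristic are illustrative assumptions; a real implementation would count tokens with the provider’s tokenizer:

```python
# Sketch: naive vs boundary-aware chunking for API calls with a hard
# token limit. Tokens are approximated as whitespace-separated words.

def naive_chunks(text, budget=50):
    """Split on a fixed word budget, ignoring structure. A chunk can
    cut a component in half, so downstream calls regenerate it and
    produce duplicates."""
    words = text.split()
    return [" ".join(words[i:i + budget])
            for i in range(0, len(words), budget)]

def boundary_chunks(files, budget=50):
    """Pack whole (name, body) files into chunks, never splitting one
    file across a boundary, so each chunk stays self-consistent."""
    chunks, current, used = [], [], 0
    for name, body in files:
        cost = len(body.split())
        if current and used + cost > budget:
            chunks.append(current)
            current, used = [], 0
        current.append((name, body))
        used += cost
    if current:
        chunks.append(current)
    return chunks
```

Even this simple boundary rule keeps each generated component intact within a single call, which is where most of the consistency gains I saw came from.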
I haven’t perfected this yet, but I’m actively exploring more robust review and intervention processes for my next pivot.
Lesson 3 – Be very, very explicit
There are many schools of thought here; some people are happy to delegate decision‑making to the agents. I prefer to retain more control. Agentic workflows can speed up the path from idea to solution, but a human operator must still make the final decisions.
I made the mistake of being vague in requests like “follow the latest engineering trends” or “stay up to date with 2026 versions.” The agent interpreted that as “guess what will trend in 2026 and use it,” which resulted in experimental, canary‑only features being included in what was supposed to be a production‑ready template. Being a subject‑matter expert helps the operator ask the right questions and define appropriate guardrails. I learned this the hard way and will apply it to future projects.
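For contrast, a hedged example of how I would now phrase the same request; the wording is a placeholder, not a recommendation:

```text
Vague:    "Stay up to date with 2026 versions."
Explicit: "Use exactly the versions pinned in the dependency manifest.
           Do not upgrade them, and do not use any feature documented
           as experimental, canary, or release-candidate."
```

The explicit version removes the room for the agent to “guess the future,” which is what produced the canary‑only features in the first place.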
Lesson 4 – Pick a tool that works for you
Given the rapid development of AI‑assisted tools, it’s easy to jump between them. I’ve tried Cursor, Windsurf, VS Code with Copilot, JetBrains with Junie, and many others. Some are better for managing existing production or legacy codebases, while others excel at creating something from scratch. Antigravity, an opinionated IDE, works well for me when kick‑starting new projects.
My advice: ignore the hype, experiment with the options available, and craft a workflow that fits your style. Since the operator’s primary role is to ask the right questions, minimizing distractions is key.
If this was interesting, you can check out my site at ssong.dev or follow me on GitHub.