I Built a Full-Stack F1 Fantasy Platform in 4 Weeks — Solo, With AI Agents

Published: February 7, 2026 at 09:27 PM EST
11 min read
Source: Dev.to

Introduction

This post was originally published on binaryroute.com.

In late December 2025, during a short holiday break, I opened my laptop with a simple idea — build the F1 fantasy‑league platform I’d always wanted as a fan.

Four weeks later, Formula1.Plus was live:

  • 70+ database tables
  • 30+ API modules
  • Race predictions, live leaderboards, private leagues, telemetry dashboards, news aggregation, community features, and a full admin panel

All built and shipped by one developer.

This post is a technical walkthrough of how I pulled it off — the framework I built first, the stack I chose, the AI workflow that made it possible, and what I’d do differently.


The Problem of Solo Full‑Stack Development

If you’ve ever tried building a full‑stack product alone, you know the pain. You become:

  • Architect
  • Front‑end developer
  • Back‑end developer
  • DBA
  • DevOps engineer
  • QA tester

All at once.

Every context switch costs you time. You spend more time on plumbing than on the product. Projects stall, scope shrinks, or you burn out halfway through.

A project with this scope — predictions engine, scoring system, leaderboards, leagues, news aggregation with semantic search, telemetry dashboards, background‑job processing, passkey auth — would have been a 6‑month grind. Probably abandoned by month 3.

Two things changed the math:

  1. A framework I built to eliminate boilerplate.
  2. AI agents acting as pair programmers.

ProjectX – The Boilerplate‑Busting Framework

Before writing a single line of F1 code, I built ProjectX — an opinionated full‑stack TypeScript framework with a CLI.

Idea: Define your database schema once, run one command, and get everything generated.

```shell
projectx crud --models drivers,races,predictions
```

That single command generates:

  • API routes with Hono, including input validation via Zod
  • Service layer with business logic and authorization hooks
  • Repository layer with Drizzle ORM queries
  • TanStack Query hooks for the frontend with proper cache keys
  • Unit‑test scaffolds for the service layer

All type‑safe, all wired up with dependency injection. No manual API type definitions — the frontend imports route types from the backend via Hono’s typed client.

Architecture Diagram (simplified)

```
HTTP Layer (Hono Routes)
        ↓
Middleware (Auth, Rate Limiting, Logging, Caching)
        ↓
DI Container
        ↓
Service Layer (Business Logic + Authorization)
        ↓
Repository Layer (Drizzle ORM)
        ↓
PostgreSQL
```

Every feature follows this pattern. Every feature gets its own folder. The AI agents I used later could navigate this structure instantly because it was consistent everywhere.
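The route → service → repository layering can be sketched in plain TypeScript, with an in-memory repository standing in for Drizzle and PostgreSQL. Class and method names here are illustrative, not ProjectX's actual API:

```typescript
// Hypothetical sketch of the layering: repository owns data access,
// service owns business logic and authorization, routes stay thin.
interface Driver { id: string; name: string }

// Repository layer: data access only (in-memory stand-in for Drizzle).
class DriverRepository {
  private rows = new Map<string, Driver>();
  insert(d: Driver): Driver { this.rows.set(d.id, d); return d; }
  findById(id: string): Driver | undefined { return this.rows.get(id); }
}

// Service layer: business logic plus an authorization hook.
class DriverService {
  constructor(private repo: DriverRepository) {}
  create(d: Driver, isAdmin: boolean): Driver {
    if (!isAdmin) throw new Error("forbidden"); // authorization hook
    return this.repo.insert(d);
  }
  get(id: string): Driver | undefined { return this.repo.findById(id); }
}

// An HTTP route would do nothing but parse input and delegate here.
const service = new DriverService(new DriverRepository());
service.create({ id: "VER", name: "Max Verstappen" }, true);
```

Because every feature repeats this exact shape, an AI agent that has seen one feature folder can generate the next one correctly.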

Repository Layout

```
f1plus/
├── apps/
│   ├── api/          # Hono backend
│   └── web/          # TanStack Start frontend
├── packages/
│   ├── db/           # Drizzle schemas + migrations
│   ├── db-sync/      # F1 data synchronization
│   ├── types/        # Shared TypeScript types
│   ├── ui/           # shadcn/ui component library
│   ├── emails/       # React Email templates
│   ├── env/          # Zod‑based env validation
│   └── tsconfig/     # Shared TS configs
```

Seven shared packages, two apps, one pnpm workspace. Everything shares types, nothing drifts.
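
The workspace itself is a one-file config; a `pnpm-workspace.yaml` matching this layout would presumably look like:

```yaml
# pnpm-workspace.yaml (illustrative)
packages:
  - "apps/*"
  - "packages/*"
```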

This structure was the single most important decision. Not because it’s novel — because it gave me and the AI agents a predictable codebase from day one.

Stack Choices & Rationale

TanStack Start

  • SSR‑ready
  • File‑based routing via TanStack Router
  • TanStack Query baked in for data fetching

The routing is fully type‑safe — route params, search params, loaders — all typed. Combined with Hono’s typed client on the backend, I get end‑to‑end type safety from the database to the component without writing a single manual type.

```typescript
// Frontend hook — generated by ProjectX CLI
export function useDriverStandings(seasonId: string) {
  return useQuery({
    queryKey: ['driver-standings', seasonId],
    queryFn: () =>
      api.standings.drivers.$get({ query: { seasonId } }).then((res) => res.json()),
  })
}
```

The query function is typed from the Hono route definition. Change the API response shape and TypeScript catches it in the frontend immediately.

Tailwind v4 + CSS Custom Properties

I defined a set of design tokens:

```css
--f1-bg-card
--f1-bg-secondary
--f1-border
--f1-text
--f1-text-muted
--f1-red
```

These resolve to different values in light and dark mode. Every component uses these tokens instead of hard‑coded colors, making the entire UI theme‑able with zero per‑component overrides.
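
Concretely, this works by giving each token a different value under a dark-mode selector. The values below are placeholders, not the site's real palette:

```css
/* Illustrative values only — the real palette isn't shown in the post. */
:root {
  --f1-bg-card: #ffffff;
  --f1-text: #111111;
  --f1-red: #e10600;
}

.dark {
  --f1-bg-card: #15151e;
  --f1-text: #f4f4f4;
  --f1-red: #ff1e00;
}

/* Components reference tokens, never raw colors. */
.card {
  background: var(--f1-bg-card);
  color: var(--f1-text);
}
```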

shadcn/ui

Provides accessible, unstyled primitives that you own. No dependency lock‑in. I customized every component to match the F1 aesthetic.

Hono

A 15 KB web framework that runs everywhere — Node, Cloudflare Workers, Deno, Bun. Chosen for three reasons:

  1. Typed routes — hono/client gives zero‑runtime‑overhead type inference.
  2. Middleware composition — rate limiting, auth, logging, body limits, and CORS are all composable.
  3. Performance — it's fast, and measurably so.

```typescript
// Rate limiting tiers
const rateLimits = {
  global:    { max: 200, window: '1m' },
  mutations: { max: 30,  window: '1m' },
  expensive: { max: 20,  window: '1m' },
}
```
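
A minimal fixed-window limiter in TypeScript illustrates how tiers like these can be enforced. This is a sketch under assumed semantics, not the actual middleware:

```typescript
// Toy fixed-window rate limiter; names and semantics are illustrative.
type Tier = { max: number; windowMs: number };

class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();
  constructor(private tier: Tier) {}

  // Returns true if the request is allowed within the current window.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.tier.windowMs) {
      // Start a fresh window for this key.
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.tier.max;
  }
}

// Tiered limits mirroring the config above.
const mutationsLimiter = new RateLimiter({ max: 30, windowMs: 60_000 });
```

In Hono, each tier would be mounted as middleware on the relevant route groups, keyed by user or IP.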

Drizzle ORM

The sweet spot between raw SQL and heavyweight ORMs. Type‑safe queries, zero runtime overhead, and the schema definitions are plain TypeScript.

```typescript
export const drivers = pgTable('drivers', {
  id:           text('id').primaryKey(),
  name:         text('name').notNull(),
  abbreviation: varchar('abbreviation', { length: 3 }),
  nationality:  text('nationality'),
  dateOfBirth:  date('date_of_birth'),
  // …
})
```

pgvector + HuggingFace Transformers

Used for semantic search on news articles — embedding articles with Transformers and querying by similarity. This lets users search F1 news by meaning, not just keywords.
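
Conceptually, the search ranks articles by vector similarity to the query embedding. A toy TypeScript version of the idea (in production the comparison happens inside PostgreSQL via pgvector's distance operators, not in application code):

```typescript
// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank articles by similarity to a query embedding (most similar first).
function rankArticles(
  query: number[],
  articles: { title: string; embedding: number[] }[],
): string[] {
  return [...articles]
    .sort((x, y) =>
      cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .map((a) => a.title);
}
```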

Background Job Workers

| Worker | Responsibility |
| --- | --- |
| Scoring | Calculates race scores and updates leaderboards |
| News Sync | Fetches and processes F1 news articles |
| Email | Sends transactional emails via React Email + Resend |
| F1DB Sync | Auto‑syncs historical data from F1DB, the community‑maintained open‑source F1 dataset on GitHub |
| Task | General‑purpose async tasks |
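
All of the workers follow the same register-handler/enqueue-job split. A toy synchronous queue shows the shape; the real system uses Redis-backed queues (with Bull Board on top), and every name below is illustrative:

```typescript
// Minimal in-memory job queue sketch; the production version is
// Redis-backed and asynchronous.
type Job = { name: string; payload: unknown };
type Handler = (payload: unknown) => void;

class JobQueue {
  private handlers = new Map<string, Handler>();
  private jobs: Job[] = [];

  register(name: string, h: Handler): void { this.handlers.set(name, h); }
  enqueue(job: Job): void { this.jobs.push(job); }

  // Process every queued job once; returns how many were processed.
  drain(): number {
    let processed = 0;
    for (const job of this.jobs.splice(0)) {
      this.handlers.get(job.name)?.(job.payload);
      processed++;
    }
    return processed;
  }
}
```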

The AI Pair‑Programming Workflow

  1. Prompt Generation – I described the desired feature in natural language.
  2. Code Scaffold – The AI produced the skeleton (CLI command, file layout).
  3. Iterative Refinement – I asked for specifics (validation, auth hooks, tests).
  4. Review & Merge – I inspected the diff, ran the tests, and merged.

Because ProjectX enforced a consistent, predictable structure, the AI could reliably locate the correct files, import the right types, and respect existing conventions.

What I’d Do Differently

| Area | Original Approach | Revised Approach |
| --- | --- | --- |
| Testing | Generated unit‑test scaffolds, but wrote most tests manually later. | Adopt property‑based testing (fast‑check) from day 0 for the service layer. |
| CI/CD | Simple GitHub Actions workflow. | Add preview deployments per PR (Vercel/Cloudflare Pages) to catch UI regressions early. |
| Observability | Basic console logs. | Integrate OpenTelemetry + Loki/Grafana for distributed tracing of API calls and background jobs. |
| Feature Flags | Hard‑coded toggles. | Use a LaunchDarkly‑style flag library for gradual rollouts of new scoring algorithms. |
| Documentation | README + inline comments. | Generate API docs automatically from Hono route types (typedoc + swagger-ui). |

Closing Thoughts

Building a production‑grade, full‑stack fantasy‑league platform alone is daunting, but with:

  • An opinionated, code‑generating framework (ProjectX) that removes boilerplate,
  • AI agents that act as pair programmers, and
  • A well‑chosen, type‑safe stack (TanStack Start, Hono, Drizzle, shadcn/ui),

the effort shrinks from months to weeks.

If you’re embarking on a solo full‑stack adventure, start by standardising your architecture and leveraging AI for repetitive scaffolding. The rest will follow much more smoothly.

Happy coding!

Queue Monitoring

Bull Board provides an admin dashboard for monitoring all queues.

Authentication

BetterAuth handles OAuth (Google, Discord, X) and Passkey/WebAuthn support. Passkeys are the future of auth: passwordless, phishing‑resistant, biometric. The whole setup took one afternoon.

The Part That Made 4 Weeks Possible

I used Claude Code, OpenAI Codex, and Gemini throughout the entire build — not as autocomplete, but as collaborators that could hold the full codebase in context.

  • Claude Code was the core developer — writing, refactoring, and debugging code directly in the repository.
  • Claude, Codex, and Gemini served as architects: every non‑trivial feature went through an extensive design process where I gathered perspectives from multiple models before settling on a final implementation plan. Different models catch different edge cases, and the overlap builds confidence.

Feature Scaffolding

I’d describe a feature (e.g., “add a predictions system where users pick drivers for each race, with a lock‑of‑the‑week mechanic for bonus points”), and the agent would generate:

  • the schema
  • the service
  • the routes
  • the validation
  • the frontend hooks

Not perfect on the first pass, but ~80% of the way there. I'd review, adjust, and iterate.

Parallel Code Reviews

When I needed to migrate 50+ component files from hard‑coded bg-white/10 opacity patterns to CSS custom properties for light‑mode support, I launched three AI agents in parallel — each handling a batch of files, following the same mapping guide. What would have been a full day of tedious find‑and‑replace was done in minutes with consistent results.

Debugging

Example: the track‑circuit SVG component had a blur effect from 8 stacked CSS drop-shadow filters that were compounding exponentially. The agent identified the root cause (each filter applies to the cumulative result of all previous filters), removed the outline system, and adjusted stroke widths — a fix I might have spent an hour on.

Consistency

As the codebase grew past 50+ components, the AI kept patterns consistent:

  • same naming conventions
  • same file structure
  • same validation approach

This is where solo developers usually start cutting corners.

Key insight: AI agents are only as good as the patterns you give them.

ProjectX’s opinionated architecture gave the AI guardrails:

  • Services live in features/<name>/<name>.service.ts
  • Validation uses Zod schemas in features/<name>/validations.ts
  • Routes are thin – they delegate to services
  • Front‑end hooks follow TanStack Query conventions

Without that consistency, you’re just generating spaghetti faster. The framework wasn’t just for me — it was for the AI too.

Deployment Story

TanStack Start builds to Cloudflare Workers via the Vite plugin. The frontend runs at the edge, globally distributed, with near‑zero cold starts.

  • Cost: practically free at this scale; Cloudflare’s free tier is generous.

Railway runs the Hono API, PostgreSQL, and Redis. Docker multi‑stage build keeps the image lean. Health checks, auto‑restarts, and deploy previews are built‑in.

railway.toml

```toml
[deploy]
healthcheckPath = "/health/ready"
healthcheckTimeout = 30
restartPolicyType = "on_failure"
restartPolicyMaxRetries = 5
```

Two Workflows

  1. CI – lint, type‑check, build, test on every PR
  2. Deploy – manual trigger, runs migrations first, then deploys API to Railway and frontend to Cloudflare

```
Push to main → CI passes → Trigger deploy →
  Run migrations → Deploy API (Railway) → Deploy Web (Cloudflare)
```
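
The CI side of this could be as small as a single workflow file; the job names and script commands below are assumptions, not the repo's actual config:

```yaml
# .github/workflows/ci.yml (illustrative)
name: CI
on: [pull_request]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v4
      - run: pnpm install --frozen-lockfile
      - run: pnpm lint && pnpm typecheck && pnpm build && pnpm test
```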

The entire production infrastructure — API server, PostgreSQL, Redis, edge‑deployed frontend, CI/CD — costs approximately $20 / month (or ~$10 / month with serverless database and Redis options). No Kubernetes. No Terraform. No DevOps engineer.

What Shipped in 4 Weeks

Predictions & Scoring

  • Race predictions with driver and constructor picks
  • Lock‑of‑the‑week mechanic for bold predictions (bonus points)
  • Last‑place picks, constructor top‑3, bonus picks
  • Automated scoring engine via background workers
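
The scoring rules themselves aren't spelled out in the post, but the engine's core is naturally a pure function over predictions and results. A sketch with hypothetical point values:

```typescript
// Hypothetical point values — the real scoring rules aren't published here.
type Prediction = { pick: string; isLockOfTheWeek: boolean };

function scorePrediction(p: Prediction, raceWinner: string): number {
  const base = p.pick === raceWinner ? 25 : 0;          // correct winner pick
  const bonus = p.isLockOfTheWeek && base > 0 ? 10 : 0; // lock-of-the-week bonus
  return base + bonus;
}
```

Keeping scoring pure like this is what makes it easy to run inside a background worker and to unit-test exhaustively.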

Leaderboards & Leagues

  • Live leaderboards with all‑time and per‑championship rankings
  • Private leagues with invite codes
  • League‑specific leaderboards and standings

Telemetry & Data

  • Driver DNA breakdowns and performance analysis
  • Historical data auto‑synced from F1DB (community‑maintained open‑source dataset)
  • Circuit profiles with past results and statistics
  • Recharts‑powered visualizations

Community

  • Grand Stand — polls, discussions, community engagement
  • News aggregation with semantic search (pgvector)
  • Activity feeds and social features

Admin

  • Event management and prediction configuration
  • Queue monitoring via Bull Board
  • Audit logs, contact management, poll templates

Auth & Infrastructure

  • OAuth (Google, Discord, X) + Passkey/WebAuthn
  • Rate limiting (tiered: global, mutations, expensive queries)
  • OpenTelemetry for distributed tracing
  • Structured logging with Pino

The F1 season is approaching, and I’m getting early users in before Round 1.

ProjectX – Going Open Source

ProjectX – the framework that made this possible – will be open‑sourced soon. It is a single‑monorepo CLI that scaffolds web, mobile, and browser extensions with the full architecture out of the box.

Features include:

  • Plugin system