Tech Stack Lessons from scaling 20x in a year

Published: January 9, 2026, 07:13 AM EST
5 min read
Source: Dev.to

A year ago, I wrote about our tech stack and how it helped us run a lean cloud‑computing startup. Since then, we’ve scaled over 20×. That kind of growth is fun, but it also breaks a lot of things and assumptions, forcing you to make hard choices quickly :‑D

Here’s what changed, what stayed the same, and what we learned along the way.

Tech Stack

What Stayed the Same

Some things just work.

  • Frontend – still Nuxt with TypeScript and Tailwind (RIP @adamwathan).
  • Backend – still Go with Gin.
  • Infrastructure – still on Hetzner bare‑metal, using Firecracker for virtualization.
  • IaC – still Terraform.
  • Caching – still Redis.
  • Customer support – still Crisp.
  • Transactional email – still AWS SES.

If it ain’t broke, don’t fix it.

But plenty did break — or became too expensive to keep running the same way.

Observability: Axiom → Parseable

This was our biggest operational change. Last year I praised Axiom for logs; it was great on the free tier—until we scaled.

As traffic grew, we needed better tracing and more detailed logs. Our Axiom bill exploded past €1,000 / month and kept climbing. At that point you have to ask: is this sustainable? Obviously not.

We migrated to Parseable, self‑hosted on Kubernetes with MinIO for S3‑compatible storage, all running on bare‑metal. The product still feels early, but the team is responsive and ships fixes fast when something breaks. Big shout‑out to Anant and Deba!

Would I recommend it? If you'd rather spend time than boatloads of money, yes. Self‑hosting observability is work, but at our “scale” (we’re still tiny) it’s worth it. We still use Grafana for dashboards and alerts; that hasn’t changed (for now; the bill is starting to hurt).

Object Storage: Backblaze → IONOS / Hetzner

Last year we used Backblaze for blob storage. It was cheap and reliable. The problem wasn’t technical; it was political.

As we grew, enterprise customers—especially European ones—started pushing back on storing their data with US providers (GDPR, data‑sovereignty, internal policies). The message was clear: No US providers! So our crusade to replace all US providers began with Backblaze.

We moved to IONOS and Hetzner for object storage. Are they as good as Backblaze? No, not even close. But they’re European, they’re (barely) good enough, and they satisfy our customers’ requirements. Honestly, if you’re not required to use them, I wouldn’t. It feels like we don’t really have a choice here.

CDN: Cloudflare → Bunny

Same story as storage. Cloudflare is an incredible product with features we’ll never use, but customers asked for a European alternative.

Bunny fits the bill. It isn’t feature‑complete like Cloudflare, but it handles our CDN needs perfectly. It’s fast, reasonably priced, and European. For our super‑simple setup the migration took less than 2 hours.

CI/CD: GitHub Actions → Namespace

GitHub Actions served us well, but it stagnated. We needed nested virtualization for testing Firecracker stuff and better performance—GitHub wasn’t delivering.

We moved to Namespace for our runners. It’s a great product—also European, which is becoming a theme here. The performance improvements alone were worth the switch.

That said, we’ll probably migrate to completely self‑hosted runners eventually. The more we scale, the more control we want.

Data Persistence: The Big One

This was our most significant architectural change. Last year, I bragged about running everything in PostgreSQL with Timescale, including hundreds of millions of analytics rows. That worked great until our database hit 2 TB.

At 2 TB, PostgreSQL becomes hard to manage: stupid queries can take down prod, scaling is painful, and database pros start to laugh at you.

2 TB is probably nothing in the grand scheme of things! I am not a Postgres pro, and honestly wasn’t planning on becoming one. Additionally, the cost just started to hurt—especially considering that we want to do another 20× in 2026.

So we built something simpler: hot data lives in Postgres, then gets flushed to S3 as Parquet files. For queries we use DuckDB to read directly from S3. DuckDB is amazing.

The results surprised us. P99 latency actually improved. Why? Most queries are “give me the last 5 minutes of metrics” or “show me the last 500 logs”, and that’s all hot data sitting in Postgres. Historical queries hit S3, and DuckDB handles Parquet files like a champ; when not cached, they’re only slightly slower.

This architecture saves money, scales better, and plays to our strengths. We understand S3. We don’t understand running a 10 TB Postgres cluster. :D

The Pattern

Looking back at all these changes, there’s a clear pattern:

  • European everything.
    Customer pressure pushed us toward EU providers. This isn’t a technical decision; it’s a business reality when you grow beyond startups and indie hackers.

  • Self‑host at scale.
    SaaS products are great until your bill crosses a threshold. Then you have to do the math on whether your time is cheaper than their prices.

  • Simple beats clever.
    We didn’t build a fancy distributed database. We flush data to S3 and query it with DuckDB. It’s not sexy, but it works! (Actually, I think the simplicity is quite sexy, but not great for resume‑driven development.)

What’s Next

  • We’ll probably self‑host our CI runners soon.
  • We’re evaluating alternatives to AWS SES since, you know, European compliance.

The stack will keep evolving—that’s the nature of building infrastructure at scale. But the core philosophy stays the same: keep it simple, keep it maintainable, and only add complexity when the problem forces you to.

That’s where we’re at in 2026: twenty times bigger, a few hard lessons learned, and a stack that’s more European than ever.

Cheers,
Jonas
