Why I Don’t Want Docker to Be the Default Deploy Path

Published: May 2, 2026 at 08:19 PM EDT
5 min read
Source: Dev.to

Docker is good software. I want to say that up front because the internet has a special talent for turning every tooling opinion into a cage match.

I use Docker. I like Docker for databases, repeatable CI jobs, weird dependency stacks, internal services, and anything where I need a clean system image that behaves the same everywhere.
But I do not want Docker to be the default deploy path for every web app.

Sometimes I just want to put a small app on a VPS and have it run. That should feel boring.

The default path got heavier

A lot of modern deploy tutorials quietly turn this:

# Traditional steps
build app
copy files to server
start app
route traffic

into this:

# Docker‑centric steps
write a Dockerfile
pick a base image
handle build layers
create a registry
push an image
pull it on the server
wire up compose
configure networking
mount secrets
debug why the container exits

None of those steps are evil—they’re just a lot. For many apps, they are not the interesting part.

If I am deploying a side project, a small SaaS, a webhook handler, a dashboard, or a little internal tool, the app usually needs a few simple things:

  • build the code
  • start the process
  • serve HTTPS
  • restart when it crashes
  • keep secrets out of git
  • show logs when something breaks
  • maybe run a few apps on the same machine

That list does not automatically mean “containerize everything”.
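Most of that checklist is already covered by tools a stock Linux VPS ships with. A hypothetical sketch of it as a systemd user unit, assuming a Node app; the app name, runtime, and paths are placeholders, and the file is written to /tmp here only to show its shape:

```shell
# Hypothetical sketch only: the checklist above as a systemd user
# unit. "myapp", the Node runtime, and all paths are placeholders.
cat > /tmp/myapp.service <<'EOF'
[Unit]
Description=myapp web process

[Service]
WorkingDirectory=/srv/myapp/current
ExecStart=/usr/bin/node server.js
# restart when it crashes
Restart=always
# secrets stay out of git
EnvironmentFile=/etc/myapp/env

[Install]
WantedBy=default.target
EOF
# To use it: copy into ~/.config/systemd/user/,
# then: systemctl --user enable --now myapp
# logs: journalctl --user -u myapp
```

Routing and HTTPS still need a proxy in front, but restarts, secrets, and logs are handled by the init system that is already on the box.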

Containers solve real problems

This is not an anti‑Docker post. Docker solves problems that are absolutely real:

  • repeatable runtime
  • less mysterious system packages
  • service isolation
  • easier CI
  • a common artifact to pass around

Those are useful, but defaults matter. When Docker becomes the first step for every deploy, even tiny apps inherit container concerns before they have container problems. Developers then start thinking about image size, build cache, multi‑stage builds, registry auth, container networking, volume paths, base‑image updates, and whether the process can find the right port inside the container. All valid, just not always the first thing to address.

A VPS can run normal processes

A VPS is already a computer. It can run a process. Modern deployment advice often treats a server as useful only once it runs a container scheduler, but for many apps a direct process model is enough:

# Example process commands
bun run start
node server.js
./my-go-app
./target/release/my-rust-app

The hard parts are usually not “can Linux run this binary?” but everything around it:

  • how does traffic reach it?
  • how does HTTPS work?
  • how do I deploy a new version without downtime?
  • where do logs go?
  • how do secrets get injected?
  • how do I restart it?
  • how do I run multiple apps on one box?

These are deployment problems, not necessarily Docker problems.
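For the traffic and HTTPS questions specifically, a reverse proxy in front of the process is usually enough. A hedged sketch using Caddy, which obtains and renews TLS certificates automatically; the domain and port are placeholders, and the config is written to /tmp only to show its shape:

```shell
# Hypothetical sketch: route traffic + HTTPS to a plain process
# with a Caddyfile. "example.com" and port 3000 are placeholders.
cat > /tmp/Caddyfile <<'EOF'
example.com {
    # forward incoming HTTPS traffic to the app process
    reverse_proxy localhost:3000
}
EOF
# Then run: caddy run --config /tmp/Caddyfile
```

One file like this can also route several apps on the same box by adding more site blocks, one per domain.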

I want the PaaS feeling without giving up the server

I like the feel of a PaaS:

deploy

and then the app is live. I also like owning a small VPS—it’s cheap, flexible, and “boringly” reliable. I know where the app is running, I can SSH in, and I can inspect the machine. I’m not turning every weekend project into a cloud‑architecture diagram.

The ideal flow for me looks more like this:

tako deploy
  • Local machine builds the app.
  • The deploy tool copies the release to the server.
  • The server runs the app as a normal process.
  • A proxy routes requests to healthy instances.
  • HTTPS is handled.
  • Logs are available.
  • Secrets are managed outside random .env files.

No image registry needed. No Dockerfile unless I actually want one. No container‑networking puzzle for a two‑route web app.
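This is not Tako's actual implementation, but the flow above can be approximated with plain tools: a local build, a copy to the server, and a restart over SSH. The server address, app name, build command, and paths below are all placeholders:

```shell
#!/usr/bin/env bash
# Hypothetical deploy sketch, not Tako itself. SERVER, APP, the
# build command, and the /srv paths are placeholders.
set -euo pipefail

APP=myapp
SERVER=deploy@203.0.113.10

deploy() {
  # 1. local machine builds the app (placeholder build command)
  npm run build
  # 2. copy the release to the server
  rsync -az --delete dist/ "$SERVER:/srv/$APP/next/"
  # 3. swap the release symlink and restart the normal process
  ssh "$SERVER" "ln -sfn /srv/$APP/next /srv/$APP/current \
    && systemctl --user restart $APP"
}

# deploy   # run when the server is reachable
```

The proxy and HTTPS pieces stay on the server and do not change per deploy; the script only moves files and bounces the process.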

That is the direction I have been exploring with Tako, a small deployment tool for running apps on your own servers.

The boring path should be the happy path

There is a version of deployment that feels almost disappointingly plain:

tako init
tako servers add
tako deploy

That is the kind of boring I want—boring as in:

  • fewer concepts before the first deploy
  • fewer files created only for infrastructure
  • fewer moving parts for small apps
  • fewer places where a simple mistake hides
  • fewer “wait, is this a Docker problem or an app problem?” moments

The default path should optimize for getting the app online first. If the app later grows into container needs, reach for containers.

Docker should be an option, not the entrance fee

The web has a habit of turning powerful tools into mandatory tools. Docker is powerful and deserves its place, but I don’t think every deploy should start by asking the developer to write a container recipe.

For many projects, the best deploy path is still:

  1. build the app
  2. put it on a server
  3. run it
  4. route traffic to it
  5. make updates boring

That is not old‑fashioned; it’s just a good abstraction. The default deploy path should feel calm, like the server is helping you run your app—not demanding you become a platform engineer before lunch.
