Part 1: The Default Way - Putting an App on a Server (And Why It Breaks)

Published: December 22, 2025 at 06:09 PM EST
Source: Dev.to

Series: From “Just Put It on a Server” to Production DevOps
Reading time: 12 minutes
Level: Beginner‑friendly

Introduction

Every developer has a moment when they think:

“I’ve built this amazing app locally. Now I just need to put it on a server somewhere.”

Sounds simple, right?

You rent a server, upload your code, run npm start, and boom—you’re live.

This article is about what happens next.

We’ll deploy a real application—the Sales Signal Processing Platform (SSPP)—the old‑school way. No Docker. No Kubernetes. No CI/CD magic. Just you, a Linux server, and SSH.

By the end you’ll understand why modern DevOps tools exist—not because someone said they’re “best practices,” but because you’ll feel the pain they solve.

What We’re Building

The Sales Signal Processing Platform is an event‑driven system that:

  • Accepts events via a REST API (email sent, page viewed, meeting booked)
  • Queues them in Redis for async processing
  • Calculates signal scores (how “hot” a sales lead is)
  • Stores data in PostgreSQL
  • Indexes in Elasticsearch for search/analytics
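
To make this concrete, here is the kind of request a client might send. The endpoint path and field names below are illustrative guesses, not the actual SSPP schema:

# A hypothetical "page viewed" event posted to the API
curl -X POST http://localhost:3000/events \
  -H "Content-Type: application/json" \
  -d '{"type": "page_viewed", "leadId": "lead_123", "url": "/pricing", "timestamp": "2025-12-22T18:09:00Z"}'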

Why this project?

It’s not a toy. It’s a real‑world SaaS pattern used by companies like Segment, Mixpanel, and sales‑intelligence platforms.

Architecture (Simple Version)

flowchart LR
    A["Client (HTTP POST)"] --> B["API Service (NestJS)"]
    B --> C[Redis Queue]
    C --> D["Worker (Node.js)"]
    D --> E[PostgreSQL]
    D --> F[Redis]
    D --> G[Elasticsearch]

If you prefer a plain‑text diagram:

┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│   Client    │─────▶│ API Service │─────▶│ Redis Queue │
│ (HTTP POST) │      │  (NestJS)   │      │             │
└─────────────┘      └─────────────┘      └──────┬──────┘
                                                 │
                                                 ▼
                                          ┌─────────────┐
                                          │   Worker    │
                                          │  (Node.js)  │
                                          └──────┬──────┘
                                                 │
                ┌────────────────────────────────┼─────────────────────────┐
                ▼                                ▼                         ▼
          ┌──────────┐                 ┌──────────┐             ┌─────────────┐
          │PostgreSQL│                 │  Redis   │             │Elasticsearch│
          └──────────┘                 └──────────┘             └─────────────┘

Tech Stack

Component   Technology
API         NestJS (TypeScript)
Worker      Node.js (TypeScript)
DB          PostgreSQL 15
Queue       Redis 7
Search      Elasticsearch 8

Step 1: Rent a Server (The Easy Part)

We’re using Linode (now Akamai Cloud) because:

  • Simple pricing ($5‑$10 / month gets you started)
  • No vendor lock‑in (real VMs, not proprietary services)
  • Shows you understand infrastructure fundamentals

Create a Linode Instance

  1. Sign up at https://www.linode.com/.
  2. Click Create → Linode and configure the instance:
     Setting        Value
     Distribution   Ubuntu 22.04 LTS
     Plan           Shared CPU – Nanode 1 GB ($5/mo)
     Region         Choose the one closest to you
     Label          sspp-server-01
  3. Set a root password (store it securely).
  4. Boot the instance.

What you now have

  • A real computer running Linux in a datacenter.
  • A public IP address (e.g., 45.79.123.45).
  • SSH access on port 22.

Step 2: SSH into Your Server

ssh root@45.79.123.45

The first time you connect you’ll see:

The authenticity of host '45.79.123.45' can't be established.
ED25519 key fingerprint is SHA256:...
Are you sure you want to continue connecting (yes/no)?

Type yes. This stores the server’s fingerprint so you can verify it on subsequent connections.

You are now controlling a computer 500+ miles away.

pwd
# /root

You’re in the home directory of the root user—the superuser who can do anything.
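
One optional but worthwhile step before moving on: switch to key-based SSH so you are not typing the root password on every login (and can later disable password logins entirely). A minimal version, run from your local machine, looks like this:

# On your LOCAL machine: generate a key pair if you don't already have one
ssh-keygen -t ed25519 -C "you@example.com"

# Copy the public key to the server (asks for the root password one last time)
ssh-copy-id root@45.79.123.45

# From now on, logins use the key
ssh root@45.79.123.45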

Step 3: Install Dependencies (Manual Method)

The application requires the following components:

Component        Purpose
Node.js          JavaScript/TypeScript runtime
PostgreSQL       Relational database
Redis            Queue & cache
Elasticsearch    Search & analytics engine

1. Install Node.js (v18)

# Update the package index
sudo apt update

# Add the NodeSource repository for Node.js 18
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo bash -

# Install Node.js and npm
sudo apt install -y nodejs

# Verify the installation
node --version    # e.g. v18.20.0
npm --version     # e.g. 9.8.1

2. Install PostgreSQL

# Install PostgreSQL and the contrib package
sudo apt install -y postgresql postgresql-contrib

# Start the service and enable it at boot
sudo systemctl start postgresql
sudo systemctl enable postgresql

Create a database and user

# Switch to the postgres OS user
sudo -u postgres psql <<'SQL'
-- Create a new role (change the role name and password to your own values)
CREATE ROLE myapp_user WITH LOGIN PASSWORD 'strong_password';

-- Create a new database owned by the role
CREATE DATABASE myapp_db OWNER myapp_user;

-- Grant all privileges on the database to the role
GRANT ALL PRIVILEGES ON DATABASE myapp_db TO myapp_user;
SQL
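
To confirm the role and database actually work, connect as the new user over TCP and run a trivial query (it will prompt for the password you set above):

# Connect as the application user and check that queries run
psql -h localhost -U myapp_user -d myapp_db -c 'SELECT 1;'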

3. Install Redis

# Install Redis server
sudo apt install -y redis-server

# Start the service and enable it at boot
sudo systemctl start redis-server
sudo systemctl enable redis-server

# Verify that Redis is running
redis-cli ping   # should return PONG

4. Install Elasticsearch (v8)

# Import the Elasticsearch GPG key (apt-key is deprecated on Ubuntu 22.04, so store it in a keyring file)
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | \
    sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg

# Add the Elasticsearch repository, signed by that key
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | \
    sudo tee /etc/apt/sources.list.d/elastic-8.x.list

# Update the package index and install Elasticsearch
sudo apt update
sudo apt install -y elasticsearch

# Start the service and enable it at boot
sudo systemctl start elasticsearch
sudo systemctl enable elasticsearch

# Wait a few seconds for the node to start, then test the endpoint
sleep 30
curl -s http://localhost:9200 | jq .

Tip: If jq is not installed, you can view the raw JSON with curl -s http://localhost:9200.
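
One caveat: on Debian/Ubuntu package installs, Elasticsearch 8 enables security (TLS plus authentication) by default, so the plain http:// request above may be refused. If that happens, reset the built-in elastic user's password and query the node over HTTPS instead (-k skips certificate verification, which is acceptable for a quick local check):

# Reset (and print) the password for the built-in 'elastic' user
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic

# Query the node over HTTPS with basic auth
curl -k -u elastic:<password> https://localhost:9200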

5. Quick Recap

Component        Install Command(s)                              Service Management
Node.js          apt install -y nodejs                           N/A
PostgreSQL       apt install -y postgresql postgresql-contrib    systemctl start/enable postgresql
Redis            apt install -y redis-server                     systemctl start/enable redis-server
Elasticsearch    apt install -y elasticsearch                    systemctl start/enable elasticsearch

Estimated time: 15–20 minutes.

Next steps:

  1. Cloning the codebase
  2. Configuring environment variables
  3. Running the API server
  4. Starting the background worker

Step 4: Deploy Your Application

Now let’s get our code onto the server.

Clone the Repository

cd /opt
git clone https://github.com/daviesbrown/sspp.git
cd sspp

Install API Dependencies

cd /opt/sspp/services/api
npm install

This takes 2‑5 minutes. Watch the terminal spam scroll by.

Build the API

npm run build

TypeScript compiles to JavaScript in the dist/ folder.

Create Environment Variables

Create a .env file in the api directory with the required variables. Below is a template you can copy and edit to suit your environment:

# .env – API configuration
PORT=3000
DB_HOST=localhost
DB_PORT=5432
DB_USER=your_db_user
DB_PASSWORD=your_db_password
DB_NAME=your_db_name
JWT_SECRET=your_jwt_secret
# Add any additional variables required by the application
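
The API also needs to reach Redis (and possibly Elasticsearch, depending on how the project is wired), so expect a few more connection settings. The variable names below are assumptions for illustration; check the project's configuration module for the real keys:

# Redis (queue): names are assumed, match them to the app's config
REDIS_HOST=localhost
REDIS_PORT=6379

# Elasticsearch: assumed name, only needed if the API talks to it directly
ELASTICSEARCH_NODE=http://localhost:9200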

Note: In the future we’ll replace this setup with Docker and Kubernetes, but for now we need the traditional approach to understand how the application runs without containers.
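
Run the API

With dependencies installed, the build in dist/, and a .env in place, you can finally start the service. For a NestJS project the conventional production start is npm run start:prod, which boils down to running the compiled entry point; the script name is an assumption here, so check package.json before copying blindly:

# Start the API in the foreground (NestJS convention; confirm the script in package.json)
npm run start:prod

# ...or run the compiled entry point directly
node dist/main.js

# From a second SSH session, check that something is listening on the port from .env
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000

Any HTTP status (even a 404) means the process is up. Notice that it runs in the foreground of your SSH session: close the terminal and the process dies with it, which is exactly the pain the next steps poke at.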

Try It Yourself

Don’t just read—do it:

  1. Spin up a server – Linode, DigitalOcean Droplet, AWS EC2, etc.
  2. Follow this guide exactly and get the app running.

Test Its Resilience

Try to kill the process in creative ways:

  • Kill the process directly: kill -9 <pid>
  • Disconnect the SSH session.
  • Reboot the server.

Observe how often the service stays dead.
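
A quick way to check after each experiment (assuming the API was started in the foreground on port 3000 as above):

# Is the Node process still there? (the [n] trick stops grep from matching itself)
ps aux | grep "[n]ode"

# Does the API still answer?
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000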

Bonus challenge: Deploy the Worker service as well (hint: it’s a background service with no HTTP endpoint).
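
If you attempt the bonus challenge without a process manager, the classic old-school trick is nohup plus a log file. The worker's path and entry point below are guesses based on the repo layout shown earlier, so adjust to match what you find:

cd /opt/sspp/services/worker      # assumed path; check the repository
npm install && npm run build

# Run it detached from the SSH session, appending output to a log file
nohup node dist/main.js >> /var/log/sspp-worker.log 2>&1 &
echo $!   # note the PID so you can find and kill it later

Spoiler: it still will not survive a reboot, which is one of the reasons Part 2 exists.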

Discussion

What have your “just put it on a server” horror stories been?

Drop a comment or open an issue on GitHub.

Next: Part 2: Process Managers – Keeping Your App Alive with PM2

About the Author

I’m building this series to show real‑world DevOps thinking for my Proton.ai application. If you’re hiring for DevOps/Platform roles and want someone who understands infrastructure (not just follows tutorials), let’s talk.
