End-to-End Deployment of a Two-Tier Application Using Docker, Kubernetes, Helm, and AWS
Source: Dev.to
Table of Contents
- What Is a Two‑Tier Application?
- Deployment Approach
- Preparing the EC2 Instance
- Installing Docker on Ubuntu
- Creating the Dockerfile
- Building & Running the Containers
- Docker Networking
- Pushing the Image to Docker Hub
- Docker Compose
- Accessing the Application
- Key Messages
- Wrapping Up
- Single Command & Next Steps
What Is a Two‑Tier Application?
A Two‑Tier Application consists of:
| Tier | Description |
|---|---|
| Application Tier (Backend) | • Flask‑based service • Handles business logic, API requests, and communication with the database |
| Database Tier | • MySQL database • Stores and manages application data, processes queries from the backend |
The backend defines how the app behaves; the database ensures data persistence and consistency.
Deployment Approach
To keep things simple and structured, the deployment is performed in two stages:
- Dockerfile (Single‑Container Focus) – Containerize the Flask backend.
- Docker Compose (Multi‑Container Setup) – Run the Flask backend and the MySQL database together.
This mirrors a real‑world development workflow and serves as a foundation for later migration to Kubernetes.
Preparing the EC2 Instance
- Launch an Ubuntu EC2 instance (e.g., t2.micro) and name it 2-tier-App-DEPLOYMENT.
- Attach a private key for SSH access.
- Use the default security‑group settings for now; we’ll adjust the inbound rules later (allow ports 22 and 5000).
# Example SSH command
ssh -i /path/to/your-key.pem ubuntu@<public-ip>
Installing Docker on Ubuntu
Run the following commands on the instance:
# Update package index
sudo apt-get update
# Install prerequisite packages
sudo apt-get install -y \
ca-certificates \
curl \
gnupg \
lsb-release
# Add Docker’s official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Set up the stable repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
# Verify installation
docker --version
Tip: If you encounter Permission denied while trying to connect to the Docker daemon socket, add your user to the docker group:
sudo usermod -aG docker $USER
newgrp docker # refresh group membership in the current session
Creating the Dockerfile
- Clone the repository:
git clone https://github.com/SAFI-ULLAHSAFEER/two-tier-flask-app.git
cd two-tier-flask-app
- Remove any existing Dockerfile and create a fresh one:
rm -f Dockerfile
cat > Dockerfile   # type or paste the Dockerfile contents, then press Ctrl+D to save
The Flask app listens on port 5000, so the image should expose that port.
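The Dockerfile contents were cut off in this copy of the article. As a sketch only, a minimal Dockerfile for a Flask backend listening on port 5000 might look like the following; the base image, system packages, and entrypoint are assumptions, so check the repository's app.py and requirements.txt before using it:

```dockerfile
# Assumed base image; pick the Python version your app targets
FROM python:3.9-slim

WORKDIR /app

# mysqlclient needs a compiler and MySQL client headers to build (assumption)
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc default-libmysqlclient-dev pkg-config \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# The Flask app is assumed to listen on 5000 with app.py as the entrypoint
EXPOSE 5000
CMD ["python", "app.py"]
```

Copying requirements.txt before the rest of the code means dependency installation is only re-run when requirements change, which keeps rebuilds fast.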
Building & Running the Containers
With the Dockerfile in place, the next stage is to build the Flask image and start the MySQL and Flask containers, then wire them together over a shared network.
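A sketch of this stage follows; the container names, credentials, and environment variables are illustrative choices made to match the Compose file later in this article, not commands from the original post:

```shell
# Build the Flask image from the Dockerfile in the repo root
docker build -t two-tier-flask-app .

# Start MySQL first (credentials here are placeholders)
docker run -d --name mysql-db \
  -e MYSQL_ROOT_PASSWORD=RootPass123 \
  -e MYSQL_DATABASE=two_tier \
  mysql:8.0

# Start the Flask backend, publishing port 5000 on the host
docker run -d --name flask-app \
  -e DB_HOST=mysql-db -e DB_NAME=two_tier \
  -p 5000:5000 \
  two-tier-flask-app

# Confirm both containers are running
docker ps
```

At this point the containers exist but are on the default bridge network, which is why the next section creates a dedicated network for them.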
Docker Networking
Instead of using --link, create an isolated network:
docker network create twotier
docker network connect twotier mysql-db
docker network connect twotier flask-app
You can verify the network membership:
docker network inspect twotier
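Alternatively, you can attach each container to the network at launch instead of connecting it afterwards. This is a sketch reusing the container and image names assumed earlier; on a user-defined network, containers resolve each other by container name:

```shell
docker network create twotier

# --network attaches the container at start, so flask-app can reach
# the database at the hostname mysql-db with no extra steps
docker run -d --name mysql-db --network twotier \
  -e MYSQL_ROOT_PASSWORD=RootPass123 -e MYSQL_DATABASE=two_tier \
  mysql:8.0
docker run -d --name flask-app --network twotier \
  -e DB_HOST=mysql-db -p 5000:5000 \
  two-tier-flask-app
```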
Pushing the Image to Docker Hub
# Log in (you’ll be prompted for Docker Hub credentials)
docker login
# Tag the image for your repository (replace <your-repo> with your Docker Hub repo)
docker tag two-tier-flask-app:latest <your-repo>/two-tier-flask-app:latest
# Push the image
docker push <your-repo>/two-tier-flask-app:latest
The image is now available on Docker Hub (publicly, if the repository is public) and can be pulled on any host.
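On any other Docker host, pulling and running the published image is then a single step (the repository name is a placeholder, as above):

```shell
docker pull <your-repo>/two-tier-flask-app:latest
docker run -d -p 5000:5000 <your-repo>/two-tier-flask-app:latest
```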
Docker Compose
Why Docker Compose?
Docker Compose lets you define both containers (Flask & MySQL) in a single YAML file and start them with one command.
Installing Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/v2.27.0/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
Note: Newer Docker Engine installations also ship Compose as a CLI plugin, invoked as docker compose (no hyphen), which is the form used later in this guide. If you only have the standalone binary installed above, substitute docker-compose for docker compose in those commands.
docker-compose.yml Explained
Create docker-compose.yml in the project root:
version: "3.9"

services:
  db:
    image: mysql:8.0
    container_name: mysql-db
    environment:
      MYSQL_ROOT_PASSWORD: RootPass123
      MYSQL_DATABASE: two_tier
      MYSQL_USER: appuser
      MYSQL_PASSWORD: AppPass123
    volumes:
      - mysql-data:/var/lib/mysql
      - ./message.sql:/docker-entrypoint-initdb.d/message.sql  # optional seed data
    ports:
      - "3306:3306"

  web:
    build: .
    container_name: flask-app
    depends_on:
      - db
    environment:
      DB_HOST: db
      DB_USER: appuser
      DB_PASSWORD: AppPass123
      DB_NAME: two_tier
    ports:
      - "5000:5000"

volumes:
  mysql-data:
Key points
| Directive | Reason |
|---|---|
| depends_on | Starts the MySQL container before Flask. Note that it only orders startup; it does not wait for MySQL to be ready to accept connections, so the app may still need retry logic or a healthcheck. |
| volumes (named) | Persists MySQL data across container restarts. |
| ./message.sql | Optional script to pre‑populate the database on first start. |
| build: . | Uses the Dockerfile in the current directory to build the Flask image. |
Tip: Use an online YAML formatter or yamllint to ensure proper indentation.
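Because depends_on only controls start order, Compose can additionally be told to wait until MySQL actually accepts connections. A sketch using a healthcheck follows; the mysqladmin probe, password, and intervals are illustrative and would be merged into the service definitions above:

```yaml
services:
  db:
    image: mysql:8.0
    healthcheck:
      # mysqladmin ping succeeds once the server accepts connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-pRootPass123"]
      interval: 5s
      timeout: 3s
      retries: 10
  web:
    depends_on:
      db:
        condition: service_healthy  # wait for the healthcheck to pass
```

With this form, Compose delays starting the web service until the db healthcheck reports healthy, which removes most startup connection errors.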
Running the Application with Compose
# Stop any stray containers from previous steps
docker compose down
# Build images and start services
docker compose up -d
Check the status:
docker compose ps
All containers are now running, connected to the same network, and the Flask app is reachable at http://<host‑ip>:5000.
Accessing the Application
- Security Group – Ensure inbound rules allow TCP traffic on port 5000 (and 22 for SSH).
- Open a browser and navigate to:
http://<host-ip>:5000
You should see the Flask application’s default page or API response.
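You can also check the endpoint from the instance itself without a browser; a quick probe (run on the EC2 host, so no security-group rule is involved):

```shell
# -i prints the response headers, so a 200 status confirms the app is up
curl -i http://localhost:5000
```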
Key Messages
AWS Cloud Club MUST
AWS Student Community Day Mirpur 2025
These strings are placeholders for any branding or event‑specific messages you may want to display within the application or documentation.
Wrapping Up
We have:
- Containerized a two‑tier Flask + MySQL app using a Dockerfile.
- Built and run the containers individually, then networked them.
- Pushed the Flask image to Docker Hub for easy distribution.
- Orchestrated the full stack with Docker Compose, handling startup order, persistent storage, and environment configuration.
This workflow mirrors a production‑grade DevOps pipeline and provides a solid stepping stone toward Kubernetes, Helm, and full‑scale AWS deployments. Happy containerizing!
Single Command & Next Steps
With this workflow, a single docker compose up -d brings up the whole stack, and you can share, run, and scale your applications anywhere, from your local machine to the cloud. 🚀
Next Steps
- Explore Kubernetes and Helm to orchestrate larger applications.
- Deploy them on cloud platforms like AWS to take your containerization skills to the next level!
Keep experimenting, keep sharing, and let’s build smarter, faster, and scalable applications together.