From On-Premise Monolith to Scalable AWS Architecture: The Ticket Sales Case Study

Published: January 15, 2026 at 03:14 PM EST
4 min read
Source: Dev.to


Fredy Daniel Flores Lemus

The Problem Statement

Imagine the following scenario: a ticket‑sales application residing on a physical server (on‑premise).
Currently, the application is a monolith written in Node.js; it handles persistence in a MySQL database hosted on the same server, and stores static files (like event posters) directly on the local hard drive.

This architecture faces critical issues when tickets for a famous artist go on sale:

  • the server crashes due to traffic spikes,
  • the database gets locked, and
  • images load extremely slowly.

Monolith architecture

To address these root problems, the decision is made to migrate the application to AWS. This is where architecture planning begins, based on the following non‑functional requirements:

| Requirement | Description |
| --- | --- |
| High Availability (HA) | If a server or zone fails, the app must continue operating without interruption. |
| Scalability | The system must handle user load and absorb traffic spikes during major events on demand. |
| Persistence | Transaction integrity is vital; no sale can be lost. |
| Security | The database must be protected and isolated from public‑internet access. |

The Challenge

We need to structure the solution by addressing four fundamental pillars:

  1. Compute – Where do we run the application and how do we manage traffic?
  2. Database – Which service do we use for MySQL and how do we optimize reads without saturating the system?
  3. Static Storage – How do we serve poster images to ensure fast loading?
  4. Network & Security – How do we organize the network (VPC) to protect data while allowing user access to the web?

The Architecture Proposal

Compute

Run the application on EC2 instances managed by an Auto Scaling Group.
An Application Load Balancer (ALB) sits in front to distribute requests among instances that are spread across different Availability Zones (AZs), ensuring high availability.
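The idea can be sketched in a few lines of Node.js. The ALB's real routing is far more sophisticated (health checks, connection draining, multiple algorithms), so treat this as a conceptual round‑robin sketch; the instance IDs and AZ names are made up for illustration.

```javascript
// Conceptual sketch of a load balancer rotating requests across
// instances spread over Availability Zones. IDs/AZs are illustrative.
const instances = [
  { id: "i-0a1", az: "us-east-1a" },
  { id: "i-0b2", az: "us-east-1b" },
  { id: "i-0c3", az: "us-east-1c" },
];

let next = 0;
function route(request) {
  // Pick the next instance in rotation (round-robin).
  const target = instances[next % instances.length];
  next += 1;
  return { ...request, handledBy: target.id, az: target.az };
}

// Three consecutive requests land in three different AZs.
const results = ["r1", "r2", "r3"].map((id) => route({ id }));
console.log(results.map((r) => r.az));
```

Because the instances sit in different AZs, losing one zone still leaves the rotation with healthy targets, which is exactly the HA property the requirement asks for.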

Database

Use the managed service Amazon RDS for MySQL.
To optimize performance we will evaluate two strategies:

  • Read Replicas – for scaling read‑heavy workloads.
  • Amazon ElastiCache – to cache frequent queries and reduce load on the primary DB.

(We will decide the best option after testing.)
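To make the ElastiCache option concrete, here is a minimal cache‑aside sketch. Two in‑process `Map`s stand in for ElastiCache and the RDS primary; in production you would replace them with a Redis client and a MySQL driver, and the key/record names below are purely illustrative.

```javascript
// Cache-aside sketch: check the cache first, fall back to the
// database on a miss, and populate the cache for the next reader.
const cache = new Map();              // stand-in for ElastiCache
const database = new Map([            // stand-in for RDS (MySQL)
  ["event:42", { artist: "Famous Artist", seatsLeft: 500 }],
]);

let dbReads = 0;
function getEvent(key) {
  if (cache.has(key)) return cache.get(key); // cache hit: no DB load
  dbReads += 1;                              // cache miss: hit primary
  const row = database.get(key);
  cache.set(key, row);                       // warm the cache
  return row;
}

getEvent("event:42"); // miss -> reads the database
getEvent("event:42"); // hit  -> served from cache
console.log(dbReads); // 1
```

Note how only the first read touches the primary; every subsequent read of a hot event page is absorbed by the cache, which is precisely the load reduction we are after during an on‑sale spike.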

Static Content

Migrate poster images to an Amazon S3 bucket and serve them through Amazon CloudFront (a CDN) to cache content and drastically reduce load times globally.
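In practice this migration usually means rewriting the image URLs the app returns, from local file paths to the CloudFront distribution. A small sketch, assuming a made‑up distribution domain and an upload path layout like the one described above:

```javascript
// Rewrite a local poster path to its CloudFront URL after the S3
// migration. The distribution domain below is a placeholder.
const CDN_DOMAIN = "https://d1234abcd.cloudfront.net";

function posterUrl(localPath) {
  // "/var/app/uploads/posters/show.jpg" -> ".../posters/show.jpg"
  const fileName = localPath.split("/").pop();
  return `${CDN_DOMAIN}/posters/${fileName}`;
}

console.log(posterUrl("/var/app/uploads/posters/show.jpg"));
```

The app servers no longer serve image bytes at all; CloudFront edge locations cache the objects from S3 close to the users, which is what removes the slow image loads.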

Network & Security

Implement a three‑tier architecture within a VPC:

| Tier | Placement | Purpose |
| --- | --- | --- |
| Load Balancer | Public subnet | Entry point for internet traffic |
| Application servers | Private subnet | Run the Node.js app |
| Database | Private subnet | Host the RDS instance |

Use Security Groups to strictly restrict traffic between layers (e.g., only the load balancer can reach the app servers, and only the app servers can reach the database).

AWS three‑tier architecture

AWS three‑tier architecture – Networking

Deep Dive: Distributed‑Systems Challenges

The architecture above meets the infrastructure requirements, but moving from a monolith to a distributed environment exposes two critical logical problems.

1. The User Session

The original application stored the session in the server’s RAM. In the new architecture, the combination of Auto Scaling + Load Balancer means that a request can be routed to a different instance than the one that created the session, causing the user to be logged out unexpectedly.

Losing session

How do we solve this?
Convert the application to be stateless. Instead of storing the session locally, externalize it to Amazon ElastiCache (Redis or Memcached). Being an in‑memory data store, it offers sub‑millisecond latency and ensures that even if a user’s request lands on a different instance, their session remains centrally available.
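The pattern can be sketched with a shared store standing in for ElastiCache. Here a single in‑process `Map` plays the role of Redis; in production each instance would instead talk to the ElastiCache endpoint through a Redis client (e.g. ioredis), and the instance names are invented for the example.

```javascript
// Stateless-session sketch: session state lives in a shared external
// store, never in an individual instance's RAM.
const sessionStore = new Map(); // stand-in for ElastiCache (Redis)

function appInstance(name) {
  return {
    login(sessionId, user) {
      // Write the session to the external store on login.
      sessionStore.set(sessionId, { user, instance: name });
    },
    whoAmI(sessionId) {
      // Any instance can read any session.
      const session = sessionStore.get(sessionId);
      return session ? session.user : null;
    },
  };
}

const instanceA = appInstance("i-aaa");
const instanceB = appInstance("i-bbb");

instanceA.login("sess-123", "alice");      // login handled by A
console.log(instanceB.whoAmI("sess-123")); // B still recognizes alice
```

Because no instance owns the session, the load balancer is free to route each request anywhere, and Auto Scaling can terminate instances without logging anyone out.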

Stateless session workflow


Ticket purchase flow

2. Data Consistency (Race Condition)

Here we revisit the debate between using Read Replicas or ElastiCache.

User A buys a ticket.
Milliseconds later, User B checks that same seat. If we use Read Replicas, there is a small delay (replication lag) before User A’s purchase is reflected in all copies. This could lead User B to attempt purchasing an already‑sold seat, causing an error or, worse, over‑booking.

Race condition workflow

How do we handle immediate availability without saturating the database?
The ideal solution is ElastiCache (Redis). Read Replicas are not ideal for real‑time stock control due to the aforementioned lag. Instead, Redis allows us to leverage its atomicity. Since Redis processes operations sequentially, it acts as a perfect control mechanism: if multiple purchase requests for the same seat arrive simultaneously, Redis queues them and processes them one by one, allowing only the first transaction to succeed. This solves the race condition and offloads read traffic from the primary database.
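The mechanism boils down to a "set if not exists" check on a per‑seat key (what Redis exposes as `SETNX`, or `SET ... NX`). In this sketch Node's single‑threaded event loop plays the role of Redis processing commands sequentially; seat and user IDs are invented for the example.

```javascript
// Race-condition sketch: SETNX-style seat locking. The first writer
// for a seat key wins; everyone else is rejected.
const seatLocks = new Map(); // stand-in for per-seat Redis keys

function reserveSeat(seatId, userId) {
  if (seatLocks.has(seatId)) return false; // seat already taken
  seatLocks.set(seatId, userId);           // first writer wins
  return true;
}

// User A and User B race for the same seat.
const aGotIt = reserveSeat("A-12", "userA");
const bGotIt = reserveSeat("A-12", "userB");
console.log(aGotIt, bGotIt); // true false
```

Only after a reservation succeeds in Redis would the app write the confirmed sale to RDS, so the primary database sees one transaction per seat instead of a stampede of conflicting writes.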

Concurrency well handled

Conclusion

Migrating from an on‑premises environment to the cloud isn’t just about moving servers (Lift & Shift); it’s about rethinking how our application handles state and concurrency.

By integrating Amazon ElastiCache (Redis) into our architecture, we didn’t just gain speed in reads—we solved two of the most complex problems in distributed systems:

  1. Session management in stateless applications.
  2. Data integrity during race conditions.

With this architecture, we’ve moved from a server that collapses under the demand of a famous artist to an elastic, robust infrastructure ready to scale automatically according to demand.
