[Paper] Cost-Performance Analysis of Cloud-Based Retail Point-of-Sale Systems: A Comparative Study of Google Cloud Platform and Microsoft Azure

Published: January 1, 2026 at 08:54 PM EST
3 min read
Source: arXiv

Overview

Retailers are racing to move their point‑of‑sale (POS) software to the cloud, but choosing the right provider can feel like a gamble. In this paper, Ravi Teja Pagidoju presents a repeatable, code‑driven benchmark that pits Google Cloud Platform (GCP) against Microsoft Azure on real‑world POS workloads. By using free‑tier resources and open‑source tooling, the study delivers concrete latency, throughput, and cost numbers that small merchants and developers can actually act on.

Key Contributions

  • Open, reproducible benchmarking framework for POS workloads (all scripts and data are publicly available).
  • Side‑by‑side performance comparison of GCP and Azure using identical API endpoints and traffic patterns.
  • Cost‑performance analysis that translates raw resource usage into real‑world operational expenses despite the free‑tier environment.
  • Architectural deep‑dive explaining why each cloud provider behaves the way it does for POS‑specific workloads.
  • Practical decision‑making guide for retailers evaluating cloud‑based POS deployments.

Methodology

  1. Workload definition – The author modeled a typical retail POS transaction (product lookup, price calculation, inventory check, and receipt generation) as a set of RESTful API calls.
  2. Benchmark harness – An open‑source Python suite (based on locust and requests) generated a controllable stream of concurrent requests, measuring response latency, success rate, and throughput.
  3. Deployment – Identical micro‑service stacks (Docker containers behind a load balancer) were provisioned on GCP’s Cloud Run (free tier) and Azure’s Container Apps (free tier).
  4. Metrics collection – Cloud‑native monitoring (Stackdriver, Azure Monitor) captured CPU, memory, and network usage; the benchmark suite logged per‑request latency.
  5. Cost estimation – Since free‑tier usage incurs no billing, the author multiplied observed resource consumption by the current public hourly rates for equivalent paid instances, yielding an “effective cost per transaction.”
  6. Reproducibility – All configuration files, scripts, and raw CSV outputs are version‑controlled and referenced in the paper’s appendix.
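The harness described in steps 2–4 can be sketched in a self‑contained way. The paper's actual suite drives live endpoints with locust and requests; here a stubbed transaction function stands in for the four POS API calls, and the request counts and concurrency level are illustrative assumptions:

```python
# Minimal sketch of the benchmark harness idea: fire concurrent POS-style
# "transactions", record per-request latency, and report tail latency and
# throughput. stub_transaction is a stand-in for the real HTTP calls
# (product lookup -> price -> inventory check -> receipt).
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def stub_transaction():
    """Simulate one POS transaction and return its latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for network + service time
    return time.perf_counter() - start

def run_benchmark(n_requests=200, concurrency=20):
    """Run n_requests transactions at the given concurrency level."""
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(lambda _: stub_transaction(),
                                  range(n_requests)))
    elapsed = time.perf_counter() - t0
    return {
        # statistics.quantiles with n=100 yields 99 cut points;
        # index 98 is the 99th percentile.
        "p99_ms": statistics.quantiles(latencies, n=100)[98] * 1000,
        "throughput_rps": n_requests / elapsed,
    }
```

Swapping the stub for real requests calls against each provider's endpoint reproduces the study's measurement loop.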

Results & Findings

| Metric | GCP | Azure |
| --- | --- | --- |
| Baseline latency (99th pct) | 112 ms | 146 ms (≈ 23 % slower) |
| Steady‑state throughput | 1,200 req/s | 1,180 req/s (≈ 1.7 % lower) |
| Cost per 1 M transactions | $0.87 | $0.50 (≈ 71.9 % cheaper) |
| Scalability (max load before 5 % error) | 2,500 req/s | 2,300 req/s |

  • Speed advantage: GCP’s serverless offering (Cloud Run) delivered noticeably lower response times under light to moderate load, thanks to faster cold‑start times and more aggressive request routing.
  • Cost advantage: Azure’s pricing model for the equivalent container service (pay‑as‑you‑go CPU‑seconds) resulted in a substantially lower effective cost when the system operated at a steady, high‑volume state.
  • Architectural factors: Differences stemmed from network stack implementations, default request timeouts, and the way each platform auto‑scales containers (GCP uses per‑request concurrency, Azure scales more conservatively).
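
The "effective cost per transaction" behind the cost figures follows the estimation step in the methodology: observed free‑tier resource usage multiplied by public paid‑tier rates. A minimal sketch, where the rate parameters are placeholders rather than actual GCP or Azure prices:

```python
# Sketch of the effective-cost estimate: convert observed CPU-seconds and
# memory GB-seconds into dollars per 1M transactions using paid-tier rates.
# All rates here are illustrative placeholders, not real provider pricing.
def cost_per_million_transactions(cpu_seconds, mem_gb_seconds,
                                  n_transactions, cpu_rate, mem_rate):
    """Return estimated dollars per one million transactions."""
    total_cost = cpu_seconds * cpu_rate + mem_gb_seconds * mem_rate
    return total_cost / n_transactions * 1_000_000
```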

Practical Implications

  • Small retailers can prototype POS systems on either platform without upfront spend, but should consider GCP if ultra‑low latency (e.g., in‑store checkout) is a priority.
  • High‑volume chains that run dozens of stores simultaneously may achieve meaningful savings by leaning toward Azure, especially when workloads stay near steady‑state levels.
  • Developers gain a ready‑made benchmarking suite to test custom POS extensions (e.g., loyalty‑program APIs) across clouds before committing to a vendor.
  • Ops teams can adopt the cost‑estimation technique to forecast monthly cloud bills based on observed traffic patterns, turning “free‑tier” experiments into reliable budgeting tools.
  • Vendor negotiations: Armed with concrete latency and cost numbers, retailers can negotiate better SLAs or request specific instance types that align with the study’s findings.
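
The budgeting idea above reduces to simple arithmetic once an effective cost per million transactions is known. In this sketch, the 50 req/s traffic level is an illustrative assumption, while $0.50 per million transactions is the Azure figure from the results table:

```python
# Forecast a monthly cloud bill from average traffic and an effective
# cost-per-million-transactions figure (as estimated from benchmarks).
def forecast_monthly_bill(avg_rps, cost_per_million):
    """Return estimated dollars per 30-day month at the given request rate."""
    transactions = avg_rps * 60 * 60 * 24 * 30  # requests in a 30-day month
    return transactions / 1_000_000 * cost_per_million

# e.g. a store averaging 50 req/s at the study's Azure rate of $0.50/1M
bill = forecast_monthly_bill(50, 0.50)  # ≈ $64.80 per month
```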

Limitations & Future Work

  • Free‑tier constraints: The experiments were limited to the resource caps of free tiers, which may not fully reflect performance under larger, production‑grade clusters.
  • Single‑region focus: Benchmarks were run in North American regions; latency and pricing can vary globally.
  • Workload scope: The study modeled a relatively simple POS transaction; more complex scenarios (e.g., real‑time inventory sync across multiple stores) were not evaluated.
  • Future directions suggested include testing multi‑region deployments, incorporating other cloud providers (AWS, IBM Cloud), and extending the benchmark to cover end‑to‑end checkout flows that involve payment gateways and third‑party services.

Authors

  • Ravi Teja Pagidoju

Paper Information

  • arXiv ID: 2601.00530v1
  • Categories: cs.DC, cs.SE
  • Published: January 2, 2026