AWS SRE's First Day with GCP: 7 Surprising Differences

Published: December 13, 2025, 09:00 PM EST
5 min read
Source: Dev.to

Introduction

I’ve worked with AWS for over 10 years across different employers, building and maintaining production infrastructure at scale. Despite hearing about GCP for years, I never seriously explored it—until last weekend.

I decided to start a personal ML project in GCP, thinking “how different could it be?” Four hours later, I was genuinely impressed. Not just by the features, but by how GCP approaches cloud infrastructure fundamentally differently.

Here’s my honest take: when I look back at AWS now, it reminds me of Perl and Jenkins. They survive in production because, over the years, they accumulated workarounds that approximate what modern tools offer out of the box. AWS works, absolutely. But GCP feels like it was designed with hindsight.

Let me share the 7 differences that surprised me most.

1. Organization Structure: Finally, Hierarchies That Make Sense

In AWS:
Organizations, OUs (Organizational Units), and Control Tower were added as afterthoughts—literal add‑ons introduced years after AWS launched. Managing multi‑account structures feels like retrofitting organization onto a system that wasn’t designed for it.

In GCP:
The hierarchy is natural and intuitive: Organization → Folders → Projects. It’s exactly like organizing your local filesystem. Need to group projects by team? Create a folder. Need to separate dev/staging/prod? Subfolders. Want to apply policies at any level? Just do it.

Why it matters:
When you’re building infrastructure templates for multiple projects, GCP’s structure lets you organize and manage resources the way your brain actually thinks about them. In AWS, you’re constantly fighting the account model.
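As a concrete sketch, the whole hierarchy can be declared in a few Terraform resources. The organization ID, folder names, and project ID below are hypothetical placeholders:

```hcl
# Hypothetical org ID, folder names, and project ID for illustration.
resource "google_folder" "platform" {
  display_name = "platform-team"
  parent       = "organizations/123456789012"
}

resource "google_folder" "prod" {
  display_name = "prod"
  parent       = google_folder.platform.name
}

resource "google_project" "ml_service" {
  name       = "ml-service"
  project_id = "ml-service-prod-4821"
  folder_id  = google_folder.prod.name
}
```

Because policies attach at any node, an org policy or IAM binding placed on the `prod` folder automatically covers every project beneath it.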

2. Encryption Keys: Default Keys That Actually Work Across Projects

In AWS:
KMS default keys cannot be shared across accounts. If you want cross‑account encryption, you need customer‑managed keys (CMKs) with complex cross‑account IAM policies. It’s easy to either leave security gaps or accidentally lock yourself out. The permission model is messy.

In GCP:
Default encryption keys work seamlessly across projects within your organization. Need custom keys? The permission model is straightforward and maintainable. You can grant access without the IAM policy gymnastics AWS requires.

Real impact:
I spent an embarrassing amount of time in my AWS days debugging “Access Denied” errors on S3 buckets with KMS encryption across accounts. GCP eliminates this entire class of problems.
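When you do want a customer-managed key, granting a workload in another project access is a single IAM binding rather than a cross-account policy dance. A Terraform sketch, with hypothetical project, key, and service-account names:

```hcl
resource "google_kms_key_ring" "shared" {
  project  = "security-mgmt"   # hypothetical central security project
  name     = "shared-ring"
  location = "us-central1"
}

resource "google_kms_crypto_key" "app_data" {
  name     = "app-data"
  key_ring = google_kms_key_ring.shared.id
}

# Let a workload in a *different* project encrypt/decrypt with this key.
resource "google_kms_crypto_key_iam_member" "cross_project" {
  crypto_key_id = google_kms_crypto_key.app_data.id
  role          = "roles/cloudkms.cryptoKeyEncrypterDecrypter"
  member        = "serviceAccount:app@other-project.iam.gserviceaccount.com"
}
```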

3. Cross‑Zone Data Transfer: FREE

In AWS:
Cross‑AZ (Availability Zone) data transfer is billed at $0.01/GB on each side of the transfer (egress and ingress), so moving a gigabyte between zones effectively costs $0.02/GB. For high‑throughput applications, this adds up fast.

In GCP:
Cross‑zone data transfer within the same region is completely free. Zero. Nada.

Why this is huge:

  • Regional Kubernetes clusters? No cost penalty for pod‑to‑pod communication across zones.
  • Multi‑AZ databases? Replication traffic is free.
  • High‑availability architectures don’t cost extra just for being resilient.

This single difference can save thousands of dollars monthly for data‑intensive workloads.
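A back-of-envelope estimate makes the point; the traffic volume here is a hypothetical example, and the AWS rate is the $0.01/GB-per-side figure above:

```python
# Back-of-envelope estimate of the AWS cross-AZ transfer bill that GCP
# simply doesn't have. The traffic volume is a hypothetical example.
monthly_gb = 50_000          # 50 TB of cross-zone traffic per month
aws_rate_per_gb = 0.01 * 2   # $0.01/GB charged on each side of the transfer
aws_cost = monthly_gb * aws_rate_per_gb
gcp_cost = 0.0               # same-region cross-zone traffic is free on GCP

print(f"AWS: ${aws_cost:,.2f}/month, GCP: ${gcp_cost:,.2f}/month")
# -> AWS: $1,000.00/month, GCP: $0.00/month
```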

4. Network Resources: Shared VPC Changes Everything

In AWS:
VPCs are tightly bound to individual accounts. Want centralized network management? You need Transit Gateway ($36/month base + data transfer fees), VPC peering, or complex PrivateLink configurations. Each approach has trade‑offs.

In GCP:
Shared VPC lets you create network resources in one project (e.g., an SRE/platform project) and share them with other projects. Developers in application projects don’t even see—let alone manage—the underlying network configuration.

The paradigm shift:

  • Manage all networking in a dedicated “management” project.
  • Grant developers access to their application projects.
  • Developers deploy without touching VPCs, subnets, or firewall rules.
  • Centralized network policies and security controls.

In AWS, achieving this level of separation requires significantly more architectural complexity.
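The setup itself is two Terraform resources (project IDs are hypothetical): the host project owns the network, and service projects attach to it and consume its subnets.

```hcl
# Hypothetical project IDs. The host project owns the VPC; service
# projects attach to it and deploy into its subnets.
resource "google_compute_shared_vpc_host_project" "host" {
  project = "platform-mgmt"
}

resource "google_compute_shared_vpc_service_project" "app" {
  host_project    = google_compute_shared_vpc_host_project.host.project
  service_project = "app-team-1"
}
```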

5. Security Groups → Firewall Rules: Making Sense Again

In AWS:
Security Groups are… fine. But the name is confusing (they’re not really “groups”), and the attachment model (per‑instance/ENI) can get messy at scale.

In GCP:
They’re called Firewall Rules. They work at the VPC level. You can target resources by tags, service accounts, or IP ranges. The model just makes sense, especially if you came from traditional system administration.

Why I prefer this:
As someone who managed firewalls before moving to cloud, GCP’s firewall rules feel intuitive. Apply rules at the network level, target specific workloads with tags. It’s how you’d think about network security naturally.
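A minimal example of the tag-based model, with hypothetical network, project, and tag names: one rule at the VPC level covers every instance carrying the tag, no per-instance attachment required.

```hcl
# Allow HTTPS from anywhere to every instance tagged "web" in this VPC.
# Network, project, and tag names are hypothetical.
resource "google_compute_firewall" "allow_https" {
  name    = "allow-https-web"
  project = "platform-mgmt"
  network = "shared-vpc"

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["web"]
}
```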

6. Global Load Balancer: A Feature AWS Doesn’t Really Have

In AWS:
You have regional load balancers (ALB, NLB) and Route 53 for DNS‑based routing; AWS Global Accelerator adds anycast IPs, but it is a separate paid service bolted in front of your regional load balancers. Want true global Layer 7 load balancing? You’re building it yourself with health checks, failover, and complicated DNS configurations.

In GCP:
Global HTTP(S) Load Balancer provides:

  • A single anycast IP that routes to the nearest healthy backend globally.
  • Automatic SSL termination at 150+ edge locations worldwide.
  • Seamless integration with Cloud CDN.
  • Built‑in DDoS protection with Cloud Armor.

Real‑world impact:

Imagine a user in Tokyo connecting to an application hosted in Iowa. Setting up a connection (TCP plus a TLS 1.2 handshake) takes roughly three round trips to wherever TLS terminates:

  • Without Global LB: the handshake runs all the way to Iowa (≈150 ms RTT) → ~450 ms connection time.
  • With Global LB: the handshake completes at a Tokyo edge (≈5 ms RTT), and only the request itself crosses Google’s backbone → ~50 ms total.

For global applications, this is a game‑changer. The Global LB + Cloud CDN combination gives you CDN‑like performance for dynamic content, not just static assets.
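The arithmetic behind those numbers can be sketched as a rough model; the three-round-trip handshake cost and the RTT figures are illustrative assumptions, not measurements:

```python
# Rough latency model: setting up a connection (TCP + TLS 1.2 handshake)
# costs about three round trips to wherever TLS terminates.
# RTT figures are illustrative, matching the Tokyo/Iowa example above.
HANDSHAKE_RTTS = 3

def connect_ms(rtt_to_tls_endpoint_ms: int) -> int:
    """Connection setup time as a multiple of the round-trip time."""
    return HANDSHAKE_RTTS * rtt_to_tls_endpoint_ms

print(connect_ms(150))  # handshake terminates in Iowa  -> prints 450
print(connect_ms(5))    # handshake terminates in Tokyo -> prints 15
```

With edge termination, the request body still makes one trip over Google’s backbone to Iowa, which is where the remaining ~35 ms in the ~50 ms figure comes from.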

7. Terraform + CI/CD: Finally, A Clear Pattern

In AWS:

  • Create IAM roles for Terraform.
  • Choose an S3 backend for state (which account?).
  • Use DynamoDB for state locking.
  • Set up cross‑account assume‑role chains for multi‑account deployments.
  • Write custom scripts to manage all of this.

The solution works but feels cobbled together.

In GCP:

  • Create a management project.
  • Enable the Storage API.
  • Create a GCS bucket for Terraform state (versioning and locking built‑in).
  • Use service‑account impersonation.

Done. It’s cleaner, more straightforward, and easier to understand—especially when onboarding new team members.
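The entire backend configuration fits in one block; the bucket and service-account names below are hypothetical:

```hcl
# Hypothetical bucket and service-account names.
terraform {
  backend "gcs" {
    bucket                      = "acme-terraform-state"
    prefix                      = "envs/prod"
    impersonate_service_account = "terraform@platform-mgmt.iam.gserviceaccount.com"
  }
}
```

No separate lock table: the GCS backend handles state locking itself, and bucket versioning gives you state history for free.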

Cost Comparison

After 4 hours of exploration, I compared pricing for the services I evaluated. GCP came out cheaper on every one of them:

  • Compute instances: 10–20 % cheaper for equivalent specs.
  • Storage: Regional GCS ($0.020/GB) vs Regional S3 ($0.023/GB).
  • Cross‑zone transfer: FREE vs $0.02/GB.
  • Kubernetes: GKE waives its management fee for one zonal cluster under the free tier; EKS charges $0.10/hour (about $73/month) for every cluster.

The savings add up quickly, especially for multi‑cluster or high‑throughput workloads.
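Plugging in the prices above makes the gap concrete; the 5 TB storage figure is a hypothetical workload:

```python
# Plugging in the per-cluster and per-GB prices quoted above.
hours_per_month = 730

eks_monthly = 0.10 * hours_per_month   # EKS control-plane fee per cluster
storage_gb = 5_000                     # hypothetical 5 TB of object storage
s3_monthly = storage_gb * 0.023        # regional S3, $0.023/GB
gcs_monthly = storage_gb * 0.020       # regional GCS, $0.020/GB

print(f"EKS control plane: ${eks_monthly:.0f}/month per cluster")
print(f"5 TB storage: S3 ${s3_monthly:.0f} vs GCS ${gcs_monthly:.0f}")
```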

The Elephant in the Room: Why Are Companies Still on AWS?

If GCP is cheaper, more intuitive, and has better features for modern architectures—why does AWS dominate the market?

The answer is the same reason some companies still run Perl codebases and Jenkins pipelines in 2025: inertia, existing investment, and organizational momentum.

AWS has:

  • First‑mover advantage: Launched in 2006; most enterprises built on AWS before GCP was viable.
  • Ecosystem lock‑in: Countless third‑party tools, integrations, and Marketplace solutions.
  • Enterprise sales muscle: Deep relationships built over 15+ years.
  • Talent pool: More engineers with AWS experience.
  • Feature breadth: AWS still offers more services overall (though GCP is catching up).

It’s not that AWS is bad. It’s that it’s carrying…
