Your Cloud Storage Bucket Has 2TB of Data Nobody Touched in 6 Months (You're Paying 16x Too Much) 💾
Source: Dev.to
Cost Comparison
| Class | Cost/GB/month | Min Duration | Best For | Retrieval Fee/GB |
|---|---|---|---|---|
| Standard | $0.020 | None | Hot data, frequent access | None |
| Nearline | $0.010 | 30 days | Access about once a month | $0.01 |
| Coldline | $0.004 | 90 days | Access about once a quarter | $0.02 |
| Archive | $0.0012 | 365 days | Access less than once a year | $0.05 |
Note: All storage classes offer the same eleven-nines (99.999999999%) durability and millisecond access latency. Archive is not like tape: data is instantly accessible, you just pay a higher per-GB retrieval fee.
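That retrieval fee is what makes class choice an access-pattern question, not just a price question. A back-of-envelope sketch (the retrieval fees used here are the commonly published US rates of $0.01, $0.02 and $0.05 per GB for Nearline, Coldline and Archive; verify against current pricing):

```python
# Monthly cost per GB = storage price + (reads per month x retrieval fee).
# Assumed US prices; check the current GCS pricing page before relying on them.
STORAGE = {"standard": 0.020, "nearline": 0.010, "coldline": 0.004, "archive": 0.0012}
RETRIEVAL = {"standard": 0.000, "nearline": 0.010, "coldline": 0.020, "archive": 0.050}

def monthly_cost_per_gb(klass: str, reads_per_month: float) -> float:
    return STORAGE[klass] + reads_per_month * RETRIEVAL[klass]

# Data read once a month: Nearline exactly breaks even with Standard.
print(monthly_cost_per_gb("standard", 1))  # 0.02
print(monthly_cost_per_gb("nearline", 1))  # 0.02
# Data never read: Archive is ~16x cheaper than Standard.
print(round(monthly_cost_per_gb("standard", 0) / monthly_cost_per_gb("archive", 0), 1))  # 16.7
```

The break-even falls out directly: if you read the data once a month or more, Nearline saves nothing; if you almost never read it, colder classes win by an order of magnitude.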
Why Lifecycle Policies Matter
- Every object starts in Standard and stays there forever unless you intervene.
- GCP does not automatically move data to cheaper tiers.
- Without policies, a 2 TB dataset left untouched for 6 months costs about $40/month instead of the $2.40/month it would cost in Archive.
Lifecycle policies automatically transition objects to cheaper classes as they age—no manual work, scripts, or cron jobs required. Deploy them with Terraform in minutes.
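The $40-vs-$2.40 figure is plain price-per-GB arithmetic; a quick sanity check (using 1 TB = 1,000 GB for round numbers):

```python
# Per-GB monthly storage prices from the comparison table above (US region).
PRICE_PER_GB = {"STANDARD": 0.020, "NEARLINE": 0.010, "COLDLINE": 0.004, "ARCHIVE": 0.0012}
DATASET_GB = 2_000  # the 2 TB nobody has touched in 6 months

for klass, price in PRICE_PER_GB.items():
    print(f"{klass:9s} ${DATASET_GB * price:.2f}/month")
# STANDARD  $40.00/month
# NEARLINE  $20.00/month
# COLDLINE  $8.00/month
# ARCHIVE   $2.40/month
```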
Typical Tier‑Down Pattern
Standard → Nearline → Coldline → Archive → Delete
Terraform Example: Simple Tier‑Down
```hcl
resource "google_storage_bucket" "data" {
  name          = "${var.project_id}-app-data"
  location      = "US"
  storage_class = "STANDARD" # start in Standard

  # 30 days → Nearline
  lifecycle_rule {
    action {
      type          = "SetStorageClass"
      storage_class = "NEARLINE"
    }
    condition {
      age                   = 30
      matches_storage_class = ["STANDARD"]
    }
  }

  # 90 days → Coldline
  lifecycle_rule {
    action {
      type          = "SetStorageClass"
      storage_class = "COLDLINE"
    }
    condition {
      age                   = 90
      matches_storage_class = ["NEARLINE"]
    }
  }

  # 365 days → Archive
  lifecycle_rule {
    action {
      type          = "SetStorageClass"
      storage_class = "ARCHIVE"
    }
    condition {
      age                   = 365
      matches_storage_class = ["COLDLINE"]
    }
  }

  # 730 days (2 years) → Delete
  lifecycle_rule {
    action {
      type = "Delete"
    }
    condition {
      age = 730
    }
  }

  labels = local.common_labels
}
```
⚠️ Gotcha: Transitions performed by lifecycle rules do not incur early-deletion fees. Manually rewriting an object into a different class, by contrast, can trigger both retrieval and early-deletion charges, so let the rules do the moving.
Terraform Example: Prefix‑Based Multi‑Tier
```hcl
resource "google_storage_bucket" "multi_tier" {
  name     = "${var.project_id}-multi-tier-data"
  location = "US"

  # ---------- Logs ----------
  # 7 days → Nearline
  lifecycle_rule {
    action {
      type          = "SetStorageClass"
      storage_class = "NEARLINE"
    }
    condition {
      age                   = 7
      matches_prefix        = ["logs/"]
      matches_storage_class = ["STANDARD"]
    }
  }

  # 30 days → Coldline
  lifecycle_rule {
    action {
      type          = "SetStorageClass"
      storage_class = "COLDLINE"
    }
    condition {
      age                   = 30
      matches_prefix        = ["logs/"]
      matches_storage_class = ["NEARLINE"]
    }
  }

  # 90 days → Delete
  lifecycle_rule {
    action {
      type = "Delete"
    }
    condition {
      age            = 90
      matches_prefix = ["logs/"]
    }
  }

  # ---------- Backups ----------
  # 30 days → Coldline
  lifecycle_rule {
    action {
      type          = "SetStorageClass"
      storage_class = "COLDLINE"
    }
    condition {
      age                   = 30
      matches_prefix        = ["backups/"]
      matches_storage_class = ["STANDARD"]
    }
  }

  # 90 days → Archive
  lifecycle_rule {
    action {
      type          = "SetStorageClass"
      storage_class = "ARCHIVE"
    }
    condition {
      age                   = 90
      matches_prefix        = ["backups/"]
      matches_storage_class = ["COLDLINE"]
    }
  }

  # 730 days → Delete (keep backups for 2 years)
  lifecycle_rule {
    action {
      type = "Delete"
    }
    condition {
      age            = 730
      matches_prefix = ["backups/"]
    }
  }

  # ---------- User uploads ----------
  # 90 days → Nearline
  lifecycle_rule {
    action {
      type          = "SetStorageClass"
      storage_class = "NEARLINE"
    }
    condition {
      age                   = 90
      matches_prefix        = ["uploads/"]
      matches_storage_class = ["STANDARD"]
    }
  }

  labels = local.common_labels
}
```
One bucket, three data patterns:
- Logs are aggressively moved down and deleted after 90 days.
- Backups stay hot for a month, then cold, then archived for up to 2 years.
- User uploads remain in Standard for three months before moving to Nearline.
Terraform Example: Versioned Buckets
```hcl
resource "google_storage_bucket" "versioned" {
  name     = "${var.project_id}-versioned-data"
  location = "US"

  versioning {
    enabled = true
  }

  # Keep only the 3 most recent versions
  lifecycle_rule {
    action {
      type = "Delete"
    }
    condition {
      num_newer_versions = 3
      with_state         = "ARCHIVED" # noncurrent versions only
    }
  }

  # Delete noncurrent versions older than 90 days
  lifecycle_rule {
    action {
      type = "Delete"
    }
    condition {
      age        = 90
      with_state = "ARCHIVED"
    }
  }

  # Abort incomplete multipart uploads after 7 days
  lifecycle_rule {
    action {
      type = "AbortIncompleteMultipartUpload"
    }
    condition {
      age = 7
    }
  }

  labels = local.common_labels
}
```
Why? Versioning protects data, but without cleanup it can double or triple storage costs. The rules above keep the bucket tidy automatically.
Key Takeaways So Far
- All classes share the same durability and latency—you’re only paying for the access pattern you need.
- Lifecycle rules are the simplest, safest way to move data to cheaper tiers and eventually delete it.
- Deploy them with Terraform in minutes, and let GCP handle the rest while you sleep.
⚠️ Hidden cost killer
Incomplete multipart uploads are invisible in the console but still count against your storage. If your app does large file uploads and sometimes fails midway, these fragments pile up silently.
The AbortIncompleteMultipartUpload rule is free insurance.
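To put a number on the leak, a quick sketch (the upload counts and sizes are made-up illustrations; orphaned fragments are billed like any other stored bytes):

```python
STANDARD_PER_GB = 0.020  # USD/GB/month, Standard class

def leaked_cost(failed_uploads: int, avg_gb_uploaded_before_failure: float) -> float:
    """Monthly cost of fragments left behind by incomplete multipart uploads."""
    return failed_uploads * avg_gb_uploaded_before_failure * STANDARD_PER_GB

# e.g. 100 failed 5 GB uploads that died around the halfway mark
# leave ~250 GB of invisible fragments behind
print(f"${leaked_cost(100, 2.5):.2f}/month for data you can't even see")  # $5.00/month ...
```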
Autoclass – Automatic Tiering
If you don’t know how often data gets accessed, GCP’s Autoclass feature automatically moves objects between storage classes based on actual access patterns.
```hcl
resource "google_storage_bucket" "auto_tiered" {
  name     = "${var.project_id}-auto-tiered"
  location = "US"

  autoclass {
    enabled                = true
    terminal_storage_class = "ARCHIVE" # lowest tier Autoclass may reach
  }

  labels = local.common_labels
}
```
Autoclass moves frequently accessed data back up to Standard (lifecycle rules can’t do this) and moves untouched data down through the tiers automatically. There’s a small management fee, but for unpredictable access patterns it saves more than it costs.
When to use Autoclass vs. manual lifecycle rules
| Scenario | Recommended approach |
|---|---|
| Known access pattern (logs, backups) | Manual lifecycle rules |
| Unknown / mixed access patterns | Autoclass |
| Data that might be re‑accessed after months | Autoclass |
| Compliance requirements (must be in a specific class) | Manual lifecycle rules |
| Maximum cost control | Manual lifecycle rules |
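Whether Autoclass nets out positive also depends on object count versus object size, because the management fee is charged per object while savings accrue per GB. A rough model, assuming a management fee of $0.0025 per 1,000 objects per month (an assumed figure; check current Autoclass pricing):

```python
MGMT_FEE_PER_OBJECT = 0.0025 / 1000  # assumed Autoclass fee, USD/object/month
STANDARD, NEARLINE = 0.020, 0.010    # USD/GB/month

def autoclass_net_savings(objects: int, total_gb: float) -> float:
    """Monthly net saving if Autoclass demotes everything Standard -> Nearline."""
    saving = total_gb * (STANDARD - NEARLINE)
    fee = objects * MGMT_FEE_PER_OBJECT
    return saving - fee

# 1 TB spread over 10,000 large objects: savings dwarf the fee (~ +10 USD/month)
print(autoclass_net_savings(10_000, 1_000))
# 1 TB spread over 100 million tiny objects: the fee eats the savings (~ -240 USD/month)
print(autoclass_net_savings(100_000_000, 1_000))
```

In short: buckets full of small objects are a poor fit for Autoclass regardless of access pattern, which is another reason the "known pattern → manual rules" rows in the table above hold up.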
Deploy the Same Lifecycle Rules Across Every Team’s Buckets
```hcl
variable "buckets" {
  type = map(object({
    location       = string
    lifecycle_type = string # "logs", "backups", "general", "autoclass"
  }))
  default = {
    "team-alpha-logs" = {
      location       = "US"
      lifecycle_type = "logs"
    }
    "team-beta-backups" = {
      location       = "US"
      lifecycle_type = "backups"
    }
    "ml-training-data" = {
      location       = "US"
      lifecycle_type = "autoclass"
    }
  }
}

locals {
  lifecycle_configs = {
    logs = {
      nearline_age = 7
      coldline_age = 30
      delete_age   = 90
    }
    backups = {
      nearline_age = 30
      coldline_age = 90
      delete_age   = 730
    }
    general = {
      nearline_age = 30
      coldline_age = 90
      delete_age   = 365
    }
  }
}
```
Managed bucket (non‑Autoclass)
```hcl
resource "google_storage_bucket" "managed" {
  for_each = {
    for k, v in var.buckets : k => v
    if v.lifecycle_type != "autoclass"
  }

  name          = "${var.project_id}-${each.key}"
  location      = each.value.location
  storage_class = "STANDARD"

  # → Move to NEARLINE
  lifecycle_rule {
    action {
      type          = "SetStorageClass"
      storage_class = "NEARLINE"
    }
    condition {
      age                   = local.lifecycle_configs[each.value.lifecycle_type].nearline_age
      matches_storage_class = ["STANDARD"]
    }
  }

  # → Move to COLDLINE
  lifecycle_rule {
    action {
      type          = "SetStorageClass"
      storage_class = "COLDLINE"
    }
    condition {
      age                   = local.lifecycle_configs[each.value.lifecycle_type].coldline_age
      matches_storage_class = ["NEARLINE"]
    }
  }

  # → Delete
  lifecycle_rule {
    action {
      type = "Delete"
    }
    condition {
      age = local.lifecycle_configs[each.value.lifecycle_type].delete_age
    }
  }

  # → Abort incomplete multipart uploads
  lifecycle_rule {
    action {
      type = "AbortIncompleteMultipartUpload"
    }
    condition {
      age = 7
    }
  }

  labels = local.common_labels
}
```
New bucket needed? Add one entry to the buckets map and run terraform apply; the matching lifecycle rules are applied automatically. (Entries with lifecycle_type = "autoclass" are filtered out here and would be handled by a sibling Autoclass-enabled resource.) ✅
Quick‑Win Actions
| Action | Effort | Savings |
|---|---|---|
| Add lifecycle rules to existing log buckets | 5 min | 73‑94 % on log storage |
| Enable AbortIncompleteMultipartUpload | 2 min | Stops silent cost leaks |
| Add versioning cleanup rules | 5 min | 50‑70 % on versioned buckets |
| Create reusable lifecycle module | 15 min | Consistent rules org‑wide |
| Enable Autoclass for unknown patterns | 2 min | Auto‑optimized tiers |
Start with your log buckets – they’re usually the biggest offenders (huge volumes of data that nobody reads after a week). 🎯
Next Steps
- Identify your biggest buckets and how recently their objects were accessed (e.g. with gsutil du -s and your usage logs).
- Apply the appropriate lifecycle or Autoclass configuration.
- Monitor cost savings over the next month.
Bottom line: If your buckets lack lifecycle rules, you’re over‑paying. A handful of minutes of Terraform can unlock 80 %+ savings on stale data.
Found this helpful? Follow for more GCP cost‑optimization tips with Terraform! 💬