Day 3 of 30 Days of AWS Terraform — Creating Your First S3 Bucket with Terraform

Published: December 20, 2025 at 12:47 PM EST
3 min read
Source: Dev.to


Welcome to Day 3 of my 30 Days of AWS Terraform Challenge!
Today I wrote my first Terraform configuration that actually creates a real AWS resource: an S3 bucket. This simple example lays the foundation for every cloud-automation task we'll tackle later.

Why Start with S3?

Amazon S3 is one of the simplest services to automate. It doesn’t require VPCs, networking, or complex dependencies, making it perfect for understanding:

  • How Terraform resources are written
  • How provider blocks work
  • How to run Terraform commands

It also shows how a state file tracks your AWS infrastructure—an ideal first step for learning Infrastructure as Code.

[Image: Terraform state diagram]

Folder Setup

Create a new folder for the day’s work:

day03/

Inside it, add a file named main.tf. Terraform only cares that the file ends with .tf; the name itself is irrelevant.

Writing the S3 Bucket Configuration

From the official Terraform documentation, a basic S3 bucket resource looks like this:

```hcl
resource "aws_s3_bucket" "firstbucket" {
  bucket = "my-demo-bucket-123"

  tags = {
    Name        = "MyBucket"
    Environment = "Dev"
  }
}
```

[Image: Terraform configuration in editor]

What this means

  • aws_s3_bucket → the Terraform resource type
  • firstbucket → the local name Terraform uses to reference this resource
  • bucket → the bucket name, which must be globally unique across all of AWS
  • tags → key-value metadata attached to the bucket

Just like that, our infrastructure is defined as code.
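For terraform init to know which provider plugin to download, the working directory also needs a provider configuration. A minimal sketch (the version constraint and region here are my assumptions, not values from the post):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # assumed version constraint; pin whatever you use
    }
  }
}

# Region is an assumption -- pick the region you actually work in.
provider "aws" {
  region = "us-east-1"
}
```

This can live in the same main.tf or a separate versions.tf; Terraform reads every .tf file in the directory.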

Running the Terraform Workflow

Terraform follows a predictable four‑step workflow.

1. terraform init

Downloads the AWS provider plugin and prepares the working directory.

terraform init

Run this whenever you create a new folder or add a new provider.

2. terraform plan

Shows a dry‑run of the changes Terraform will make.

terraform plan

For the example above you’ll see:

Plan: 1 to add, 0 to change, 0 to destroy.

3. terraform apply

Creates the bucket.

terraform apply

Terraform will prompt for confirmation:

Enter a value: yes

Or skip the prompt:

terraform apply -auto-approve

After a few seconds the new S3 bucket appears in the AWS console.

4. terraform destroy

Deletes everything you created.

terraform destroy

Or automatically approve:

terraform destroy -auto-approve

This “build → modify → destroy” cycle is a core part of real DevOps workflows.

How Terraform Detects Changes

Terraform tracks all created resources in a local file called terraform.tfstate.

If you modify the code, e.g.:

```hcl
tags = {
  Name = "MyBucket 2.0"
}
```

and run terraform plan again, Terraform compares:

  • Desired state – what the .tf files declare
  • Actual state – what is recorded in terraform.tfstate (refreshed from AWS during the plan)

You’ll see output such as:

Plan: 0 to add, 1 to change, 0 to destroy.

This state‑management capability is what makes Terraform powerful.
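The local name firstbucket is also how other code refers to the resource. As a small illustration (these output names are my own, not from the post), an output block can expose attributes of the bucket after apply:

```hcl
# Hypothetical outputs referencing the bucket defined in main.tf.
output "bucket_name" {
  value = aws_s3_bucket.firstbucket.id # the bucket name
}

output "bucket_arn" {
  value = aws_s3_bucket.firstbucket.arn
}
```

After terraform apply, these values are printed in the CLI and can be read later with terraform output.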

Key Learnings From Day 3

  • Using official Terraform docs effectively
  • Understanding resource and provider blocks
  • Running init, plan, apply, and destroy
  • Importance of globally unique S3 bucket names
  • How the Terraform state file tracks real AWS infrastructure
  • How Terraform automatically identifies changes and updates resources

Final Thoughts

Day 3 was the moment Terraform "clicked" for me. Seeing an actual AWS resource created from a simple .tf file feels like unlocking a new superpower. Terraform removes manual clicking and turns infrastructure into repeatable, version-controlled automation, something every DevOps engineer must master.
