AWS CLI: The Developer's Secret Weapon (And How to Keep It Secure)
Source: Dev.to
Why the Terminal is Your Best Friend for AWS Management
If you’ve been managing AWS resources exclusively through the web console, you’re not wrong—but you might be working harder than you need to. Let me show you why AWS CLI has become the go‑to choice for developers who value speed, automation, and control.
The Web Console is Fine… Until It Isn’t
Don’t get me wrong—the AWS Management Console is beautifully designed. It’s intuitive, visual, and perfect for exploring services you’re learning. Amazon has invested millions into creating an interface that makes cloud computing accessible to everyone, and that’s genuinely commendable.
But here’s what happens in real‑world development scenarios:
The Console Workflow
- Open browser → Wait for page load → Navigate to AWS → Multi‑factor authentication dance → Find the right service from 200+ options → Click through multiple screens → Configure settings one field at a time → Wait for confirmation → Realize you need the exact same configuration in three other regions → Copy settings manually → Repeat for the next resource → Realize you need to do this 47 more times → Question your career choices → Consider becoming a farmer
The CLI Workflow
aws ec2 run-instances --image-id ami-12345678 --count 50 --instance-type t2.micro --key-name MyKeyPair --region us-east-1
One line. Fifty instances. Multiple regions with a simple loop (sketched below). Seconds of typing instead of an afternoon of clicking.
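If you need the same fleet in more than one region, a rough sketch of that loop might look like the following (the AMI ID and key pair are placeholders, and since AMI IDs are region-specific you would normally look up the right image per region):
for region in us-east-1 us-west-2 eu-west-1; do
  # Placeholder AMI and key pair; in practice, resolve the correct AMI ID for each region
  aws ec2 run-instances \
    --image-id ami-12345678 \
    --count 50 \
    --instance-type t2.micro \
    --key-name MyKeyPair \
    --region "$region"
done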
The difference isn’t just speed—it’s a fundamental shift in how you think about infrastructure management. The console trains you to think in clicks. The CLI trains you to think in systems.
Why Smart Developers Choose CLI
1. Speed That Actually Matters
When you’re deploying infrastructure, troubleshooting issues at 2 AM, or managing resources across multiple AWS accounts and regions, every second compounds. With CLI, you can:
- Launch dozens of resources in seconds instead of minutes
- Query multiple services simultaneously across regions
- Filter and process output instantly with powerful tools like jq, grep, awk, or sed (see the one-liner below)
- Chain commands together for complex workflows
- Build muscle memory for common operations
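As a quick illustration of that filtering (assuming jq is installed), this one-liner pulls every running instance ID out of the raw JSON:
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --output json | jq -r '.Reservations[].Instances[].InstanceId'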
Concrete example: Find all EC2 instances across four regions that are running and were launched more than 30 days ago.
for region in us-east-1 us-west-2 eu-west-1 ap-southeast-1; do
  aws ec2 describe-instances --region "$region" \
    --filters "Name=instance-state-name,Values=running" \
    --query 'Reservations[*].Instances[*].[InstanceId,LaunchTime]' \
    --output text | while read -r id launch_time; do
      # Flag instances launched more than 30 days ago (date -d requires GNU date)
      if [[ $(date -d "$launch_time" +%s) -lt $(date -d '30 days ago' +%s) ]]; then
        echo "$region: $id (launched: $launch_time)"
      fi
    done
done
Two minutes to write. A few seconds to run. Complete results.
2. Automation and Scripting: Where CLI Becomes Indispensable
The CLI doesn’t just save time—it enables entirely new workflows.
Automated Backup Script
#!/bin/bash
# Daily backup script for all RDS instances
BACKUP_DATE=$(date +%Y%m%d-%H%M%S)
# Get all RDS instances
for db in $(aws rds describe-db-instances \
    --query 'DBInstances[*].DBInstanceIdentifier' \
    --output text); do
  echo "Creating snapshot for $db..."
  aws rds create-db-snapshot \
    --db-instance-identifier "$db" \
    --db-snapshot-identifier "${db}-backup-${BACKUP_DATE}"
  # Tag the snapshot (the region and account ID in this ARN are placeholders)
  aws rds add-tags-to-resource \
    --resource-name "arn:aws:rds:us-east-1:123456789012:snapshot:${db}-backup-${BACKUP_DATE}" \
    --tags Key=AutomatedBackup,Value=true Key=Date,Value="$BACKUP_DATE"
  # Clean up manual snapshots older than 30 days (only manual snapshots can be deleted this way)
  aws rds describe-db-snapshots \
    --db-instance-identifier "$db" \
    --snapshot-type manual \
    --query "DBSnapshots[?SnapshotCreateTime<='$(date -d '30 days ago' --iso-8601)'].DBSnapshotIdentifier" \
    --output text | tr '\t' '\n' | while read -r old_snapshot; do
      echo "Deleting old snapshot: $old_snapshot"
      aws rds delete-db-snapshot --db-snapshot-identifier "$old_snapshot"
    done
done
echo "Backup process completed at $(date)"
Schedule this with cron for enterprise‑grade backup automation.
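A crontab entry along these lines would do it; the script path and log location are just placeholders for wherever you keep yours:
# Run the RDS backup script every night at 2 AM
0 2 * * * /opt/scripts/rds-backup.sh >> /var/log/rds-backup.log 2>&1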
Cost Optimization Script
#!/bin/bash
# Stop all EC2 instances tagged "Environment:Development" after 6 PM
CURRENT_HOUR=$(date +%H)
if [ "$CURRENT_HOUR" -ge 18 ]; then
  aws ec2 describe-instances \
    --filters "Name=tag:Environment,Values=Development" \
              "Name=instance-state-name,Values=running" \
    --query 'Reservations[*].Instances[*].InstanceId' \
    --output text | tr '\t' '\n' | while read -r instance; do
      echo "Stopping development instance: $instance"
      aws ec2 stop-instances --instance-ids "$instance"
      # Send notification (the topic ARN is a placeholder for your own SNS topic)
      aws sns publish \
        --topic-arn "arn:aws:sns:us-east-1:123456789012:cost-savings" \
        --message "Stopped development instance $instance at $(date)"
    done
fi
A single script like this can save thousands of dollars per month by automatically shutting down development environments during non‑business hours.
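One way to run it, again with a placeholder path, is a weekday crontab entry that fires just after 6 PM:
# Stop tagged development instances at 6:05 PM, Monday through Friday
5 18 * * 1-5 /opt/scripts/stop-dev-instances.sh >> /var/log/cost-savings.log 2>&1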
3. Version Control for Infrastructure
Your CLI commands live in scripts. Scripts live in Git. This gives you:
- Full audit history – Every infrastructure change is a Git commit with timestamps and authors.
- Code review processes – Changes go through pull requests before reaching production.
- Rollback capabilities – git revert becomes your infrastructure undo button.
- Team collaboration – Everyone can see, review, and improve infrastructure code.
- Documentation – The scripts themselves document how your infrastructure works.
Example: VPC Setup Script
#!/bin/bash
# vpc-setup.sh – Creates a complete VPC environment
# Create VPC
VPC_ID=$(aws ec2 create-vpc \
  --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=Production-VPC}]' \
  --query 'Vpc.VpcId' \
  --output text)
echo "Created VPC: $VPC_ID"
# Create Internet Gateway
IGW_ID=$(aws ec2 create-internet-gateway \
  --tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=Production-IGW}]' \
  --query 'InternetGateway.InternetGatewayId' \
  --output text)
aws ec2 attach-internet-gateway --vpc-id $VPC_ID --internet-gateway-id $IGW_ID
echo "Created and attached Internet Gateway: $IGW_ID"
# Create public subnet
PUBLIC_SUBNET_ID=$(aws ec2 create-subnet \
  --vpc-id $VPC_ID \
  --cidr-block 10.0.1.0/24 \
  --availability-zone us-east-1a \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=Public-Subnet-1a}]' \
  --query 'Subnet.SubnetId' \
  --output text)
echo "Created public subnet: $PUBLIC_SUBNET_ID"
# Create private subnet
PRIVATE_SUBNET_ID=$(aws ec2 create-subnet \
  --vpc-id $VPC_ID \
  --cidr-block 10.0.2.0/24 \
  --availability-zone us-east-1a \
  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=Private-Subnet-1a}]' \
  --query 'Subnet.SubnetId' \
  --output text)
echo "Created private subnet: $PRIVATE_SUBNET_ID"
# Create route table for public subnet
ROUTE_TABLE_ID=$(aws ec2 create-route-table \
  --vpc-id $VPC_ID \
  --tag-specifications 'ResourceType=route-table,Tags=[{Key=Name,Value=Public-RT}]' \
  --query 'RouteTable.RouteTableId' \
  --output text)
echo "Created route table: $ROUTE_TABLE_ID"
# Send internet-bound traffic through the Internet Gateway and attach the table to the public subnet
aws ec2 create-route \
  --route-table-id $ROUTE_TABLE_ID \
  --destination-cidr-block 0.0.0.0/0 \
  --gateway-id $IGW_ID
aws ec2 associate-route-table \
  --route-table-id $ROUTE_TABLE_ID \
  --subnet-id $PUBLIC_SUBNET_ID
echo "Public route table configured and associated with $PUBLIC_SUBNET_ID"
By treating infrastructure as code with the AWS CLI, you gain the same benefits developers enjoy when writing application code—repeatability, versioning, and collaboration.
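Concretely, that workflow is nothing exotic; the branch name and commit message here are only illustrative:
git checkout -b add-production-vpc
git add vpc-setup.sh
git commit -m "Add production VPC bootstrap script"
git push origin add-production-vpc    # open a pull request for review
# And if a change ever needs to be undone:
git revert <commit-sha>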