Basic protections for your S3 buckets

Published: January 8, 2026, 03:12 PM EST
4 min read
Source: Dev.to

Encryption

Encryption converts readable information (e.g., a secret master plan) into an unreadable format so that only authorized parties can access it. The details aren’t important here—just know that encryption is essential.

Most encryption you’ll encounter falls into two categories:

| Category   | What it protects                             |
|------------|----------------------------------------------|
| At rest    | Data that is idle in storage                 |
| In transit | Data as it moves between services or devices |

“Encryption in use” is a separate topic and is out of scope for this article.

Encryption at Rest

Encryption at rest protects data that sits idle in storage. Think of it as a safe that holds a top‑secret notebook; only those who know the combination can open it.

In Amazon S3 there are two main ways to encrypt objects:

| Method      | Description                                       |
|-------------|---------------------------------------------------|
| Server‑side | AWS encrypts the data before writing it to disk.  |
| Client‑side | You encrypt the data before uploading it to S3.   |

Good news: Buckets have encryption at rest enabled by default. All objects you upload are automatically protected with Server‑Side Encryption using Amazon S3‑managed keys (SSE‑S3). This is transparent and incurs no extra charge.

  • For most use‑cases, SSE‑S3 is sufficient.
  • If you need more control, you can use a KMS‑managed key (SSE‑KMS) or encrypt client‑side; a minimal SSE‑KMS sketch follows below.
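If you go the SSE‑KMS route, you can make it the bucket's default encryption so every new object is covered automatically. The snippet below is a minimal sketch using boto3; the bucket name and KMS key alias are placeholders, not values from this article:

import boto3

s3 = boto3.client("s3")

# Make SSE-KMS the default for new objects in this bucket.
# "your-bucket-name" and "alias/your-key-alias" are placeholders.
s3.put_bucket_encryption(
    Bucket="your-bucket-name",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/your-key-alias",
                },
                # Reuses a bucket-level data key to reduce KMS request costs.
                "BucketKeyEnabled": True,
            }
        ]
    },
)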

Encryption in Transit

Encryption in transit protects data while it travels—e.g., when a file is downloaded from S3 to your device. Without it, anyone intercepting the traffic could read the data.

This is achieved with asymmetric encryption (a public‑key/private‑key pair) that establishes a secure channel:

  • Public key – like a mailbox slot: anyone can drop a letter (encrypt data).
  • Private key – like the key that opens the mailbox: only the owner can read the letter (decrypt data).

S3 buckets can be accessed via:

  • HTTP – unencrypted (not recommended)
  • HTTPS – encrypted (recommended)

Recommendation: Allow access only via HTTPS. Enforce this with a bucket policy that denies non‑TLS requests:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RestrictToTLSRequestsOnly",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}

Replace your-bucket-name with the actual bucket identifier. This policy ensures that any request made over plain HTTP is rejected, guaranteeing that all data transfers use TLS encryption.
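If you prefer to attach the policy programmatically rather than through the console, a minimal boto3 sketch could look like this (the bucket name is again a placeholder):

import json
import boto3

s3 = boto3.client("s3")
bucket = "your-bucket-name"  # placeholder

# The deny-non-TLS policy shown above, expressed as a Python dict.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RestrictToTLSRequestsOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))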

Public vs. Private Buckets

In AWS there are two bucket access types:

| Access Type | Description                                    |
|-------------|------------------------------------------------|
| Public      | Can be accessed by anyone on the internet.     |
| Private     | Can be accessed only by authorized identities. |

Note: New buckets are private by default. You must take explicit steps to make a bucket public, and it’s easy to make a mistake. Adding a layered defense helps prevent accidental exposure.

Block Public Access

The Block Public Access feature stops unintended public exposure. It can be applied at three scopes:

| Scope        | Effect                                                                             |
|--------------|------------------------------------------------------------------------------------|
| Bucket       | Blocks public access for the specific bucket (enabled by default on new buckets).  |
| Account      | Blocks public access for all buckets in the account.                               |
| Organization | Blocks public access for every bucket in the AWS Organization.                     |
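At the bucket scope, the setting can be applied or re‑asserted through the API. A minimal boto3 sketch, with a placeholder bucket name, turning on all four Block Public Access settings:

import boto3

s3 = boto3.client("s3")

# Enable every Block Public Access setting for a single bucket.
s3.put_public_access_block(
    Bucket="your-bucket-name",  # placeholder
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)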

Least Privilege

The principle of least privilege states that you should grant an entity (person, application, or service account) only the minimum permissions required to perform a specific task—no more.

Applying this principle reduces the risk of accidental misuse or malicious activity. To protect your S3 data:

  • Scope IAM permissions carefully so users and services can access only the buckets and actions they truly need.
  • Example: If a bucket stores marketing information, only the Marketing team should have access.
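As a sketch of what that scoping can look like, the following boto3 snippet creates an IAM policy that allows only listing and reading one bucket; the bucket and policy names are hypothetical examples, not values from this article:

import json
import boto3

iam = boto3.client("iam")

# Read-only access to a single bucket: list it and get its objects, nothing else.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "MarketingBucketReadOnly",
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::marketing-bucket",      # placeholder bucket
                "arn:aws:s3:::marketing-bucket/*",
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="MarketingBucketReadOnly",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)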

Note: Least privilege is not a one‑off exercise. It requires ongoing maintenance and regular audits as responsibilities change, teams reorganize, or access requirements evolve.

Versioning & MFA Delete

Versioning

When versioning is enabled on a bucket, S3 keeps multiple versions of an object instead of permanently replacing or removing it. This allows you to recover previous versions of a file if it is deleted or modified by mistake.

Note: Once versioning is enabled on a bucket it cannot be fully disabled, only suspended, and it incurs extra storage cost because every retained version is billed.
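Versioning is switched on per bucket. A minimal boto3 sketch with a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

# Enable versioning; once enabled it can later only be suspended, not removed.
s3.put_bucket_versioning(
    Bucket="your-bucket-name",  # placeholder
    VersioningConfiguration={"Status": "Enabled"},
)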

MFA Delete

Enabling MFA Delete adds an additional safeguard to your S3 bucket. When MFA Delete is turned on, users must supply a valid multi‑factor authentication (MFA) token in addition to their normal credentials before they can perform sensitive actions such as:

  • Permanently deleting an object version
  • Suspending versioning on a bucket

This extra step ensures that even if an attacker obtains a user’s credentials, they cannot carry out destructive operations without physical access to the MFA device.
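MFA Delete is configured alongside versioning and must be set by the root user, supplying the MFA device and a current code. The sketch below uses boto3; the bucket name, device ARN, and code are placeholders:

import boto3

s3 = boto3.client("s3")

# MFA Delete must be enabled by the root user; the MFA value is
# "<device serial or ARN> <current code>" (placeholders here).
s3.put_bucket_versioning(
    Bucket="your-bucket-name",  # placeholder
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)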

Summary

Together, versioning and MFA Delete provide an effective safeguard against data loss, whether caused by human error or malicious activity.

Tip: Enable versioning and MFA Delete on your S3 buckets to protect against accidental or malicious deletion.
