Consistently deploying Lambda functions and layers using Terraform

Published: December 29, 2025 at 01:07 PM EST
7 min read
Source: Dev.to

The code that accompanies this blog post can be found here.

Introduction

Deploying Lambda functions to AWS using Terraform can be quite a struggle, especially when deploying from multiple environments such as different developer machines (which, of course, only happens for dev and test workloads, am I right?).

Some issues you can encounter are:

  • Lambda functions redeploying at every terraform apply
  • Errors about missing archive files containing the Lambda function files
  • Soft locks in the Terraform state file

In this post I’ll show you a way to consistently deploy Lambda functions only when there are changes to the code, even when deploying from multiple environments.

History

There have been long‑standing issues when deploying Lambda functions using Terraform. Below are some excerpts from external issues and posts that illustrate these problems and attempts to tackle them.

Example configuration

I have an AWS Lambda deployed successfully with Terraform:

resource "aws_lambda_function" "lambda" {
  filename                       = "dist/subscriber-lambda.zip"
  function_name                  = "test_get-code"
  role                           = "..." # value elided in the original question
  handler                        = "main.handler"
  timeout                        = 14
  reserved_concurrent_executions = 50
  memory_size                    = 128
  runtime                        = "python3.6"
  tags                           = "..." # value elided in the original question
  source_code_hash               = "${base64sha256(file("../modules/lambda/lambda-code/main.py"))}"
  kms_key_arn                    = "..." # value elided in the original question

  vpc_config {
    subnet_ids         = "..." # values elided in the original question
    security_group_ids = "..." # values elided in the original question
  }

  environment {
    variables = {
      environment = "dev"
    }
  }
}

When I run terraform plan it says the Lambda resource needs to be updated because the source_code_hash has changed, even though I didn’t modify the Python codebase (which is version‑controlled in the same repo):

~ module.app.module.lambda.aws_lambda_function.lambda
    last_modified:   "2018-10-05T07:10:35.323+0000" => 
    source_code_hash: "jd6U44lfe4124vR0VtyGiz45HFzDHCH7+yTBjvr400s=" => "JJIv/AQoPvpGIg01Ze/YRsteErqR0S6JsqKDNShz1w78"
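A common first mitigation (not a complete fix, as the rest of this post shows) is to hash the archive that is actually uploaded rather than a single source file; the path below matches the example above:

resource "aws_lambda_function" "lambda" {
  # ... other arguments as above ...

  # Hash the deployed archive itself, so the value reflects the bytes Lambda receives.
  # Cross-machine differences in the archive can still cause spurious diffs.
  source_code_hash = filebase64sha256("dist/subscriber-lambda.zip")
}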

Trigger updates with null_resource

If you need more control or want to trigger updates based on other resources, use a null_resource:

resource "null_resource" "lambda_update" {
  triggers = {
    code_hash = filebase64sha256("my-function.zip")
  }

  provisioner "local-exec" {
    command = "echo 'Code updated, triggering Lambda deployment...'"
  }
}

resource "aws_lambda_function" "example" {
  # ... other configurations
  depends_on = [null_resource.lambda_update]
}

This example triggers an update whenever the hash of my-function.zip changes. Note, however, that depends_on only controls ordering: replacing the null_resource does not by itself redeploy the function, which is one reason the solution later in this post uses replace_triggered_by instead.

Issue with aws_lambda_layer_version

Hi All,

We are on Terraform 0.14.6 and experiencing the following issue.

We are providing source_code_hash for the aws_lambda_layer_version. Terraform accepts it but writes a completely different value to the state file.

In the plan the source_code_hash is FyN0P9BvuTm023dkHFaWvAGmyD0rlhujGsPCTqaBGyw=; however, in the state file it becomes c3forIEso3mJh74PY6HrhFK94GfJvQ4zG9rEIgBCBhw=.

When I check the layer in AWS CLI the "CodeSha256" is c3forIEso3mJh74PY6HrhFK94GfJvQ4zG9rEIgBCBhw=.

Based on this, it does not matter what kind of source_code_hash I provide; I cannot override the hash that AWS computes for the uploaded file.

Terraform configuration:

resource "aws_lambda_layer_version" "loader" {
  layer_name          = "loader"
  compatible_runtimes = ["python3.8"]

  filename         = "lambda_layer.zip"
  source_code_hash = filebase64sha256("lambda_layer.zip")
}

What you can see in all these examples is that a hash is calculated to determine whether the code has changed. That in itself isn’t an issue; the issue is that they all feed source_code_hash a base64‑encoded hash of the generated archive.

How to make your Lambda function deployment cross‑environment friendly

The problem with hashing the archive is that the same source files can produce different archive bytes in different environments (operating systems, user settings such as umask), so the resulting base64‑encoded hash changes even though the code did not.

The following post describes this issue:

The root cause of this is the difference in packaging on different machines and bad documentation. Well, and an asinine design choice on the AWS side.

source_code_hash gets overwritten by AWS‑provided data upon response.

The documentation for source_code_hash (aka output_base64sha256, filebase64sha256) is misleading:

(String) The base64‑encoded SHA256 checksum of output archive file.

Why would you even want to base64‑encode a hash? The purpose of base64 encoding is to make binary data printable, but a SHA‑256 hash is already printable (hex).

What actually happens is:

  1. Compute the SHA‑256 of the archive.
  2. Take the resulting binary digest, treat its bytes as raw data, and then base64‑encode that binary blob:
sha256sum lambda.zip | cut -d' ' -f1 | xxd -r -p | base64
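In Terraform terms, this is exactly what filebase64sha256 returns; a minimal check, assuming lambda.zip sits next to the configuration:

output "lambda_zip_hash" {
  # Base64 encoding of the raw SHA-256 digest bytes, matching the shell pipeline above.
  value = filebase64sha256("lambda.zip")
}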

The problem is that recent zip versions store file permissions, and different umask values on different machines result in different permissions, which in turn produce different archives with different hashes.

When you’re in a team where both Windows and macOS/Linux are being used, you have an additional challenge, because the filesystems (and thus the path to the archive) differ quite a lot.

Getting it to work

After some tinkering, I arrived at the following solution.


Deploying Lambda Functions with Automatic Redeploy on Source Changes

When I supply the Lambda function code as a directory (containing the required file(s)), I create an archive file from that directory using the archive_file data source.

1. Generate a random UUID that triggers a redeploy

First, we create a random UUID based on all files (excluding ZIP files) in the source directory (and its sub‑directories). An MD5 hash is calculated for each file; if any of those hashes change, the UUID changes and forces a redeploy of the aws_lambda_function resource.

# Create a random UUID which is used to trigger a redeploy of the function.
# The MD5 hash for each file (except ZIP files) is calculated and, if any of
# them changes, the new UUID triggers a redeploy of the
# aws_lambda_function resource `lambda_function`.
# We cannot rely on a base64 hash of the archive, because the archive bytes
# are environment dependent.
resource "random_uuid" "lambda_function" {
  keepers = {
    for filename in fileset("${path.root}/lambda_function/", "**") :
    filename => filemd5("${path.root}/lambda_function/${filename}")
    if !endswith(filename, ".zip")
  }
}

Note: MD5 is used here only for change detection, not for cryptographic purposes. You could replace it with SHA256 or SHA512 if desired (the extra cost is usually negligible).
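A minimal sketch of that variant, assuming the same directory layout as above:

resource "random_uuid" "lambda_function" {
  keepers = {
    # Same change detection as before, but with a SHA-256 digest per file.
    for filename in fileset("${path.root}/lambda_function/", "**") :
    filename => filesha256("${path.root}/lambda_function/${filename}")
    if !endswith(filename, ".zip")
  }
}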

2. Create the ZIP archive

We archive the function directory to a location that is ignored by .gitignore.

# Create an archive file of the function directory
data "archive_file" "lambda_function" {
  type        = "zip"
  source_dir  = "${path.root}/lambda_function"
  output_path = "${path.root}/lambda_output/${var.function_name}.zip"
}
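For example, adding lambda_output/ to the repository's .gitignore keeps the generated archives out of version control, so the only thing contributors share is the source directory itself.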

3. Deploy the Lambda function

The random_uuid resource is used as a replacement trigger for the Lambda function.
We also ignore changes to the filename attribute, because the absolute path of the archive can differ between machines.

# Create the Lambda function
resource "aws_lambda_function" "lambda_function" {
  function_name = var.function_name
  role          = aws_iam_role.lambda_execution_role.arn
  handler       = "${var.function_name}.${var.handler_name}"
  runtime       = var.runtime
  timeout       = var.timeout
  architectures = var.architectures

  # Use the filename of the archive file as input for the function
  filename = data.archive_file.lambda_function.output_path

  depends_on = [
    aws_iam_role.lambda_execution_role
  ]

  lifecycle {
    replace_triggered_by = [
      # Trigger a replace of the function when any of the function source files changes.
      random_uuid.lambda_function
    ]
    ignore_changes = [
      # Ignore the source filename of the object itself, because that can change between
      # users/machines/operating systems.
      filename
    ]
  }
}
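The resource above references a handful of inputs and an execution role that aren’t shown here. A minimal sketch of those supporting declarations (the names and defaults are assumptions for illustration, not part of the original module):

variable "function_name" {
  type = string
}

variable "handler_name" {
  type    = string
  default = "handler"
}

variable "runtime" {
  type    = string
  default = "python3.12"
}

variable "timeout" {
  type    = number
  default = 30
}

variable "architectures" {
  type    = list(string)
  default = ["x86_64"]
}

# A basic execution role that the Lambda service can assume.
resource "aws_iam_role" "lambda_execution_role" {
  name = "${var.function_name}-execution-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}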

After applying this configuration, running the same code across different environments will not cause unexpected redeployments.

4. Deploying Lambda Layers (with an intermediate S3 object)

The same pattern works for Lambda layers. The difference is that a layer version is built from an S3 object, which is replaced whenever source files change.

# Random UUID for the layer (change detection)
resource "random_uuid" "lambda_layer" {
  keepers = {
    for filename in setunion(
      toset([for fn in fileset("${path.root}/lambda_layer/", "**") : fn if !endswith(fn, ".zip")])
    ) :
    filename => filemd5("${path.root}/lambda_layer/${filename}")
  }
}

# Archive the layer directory
data "archive_file" "lambda_layer" {
  type        = "zip"
  source_dir  = "${path.root}/lambda_layer"
  output_path = "${path.root}/lambda_output/${var.layer_name}.zip"
}

# Store the archive in S3
resource "aws_s3_object" "this" {
  depends_on          = [data.archive_file.lambda_layer]
  key                 = join("/", [for x in [var.s3_key, "${var.layer_name}.zip"] : x if x != null && x != ""])
  bucket              = var.s3_bucket
  source              = data.archive_file.lambda_layer.output_path
  checksum_algorithm  = "SHA256"

  lifecycle {
    replace_triggered_by = [
      random_uuid.lambda_layer
    ]
    ignore_changes = [
      # Ignore the source of the object itself, because that can change between machines/operating systems
      source
    ]
  }
}

# Create the Lambda layer version
resource "aws_lambda_layer_version" "lambda_layer" {
  layer_name          = var.layer_name
  compatible_runtimes = [var.runtime]
  source_code_hash    = aws_s3_object.this.checksum_sha256
  s3_bucket           = aws_s3_object.this.bucket
  s3_key              = aws_s3_object.this.key
}
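To actually use the layer, attach its version ARN to the function from step 3; the layers argument is the standard way to wire the two together:

# Attach the layer to the function (sketch; other arguments as in step 3).
resource "aws_lambda_function" "lambda_function" {
  # ... configuration from step 3 ...
  layers = [aws_lambda_layer_version.lambda_layer.arn]
}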

5. Why this matters

Running IaC changes through a pipeline (as you should for production and staging) already avoids machine‑to‑machine differences; for environments that are applied from contributor machines, the approach above eliminates the false plan/apply changes those differences would otherwise cause.

The popular Lambda module by Anton Babenko uses base64 hashes and the archive filename, which can lead to the issues described above. With the approach shown here, those problems are avoided.

PR Update

I’m working on a PR to fix the base64 handling in that module.

Conclusion

The goal of this post is to show you how to tackle (at least) two possible issues you might encounter when deploying Lambda functions and/or layers using Terraform.

I hope this provides insight into the causes of these issues and helps you make an informed decision on how to address them.
