SQS to Lambda vs API

Published: January 9, 2026 at 12:38 AM EST
4 min read
Source: Dev.to

Overview

I’m evaluating whether to have an API service handle DynamoDB stream events directly, instead of using a Lambda function as a proxy. The goal is to reduce the number of “glue” Lambdas we need to build and maintain.

Current Architecture

  1. A client sends DELETE /account/:accountId to the accounts-resource-api.
  2. The API writes to DynamoDB.
  3. A DynamoDB stream triggers a Lambda via EventBridge.
  4. The Lambda pushes messages to an SQS queue.
  5. A worker (another Lambda) reads from the queue and calls DELETE /account/:accountId for each child account.
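The "glue" Lambda in steps 3–4 can be sketched as follows. This is a minimal illustration, not our actual code: the stream record shape follows the DynamoDB Streams event format, but the childAccountIds attribute and the message body layout are assumptions, and the SQS client is injected (in production it would be the AWS SDK).

```javascript
// Pure helper: turn one DynamoDB stream record into SQS message bodies,
// one per child account. The childAccountIds list attribute is an
// illustrative assumption about the table schema.
function childDeleteMessages(record) {
  if (record.eventName !== 'REMOVE') return [];
  const image = (record.dynamodb && record.dynamodb.OldImage) || {};
  const children = ((image.childAccountIds || {}).L || []).map((v) => v.S);
  return children.map((accountId) =>
    JSON.stringify({ action: 'DELETE', accountId })
  );
}

// Handler body: fan the extracted deletions out to the queue. `sqs` is
// an injected client exposing a sendMessage method (standing in for the
// AWS SDK SQSClient) so the logic can be exercised without AWS.
async function enqueueChildDeletes(event, sqs, queueUrl) {
  for (const record of event.Records) {
    for (const body of childDeleteMessages(record)) {
      await sqs.sendMessage({ QueueUrl: queueUrl, MessageBody: body });
    }
  }
}
```

Keeping the record-to-message translation in a pure function makes the forwarding Lambda trivially testable, which is part of why this kind of glue is cheap to maintain even though it is an extra deployable.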

Proposed Architecture

  • Remove the intermediate Lambda.
  • Let the accounts-resource-api subscribe to the DynamoDB stream (EventBridge → SQS) and process the deletions itself.
  • Use an npm library such as sqs‑consumer (Node.js) inside the API to handle SQS polling and message processing.
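A library like sqs-consumer wraps a loop roughly like the sketch below, which is what the API would be taking on: long-poll the queue, hand each message to a handler, and delete it only on success. The client is injected here so the loop can run without AWS; in production it would wrap the SDK's ReceiveMessage/DeleteMessage calls.

```javascript
// One polling iteration of an in-process SQS consumer (a simplified
// sketch of what sqs-consumer does for you).
async function pollOnce(client, queueUrl, handleMessage) {
  const { Messages = [] } = await client.receiveMessage({
    QueueUrl: queueUrl,
    MaxNumberOfMessages: 10,
    WaitTimeSeconds: 20, // long polling keeps empty receives cheap
  });
  for (const msg of Messages) {
    try {
      await handleMessage(msg);
      // Delete only after the handler succeeds.
      await client.deleteMessage({
        QueueUrl: queueUrl,
        ReceiptHandle: msg.ReceiptHandle,
      });
    } catch (err) {
      // Deliberately leave the message in flight: SQS redelivers it
      // after the visibility timeout, and a redrive policy can route
      // repeated failures to a DLQ.
    }
  }
  return Messages.length;
}
```

Even in this stripped-down form, the API now owns visibility-timeout semantics, partial-batch failure, and shutdown draining — responsibilities the Lambda integration handles for free.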

Pros and Cons

| Aspect | Lambda-based Worker | API-direct Processing |
| --- | --- | --- |
| Operational overhead | Separate deployment unit; extra IAM permissions; additional monitoring. | Fewer Lambda functions; less "glue" code. |
| Scalability | Lambda scales automatically with the SQS backlog; each invocation is isolated. | API must handle polling and concurrent processing; scaling depends on the API's autoscaling configuration. |
| Cold start latency | May add latency on first invocation, but usually negligible for short tasks. | No cold start for the API itself, but the polling loop runs continuously, keeping resources warm. |
| Error handling & DLQ | Built-in support for dead-letter queues and retry policies via the SQS → Lambda integration. | Must implement DLQ handling, retries, and back-off logic manually in the API code. |
| Resource consumption | Lambda runs only when there are messages; cost is per-invocation. | API instance may sit idle while polling, incurring compute cost even when no work is present. |
| Observability | CloudWatch logs per Lambda invocation; easy to trace. | Need to add logging, metrics, and tracing inside the API's consumer loop. |
| Security surface | Separate role with least-privilege permissions for the Lambda. | API service needs additional permissions to read from SQS and possibly to write to the DLQ. |

Specific Concerns

Dead‑Letter Queue (DLQ) Management

  • Lambda: With an SQS event source, a redrive policy on the queue automatically moves messages to a DLQ after a configurable number of failed receive attempts (maxReceiveCount); no custom code is required.
  • API: You must write code to catch processing errors, decide when to re‑queue, and explicitly send messages to a DLQ. This adds complexity and potential for bugs.
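To make the asymmetry concrete, here is roughly what the API has to replicate by hand: the maxReceiveCount/redrive behavior that a queue's redrive policy gives you declaratively. ApproximateReceiveCount is a real SQS message attribute; the threshold and the injected client are illustrative assumptions.

```javascript
// Illustrative retry ceiling; a redrive policy would carry this as
// maxReceiveCount instead of application code.
const MAX_ATTEMPTS = 3;

function shouldDeadLetter(message, maxAttempts = MAX_ATTEMPTS) {
  // ApproximateReceiveCount arrives as a string attribute on received
  // messages (when requested) and counts delivery attempts so far.
  const attrs = message.Attributes || {};
  const attempts = Number(attrs.ApproximateReceiveCount || '1');
  return attempts >= maxAttempts;
}

// Called when a handler throws. Either give the message back to SQS for
// another attempt, or copy it to the DLQ and remove it from the source
// queue. `client` stands in for the AWS SDK SQS client.
async function handleFailure(client, queueUrl, dlqUrl, message) {
  if (!shouldDeadLetter(message)) return 'retry';
  await client.sendMessage({ QueueUrl: dlqUrl, MessageBody: message.Body });
  await client.deleteMessage({
    QueueUrl: queueUrl,
    ReceiptHandle: message.ReceiptHandle,
  });
  return 'dead-lettered';
}
```

Note the failure modes this sketch glosses over: a crash between the sendMessage and deleteMessage calls duplicates the message, which is exactly the class of edge case the managed redrive path spares you from.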

Additional Load on the API (Polling)

  • Polling SQS from within the API means the service will maintain long‑running connections to SQS.
  • This can increase CPU and memory usage, especially if the poll interval is aggressive.
  • Autoscaling policies need to account for the baseline load of the consumer loop, not just request traffic.
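One common mitigation for the idle-polling cost is to back the loop off when the queue is empty and wake up immediately when work appears. A minimal sketch; the delay constants are illustrative assumptions, not recommendations:

```javascript
// Compute the pause before the next poll: zero while the queue is busy,
// doubling up to a cap while it is empty.
function nextPollDelayMs(prevDelayMs, receivedCount) {
  const MAX_IDLE_DELAY_MS = 30000; // cap the idle back-off at 30s
  if (receivedCount > 0) return 0;  // busy queue: poll again right away
  const doubled = prevDelayMs === 0 ? 1000 : prevDelayMs * 2;
  return Math.min(doubled, MAX_IDLE_DELAY_MS);
}
```

Combined with SQS long polling (WaitTimeSeconds up to 20), this keeps the baseline CPU and request cost of an in-process consumer low, but it is one more tuning surface the API team now owns.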

Maintenance Overhead

  • A Lambda that simply forwards to an API endpoint is lightweight, but it still requires deployment, versioning, and monitoring.
  • Consolidating the logic into the API reduces the number of deployable units, but shifts the responsibility for reliability, scaling, and error handling onto the API codebase.

Recommendations

  1. Start with Lambda if you need:

    • Simple, out‑of‑the‑box DLQ handling.
    • Automatic scaling based on queue depth.
    • Clear separation of concerns (API handles HTTP requests; Lambda handles background processing).
  2. Consider API‑direct processing only if:

    • Your API already runs on a platform that scales horizontally (e.g., ECS, EKS, Fargate) and can comfortably handle the extra polling load.
    • You are prepared to implement robust retry, back‑off, and DLQ logic in the service.
    • Reducing the number of Lambda functions provides a measurable operational benefit (e.g., cost, deployment complexity).
  3. Hybrid approach – keep a thin Lambda that forwards to a dedicated worker service (e.g., a containerized consumer) rather than the public API. This gives you the scaling benefits of a separate worker while avoiding “glue” Lambdas that call the same API endpoints.

Takeaways

  • Using Lambda as a bridge between DynamoDB streams and SQS is a well‑established pattern with strong built‑in reliability features.
  • Directly embedding SQS consumption in the API is feasible but requires careful handling of DLQs, retries, and scaling.
  • Evaluate the trade‑offs in the context of your team’s operational maturity, cost model, and the expected volume of child‑account deletions.

Feel free to share any experiences you have with either pattern, especially regarding long‑running pollers, DLQ handling, and overall system reliability.
