The Most Frequent Habit of WhatsApp Users (And How to Handle It)

Published: February 26, 2026 at 10:28 PM EST
9 min read
Source: Dev.to

AWS Specialist Solutions Architect
Applied AI @ AWS
Opinions expressed are solely my own and do not express the views or opinions of my employer.

Overview

Learn how to optimize WhatsApp‑to‑Amazon Connect integrations by buffering rapid consecutive messages into a single coherent message. This step‑by‑step guide covers the full architecture using:

  • AWS CDK
  • DynamoDB Streams (with a tumbling window)
  • AWS Lambda
  • AWS End User Messaging Social
  • Amazon Connect

When customers reach out via WhatsApp they rarely send a single message. They type fast:

Hello
I need help
with my order
P12345

Each of those messages triggers a separate webhook event, and without any optimization each one creates a separate chat message in Amazon Connect. The result?

  • A fragmented conversation
  • A confused agent
  • Unnecessary costs

In AI‑powered chats we usually prevent users from sending additional messages while the agent is still processing. With asynchronous, programmatic messages we can’t control that, but we can control how long we wait before answering. This post applies not only to WhatsApp‑to‑Connect scenarios but to any chat channel that faces the same challenge (SMS, social‑media DMs, etc.).

You’ll learn how to solve this with a message‑buffering layer that aggregates rapid consecutive WhatsApp messages into a single, coherent message before forwarding them to Amazon Connect.

Code repository:

Architecture Summary

A buffering layer between AWS End User Messaging Social and Amazon Connect that:

  1. Captures incoming WhatsApp messages in a DynamoDB table.
  2. Uses DynamoDB Streams with a tumbling window to buffer messages.
  3. Aggregates consecutive text messages from the same sender into one.
  4. Forwards the combined message to Amazon Connect as a single chat message.

Result: Agents see a clean, natural conversation instead of a flood of fragmented messages.

Data Flow

  1. WhatsApp → AWS End User Messaging Social → publishes to an SNS topic.
  2. Lambda on_raw_messages stores each message in DynamoDB table raw_messages.
  3. DynamoDB Streams trigger Lambda message_aggregator using a tumbling window as a buffer.
  4. The aggregator groups, sorts, and concatenates text messages from the same sender.
  5. The aggregated message is forwarded to the WhatsApp event handler, which creates/updates the Amazon Connect chat session.

Problem Illustration

When users send multiple messages quickly:

Message #   Content
1           Hello
2           I need help
3           with my order
4           P12345

Without buffering each message becomes a separate Amazon Connect chat message, leading to:

  • Fragmented conversation that’s hard for agents to follow.
  • Higher costs (each message is billed individually).
  • Multiple downstream Lambda invocations.

DynamoDB Table: raw_messages

  • Partition key: from (sender phone number)
  • Sort key: id (message ID)
  • TTL: enabled for automatic cleanup
  • Streams: enabled with a tumbling window

Using from as the partition key guarantees that messages from the same user are stored together and fall into the same shard, ensuring sequential processing by the stream.
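
To see why the partition-key choice matters, here is a simplified, dependency-free model of hash-based shard assignment. DynamoDB's internal hashing is different, but the guarantee is the same: every item with the same partition key lands in the same shard.

```python
import hashlib

def shard_for(partition_key: str, num_shards: int = 4) -> int:
    """Toy model of hash-based shard assignment: a stable hash of the
    partition key, reduced modulo the shard count."""
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# All messages from one sender deterministically map to one shard...
assert shard_for("+15550001111") == shard_for("+15550001111")
# ...while different senders may spread across shards
print(shard_for("+15550001111"), shard_for("+15550002222"))
```

Because the stream processes each shard in order, this is what makes per-sender sequential processing possible downstream.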

Lambda: Store Raw Messages

import json, decimal, os, boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(os.getenv('RAW_MESSAGES_TABLE'))

def lambda_handler(event, context):
    for record in event.get("Records", []):
        sns = record.get("Sns", {})
        # The SNS message body is JSON, and the WhatsApp webhook entry
        # inside it is a second, doubly-encoded JSON string
        sns_message = json.loads(sns.get("Message", "{}"), parse_float=decimal.Decimal)
        webhook_entry = json.loads(
            sns_message.get("whatsAppWebhookEntry", "{}"),
            parse_float=decimal.Decimal  # DynamoDB rejects Python floats
        )

        for change in webhook_entry.get("changes", []):
            value = change.get("value", {})
            metadata = value.get("metadata", {})

            for message in value.get("messages", []):
                # "from" and "id" in the WhatsApp payload become the
                # table's partition and sort keys
                item = message.copy()
                item["metadata"] = metadata
                item["messaging_product"] = value.get("messaging_product")
                table.put_item(Item=item)

    return {'statusCode': 200}
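
For local testing, the doubly-encoded payload the handler unwraps can be reproduced without AWS. The field values below are made up, but the nesting mirrors the parsing above:

```python
import json

# Miniature SNS event carrying a doubly-encoded WhatsApp webhook entry
# (illustrative values; structure matches what the Lambda parses)
webhook_entry = {
    "changes": [{
        "value": {
            "messaging_product": "whatsapp",
            "metadata": {"display_phone_number": "15550009999"},
            "messages": [{
                "from": "15550001111",
                "id": "wamid.TEST",
                "type": "text",
                "text": {"body": "Hello"},
            }],
        }
    }]
}
sns_event = {
    "Records": [{
        "Sns": {"Message": json.dumps({"whatsAppWebhookEntry": json.dumps(webhook_entry)})}
    }]
}

# Unwrap with the same two json.loads calls the Lambda performs
inner = json.loads(sns_event["Records"][0]["Sns"]["Message"])
entry = json.loads(inner["whatsAppWebhookEntry"])
message = entry["changes"][0]["value"]["messages"][0]
print(message["from"], message["text"]["body"])  # 15550001111 Hello
```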

Buffering Strategy: Tumbling Window

The tumbling window is the key to the buffering strategy. DynamoDB Streams trigger the message_aggregator Lambda, but instead of invoking it for every single record, the window waits a configurable number of seconds (default 20 s) before invoking the function with all accumulated records in that window.

  • One Lambda per shard → messages from the same user within the window are processed together.
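
The window semantics can be sketched with a small, dependency-free model. This is a simplification (real windows are managed by the service and records arrive via the stream), but it shows how fixed, non-overlapping intervals group rapid messages:

```python
def assign_to_windows(events, window_seconds=20):
    """Toy model of tumbling windows: fixed, non-overlapping intervals.
    events is a list of (epoch_seconds, payload) tuples; each event
    falls into exactly one window."""
    windows = {}
    for ts, payload in events:
        windows.setdefault(ts // window_seconds, []).append(payload)
    return [windows[key] for key in sorted(windows)]

events = [(100, "Hello"), (105, "I need help"),
          (112, "with my order"), (135, "P12345")]
# The first three fall in one 20-second window; the last opens a new one
print(assign_to_windows(events))  # [['Hello', 'I need help', 'with my order'], ['P12345']]
```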

Aggregator Lambda

The aggregator groups messages by sender, sorts them by timestamp, and concatenates consecutive text messages with newlines. Non‑text messages (images, audio, documents, etc.) are preserved as separate items.

Input Example

Message 1: Hello
Message 2: I need help
Message 3: with my order
Message 4: P12345

Output (single aggregated message)

Hello
I need help
with my order
P12345

Core Aggregation Logic

import os, json, boto3
from boto3.dynamodb.types import TypeDeserializer

lambda_client = boto3.client('lambda')
deserializer = TypeDeserializer()

def deserialize_dynamodb(image):
    return {k: deserializer.deserialize(v) for k, v in image.items()}

def aggregate_all_messages(records):
    # Group by sender
    grouped = {}
    for rec in records:
        sender = rec['from']
        grouped.setdefault(sender, []).append(rec)

    aggregated = []
    for sender, msgs in grouped.items():
        # Sort by timestamp (or id) to preserve order
        msgs.sort(key=lambda x: x.get('timestamp') or x.get('id'))
        # Concatenate consecutive text messages (the WhatsApp payload
        # nests the body under message["text"]["body"])
        text_parts = [m.get('text', {}).get('body', '') for m in msgs if m.get('type') == 'text']
        aggregated_body = "\n".join(text_parts)

        # Preserve non‑text items (optional handling)
        non_text = [m for m in msgs if m.get('type') != 'text']

        agg_msg = {
            "from": sender,
            "aggregated_body": aggregated_body,
            "original_messages": msgs,
            "non_text_attachments": non_text
        }
        aggregated.append(agg_msg)

    return aggregated

def lambda_handler(event, context):
    raw_records = event.get("Records", [])
    records = []

    for record in raw_records:
        if record.get("eventName") == "INSERT":
            new_image = record.get("dynamodb", {}).get("NewImage", {})
            deserialized = deserialize_dynamodb(new_image)
            records.append(deserialized)

    if not records:
        return {"state": event.get('state', {})}

    aggregated = aggregate_all_messages(records)

    # Forward each aggregated message to the WhatsApp event handler
    for agg in aggregated:
        lambda_client.invoke(
            FunctionName=os.environ['WHATSAPP_EVENT_HANDLER'],
            InvocationType='Event',  # asynchronous, fire-and-forget
            Payload=json.dumps(agg, default=str)  # default=str handles Decimal values from the stream
        )

    return {"state": event.get('state', {})}
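
Before deploying, the core grouping step can be sanity-checked locally with a dependency-free sketch of the same logic (simplified records, no DynamoDB types):

```python
def aggregate_text(messages):
    """Minimal sketch of the aggregator's core step for local testing:
    group by sender, sort by timestamp, join text bodies with newlines."""
    grouped = {}
    for m in messages:
        grouped.setdefault(m["from"], []).append(m)
    out = {}
    for sender, msgs in grouped.items():
        msgs.sort(key=lambda m: m["timestamp"])
        out[sender] = "\n".join(
            m["text"]["body"] for m in msgs if m.get("type") == "text"
        )
    return out

msgs = [
    {"from": "1555", "timestamp": "100", "type": "text", "text": {"body": "Hello"}},
    {"from": "1555", "timestamp": "101", "type": "text", "text": {"body": "I need help"}},
    {"from": "1555", "timestamp": "102", "type": "text", "text": {"body": "with my order"}},
    {"from": "1555", "timestamp": "103", "type": "text", "text": {"body": "P12345"}},
]
print(aggregate_text(msgs)["1555"])
```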

Final Step: Forward to Amazon Connect

Once aggregated, the Lambda asynchronously invokes the WhatsApp event handler, which creates or updates the Amazon Connect chat session with the combined message. The agent now sees one chat entry instead of a flood of fragmented messages.
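
As a sketch, the chat request could be assembled as below. The field names follow the Amazon Connect StartChatContact API, but the helper itself and the attribute keys are illustrative, not taken from the sample repo:

```python
def build_start_chat_request(instance_id, contact_flow_id, sender, aggregated_body):
    """Hypothetical helper: builds the parameter dict for StartChatContact.
    The aggregated text becomes the chat's initial message."""
    return {
        "InstanceId": instance_id,
        "ContactFlowId": contact_flow_id,
        "ParticipantDetails": {"DisplayName": sender},
        "InitialMessage": {"ContentType": "text/plain", "Content": aggregated_body},
        "Attributes": {"channel": "whatsapp", "customer_number": sender},  # illustrative keys
    }

request = build_start_chat_request(
    "INSTANCE_ID", "CONTACT_FLOW_ID", "+15550001111",
    "Hello\nI need help\nwith my order\nP12345",
)
# boto3.client("connect").start_chat_contact(**request) would then open the chat
```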

Takeaways

  • Buffering rapid consecutive messages reduces chat fragmentation.
  • Tumbling windows on DynamoDB Streams provide a simple, serverless way to batch‑process messages.
  • The pattern works for any chat channel (WhatsApp, SMS, social‑media DMs, etc.).

Feel free to explore the full implementation in the GitHub repo linked above. Happy building!

Benefits of Message Buffering

  • Cleaner conversation flow – Multiple rapid messages appear as a single coherent message.
  • Cost optimization – Fewer downstream messages lower Amazon Connect Chat costs.
  • Automatic cleanup – TTL removes old raw messages automatically.
  • Scalable – DynamoDB Streams integrates with Lambda at high throughput (batch sizes of up to 10 000 records per invocation).
  • Reliable – Stream processing guarantees at‑least‑once delivery (no messages lost).
  • Example scenario – 1 000 raw messages aggregated into 250 messages (4:1 ratio). Messages are answered by a human agent.

Cost Comparison

Component                  Without Buffering   With Buffering   Savings
DynamoDB + Streams         –                   ≈ $0.0013        –
Lambda (all functions)     –                   ≈ $0.00078       –
Buffering Infrastructure   $0.00               ≈ $0.002         –
Inbound API Calls          1 000 calls         250 calls        75 % fewer calls
Connect Chat (In) Cost     $4.00               $1.00            $3.00
Total                      $4.00               ≈ $1.00          ≈ $3.00 (75 % reduction)

Note: The total cost includes both Connect inbound/outbound and End‑User Messaging (EUM) inbound/outbound. In this example we only reduce Amazon Connect Chat inbound messages.

  • Connect Chat cost: $0.004 × msg (in) + $0.004 × msg (out) – see the official pricing page.
  • EUM cost: $0.005 × msg (in) + $0.005 × msg (out) – see the official pricing page.
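
The arithmetic behind the example works out as follows, using only the Connect Chat inbound rate from the pricing note above:

```python
CONNECT_CHAT_IN = 0.004   # $ per inbound Connect chat message (rate above)
messages_without, messages_with = 1000, 250  # example scenario: 4:1 aggregation

cost_without = messages_without * CONNECT_CHAT_IN
cost_with = messages_with * CONNECT_CHAT_IN
savings = cost_without - cost_with
print(cost_without, cost_with, savings)  # 4.0 1.0 3.0
print(f"{savings / cost_without:.0%}")   # 75%
```

The sub-cent buffering infrastructure cost (DynamoDB, Streams, Lambda) is negligible next to the per-message savings.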

Prerequisites

  1. WhatsApp Business Account (WABA) – create a new one or migrate an existing account to AWS.

    • Have or create a Meta Business Account.
    • Open the AWS End User Messaging (EUM) Social console and link your business account via the embedded Facebook portal.
    • Obtain a phone number that can receive SMS/voice verification and add it to WhatsApp.
    • ⚠️ Do not use a personal WhatsApp number.
  2. Amazon Connect instance – if you don’t have one, follow the Amazon Connect setup guide.

  3. Instance identifiers – locate the following ARNs in the Amazon Connect console:

    arn:aws:connect:REGION:ACCOUNT_ID:instance/INSTANCE_ID
    arn:aws:connect:REGION:ACCOUNT_ID:instance/INSTANCE_ID/contact-flow/CONTACT_FLOW_ID
  4. Contact flow – create (or reuse) an inbound contact flow that defines the user experience and publish it.

  5. Region alignment – deploy everything in the same region where your AWS End User Messaging WhatsApp numbers are configured.

Deployment Steps

# Clone the sample repository
git clone https://github.com/aws-samples/sample-whatsapp-end-user-messaging-connect-chat.git
cd sample-whatsapp-end-user-messaging-connect-chat/whatsapp-eum-connect-chat

# Follow the CDK Deployment Guide (README.md) to deploy the stack

After deployment, update the SSM parameter /whatsapp_eum_connect_chat/config with your Amazon Connect details:

{
  "instance_id": "",
  "contact_flow_id": "",
  "chat_duration_minutes": 60,
  "ignore_reactions": "yes",
  "ignore_stickers": "yes"
}

Parameter Reference

Parameter               Description
instance_id             Your Amazon Connect Instance ID
contact_flow_id         ID of the inbound contact flow for chat
chat_duration_minutes   How long the chat session stays active (default = 60)
ignore_reactions        Whether to ignore WhatsApp reactions (default = yes)
ignore_stickers         Whether to ignore WhatsApp stickers (default = yes)
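
A hypothetical validation helper (not part of the sample repo) can make the required/optional split explicit before the stack reads the SSM value:

```python
import json

REQUIRED = {"instance_id", "contact_flow_id"}
DEFAULTS = {"chat_duration_minutes": 60, "ignore_reactions": "yes", "ignore_stickers": "yes"}

def load_config(raw: str) -> dict:
    """Hypothetical helper: parse the SSM parameter value, fail fast on
    missing required keys, and fill in the documented defaults."""
    config = json.loads(raw)
    missing = [k for k in REQUIRED if not config.get(k)]
    if missing:
        raise ValueError(f"missing required config keys: {missing}")
    return {**DEFAULTS, **config}

cfg = load_config('{"instance_id": "abc", "contact_flow_id": "def"}')
print(cfg["chat_duration_minutes"])  # 60
```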

Connect the Stack to AWS End‑User Messaging

  1. Retrieve the SNS topic ARN

    • Open AWS Systems Manager → Parameter Store.
    • Copy the value of /whatsapp_eum_connect_chat/topic/in (it starts with arn:aws:sns).
  2. Configure the destination in the EUM Social console

    • Choose Destination → Amazon SNS.
    • Paste the copied Topic ARN.

Optional Tweaks

  • Buffer window – default is 20 seconds. To change it, edit BUFFER_IN_SECONDS in config.py and redeploy:

    BUFFER_IN_SECONDS = 20  # modify as needed (seconds)
  • Testing – Open the Amazon Connect Contact Control Panel (CCP), send a WhatsApp message to the EUM number, and observe rapid messages being aggregated into a single message in Connect.

  • Further enhancements

    • Adjust the buffer window per use‑case (shorter for real‑time, longer for cost savings).
    • Add a Dead‑Letter Queue for failed stream processing.
    • Implement custom aggregation logic (e.g., group images together).
    • Combine with the Agent‑Initiated WhatsApp solution for full bidirectional communication.
  • Project Repository
  • Amazon Connect Administrator Guide
  • Amazon Connect API Reference
  • AWS End User Messaging Social User Guide
  • DynamoDB Streams Developer Guide
  • Build scalable, event‑driven architectures with Amazon DynamoDB and AWS Lambda

