Measuring What Matters: Adding Multiple Dimension Sets to AWS Lambda Powertools

Published: January 12, 2026 at 06:36 PM EST
5 min read
Source: Dev.to

Most production systems don’t fail because they lack metrics – they fail because the metrics they do have flatten reality.

Over time I kept seeing the same pattern across teams and architectures:

  • Engineers have plenty of dashboards, yet they struggle to answer simple questions during incidents.
  • The data is there, but it’s aggregated in ways that hide meaningful differences.

This is the problem that led to the addition of multiple dimension sets in AWS Lambda Powertools for Python.

The Real Problem: Aggregation, Not Instrumentation

CloudWatch’s Embedded Metric Format (EMF) has long supported dimensional metrics. In theory, this lets teams slice metrics by environment, region, customer type, or deployment shape. In practice, most teams are forced to choose one aggregation view per metric emission.

You can measure latency by:

  • service + region, or
  • service + environment, or
  • service + customer_type

but not all of them at once unless you emit the same metric repeatedly with different dimension combinations.
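
Before multiple dimension sets existed in the Python runtime, the usual workaround was to emit the same value once per aggregation view. Here is a minimal sketch of that pattern using Powertools' single_metric helper (the metric name, namespace, and dimension values are illustrative):

from aws_lambda_powertools import single_metric
from aws_lambda_powertools.metrics import MetricUnit

# One logical measurement, emitted three times just to get three aggregation views
views = [
    {"service": "payment", "region": "us-east-1"},
    {"service": "payment", "environment": "prod"},
    {"service": "payment", "customer_type": "enterprise"},
]

for dimensions in views:
    with single_metric(
        name="Latency",
        unit=MetricUnit.Milliseconds,
        value=120,
        namespace="ExampleApplication",
    ) as metric:
        for name, value in dimensions.items():
            metric.add_dimension(name=name, value=value)

Each with block flushes its own EMF blob, so the same number ships three times, and every new aggregation path adds yet another emission.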

Why this trade‑off hurts

  • Metrics get duplicated – the same value is sent multiple times with different dimension sets.
  • Code becomes verbose and fragile – every new aggregation path adds more emission logic.
  • CloudWatch costs increase – duplicated emissions add log ingestion volume, and each unique metric‑dimension combination is billed as a separate custom metric.
  • Important aggregation paths are missing when you need them most, reducing visibility during incidents.

The result isn’t just inefficiency; it erodes confidence when incidents occur.

The Feature Request That Captured the Pattern

This limitation wasn’t theoretical. In early 2025 a community member opened a feature request in the AWS Lambda Powertools repository:

“Add support for multiple dimension sets to the same Metrics instance”
(Issue #6198)

Use case

  • A Lambda deployed across multiple regions and environments
  • Metrics that needed to be aggregated by environment, region, and both
  • One metric value, many meaningful views

The request also highlighted an important fact: the EMF specification already supports this. The Dimensions field in EMF is defined as an array of arrays, each inner array representing a different aggregation view. Other Powertools runtimes (TypeScript, Java, .NET) already exposed this capability—Python didn’t.
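
For reference, the relevant part of an EMF payload looks roughly like this (shown here as a Python dict; the namespace, dimension names, and values are illustrative, not output captured from the library):

emf_blob = {
    "_aws": {
        "Timestamp": 1736700000000,
        "CloudWatchMetrics": [
            {
                "Namespace": "ExampleApplication",
                # Each inner list is a separate aggregation view over the same value
                "Dimensions": [
                    ["service", "environment", "region"],
                    ["service", "environment"],
                    ["service", "region"],
                ],
                "Metrics": [{"Name": "SuccessfulRequests", "Unit": "Count"}],
            }
        ],
    },
    # Dimension and metric values live at the top level of the log record
    "service": "booking",
    "environment": "prod",
    "region": "us-east-1",
    "SuccessfulRequests": 100,
}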

From Feature Request to Production‑Ready Implementation

After maintainers aligned on the approach, I picked up the work to implement this feature for the Python runtime. The goal wasn’t to invent something new; it was to:

  • Align Python with the EMF specification.
  • Reach feature parity with other Powertools runtimes.
  • Deliver a clean, intuitive API that feels natural to existing users.

Design Principles

Before touching code, a few constraints guided the implementation:

  • Backward compatibility – existing add_dimension() behavior must remain unchanged.
  • Clear mental model – no hidden side effects or ambiguous APIs.
  • Spec‑aligned output – serialized EMF must match CloudWatch expectations.
  • Production safety – strict validation and cleanup between invocations.

The Resulting API

The final design mirrors the proven pattern from the TypeScript implementation:

  • add_dimension() → adds to the primary dimension set.
  • add_dimensions() → creates a new aggregation view.

Example usage

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="ServerlessAirline", service="booking")

# Create additional dimension sets
metrics.add_dimensions({"environment": "prod", "region": "us-east-1"})
metrics.add_dimensions({"environment": "prod"})
metrics.add_dimensions({"region": "us-east-1"})

# Emit a single metric
metrics.add_metric(
    name="SuccessfulRequests",
    unit=MetricUnit.Count,
    value=100,
)

With a single metric emission, CloudWatch can now aggregate across:

  • environment + region
  • environment only
  • region only

No duplicate metrics, no parallel pipelines, no guesswork.
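
For completeness: in a real function, the snippet above sits alongside a handler decorated with log_metrics, which serializes and prints the single EMF blob when the invocation returns. A minimal sketch (the handler name and response body are illustrative):

from aws_lambda_powertools.utilities.typing import LambdaContext

@metrics.log_metrics  # flushes all metrics and dimension sets as one EMF blob on return
def lambda_handler(event: dict, context: LambdaContext) -> dict:
    metrics.add_metric(name="SuccessfulRequests", unit=MetricUnit.Count, value=1)
    return {"statusCode": 200}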

What Changed Under the Hood

The implementation introduced several key updates:

  • Multiple dimension sets are now tracked internally.
  • EMF serialization has been updated to emit all dimension arrays.
  • Default dimensions are automatically included.
  • CloudWatch’s 30‑dimension limit is now enforced.
  • Duplicate keys are handled deterministically – “last value wins” (illustrated below).
  • Dimension state is cleared safely between invocations.
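
As a quick illustration of the “last value wins” rule (a sketch based on the behavior described above, not an excerpt from the test suite):

# Calling add_dimension twice with the same key keeps only the most recent value
metrics.add_dimension(name="environment", value="staging")
metrics.add_dimension(name="environment", value="prod")  # serialized output carries environment="prod"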

Test Coverage

The change shipped with 13 new tests, covering:

  • Multiple dimension set creation – creation, retrieval, and serialization of several dimension sets
  • Validation & edge cases – duplicate keys, exceeding the 30‑dimension limit, and empty inputs
  • Integration with existing metrics – interaction with other metric features (e.g., counters, timers)
  • High‑resolution metrics compatibility – dimension handling with high‑resolution metric data

All existing tests passed, code‑quality checks succeeded, and maintainers approved the change for merge.

Why This Matters in Production

This feature doesn’t add more metrics—it makes existing metrics more truthful. When teams can express multiple aggregation views at the point of emission:

  • Incident response becomes faster
  • Dashboards become simpler
  • Alerting becomes more precise
  • Engineers trust what they see

Metrics are contracts.
If they can’t reflect how users actually experience the system, they quietly fail.

Multiple dimension sets don’t eliminate operational problems, but they remove a blind spot that many teams didn’t realize they had.

The full implementation, tests, and maintainer review can be found in the merged pull request:
https://github.com/aws-powertools/powertools-lambda-python/pull/7848

Open Source as Shared Problem‑Solving

What made this contribution meaningful wasn’t just the code—it was the process:

  • A well‑documented community feature request
  • Maintainer collaboration across runtimes
  • Alignment with existing specifications
  • A solution designed for long‑term maintainability

This is open source at its best: turning recurring operational pain into shared infrastructure.

Measuring What Actually Matters

Reliability isn’t about collecting more data; it’s about choosing the signals that deserve to exist.

This change helps teams measure systems the way users experience them — not just the way dashboards prefer. And that difference matters.

If you’re duplicating EMF emissions just to get different aggregation views, this should make your metrics simpler, clearer, and more reliable.

If you run into edge cases, open an issue.

That’s how this ecosystem keeps improving.
