Solved: What’s the most unexpectedly expensive thing in your Azure bill lately?

Published: February 11, 2026 at 01:39 AM EST
6 min read
Source: Dev.to

Executive Summary

TL;DR: Azure Log Analytics can unexpectedly become a major cloud expense due to default diagnostic settings ingesting excessive data. This guide provides three strategies—quick fixes like setting daily caps, permanent fixes using granular Diagnostic Settings and Data Collection Rules (DCRs), and a “nuclear” architectural split into hot/cold data paths—to effectively manage and reduce ingestion costs.

The Problem

Default diagnostic logging in Azure services often leads to unexpectedly high Log Analytics ingestion costs by sending non‑critical data.

  • Quick Fix – Identify noisy data types with KQL queries and set a temporary daily ingestion cap in the Log Analytics workspace settings.
  • Permanent Fix – Apply granular control over Diagnostic Settings, selecting only valuable log categories, and leverage DCRs for pre‑ingestion transformation and filtering.
  • Nuclear Option – Re‑architect by splitting data streams into:
    • Hot Path – Lean Log Analytics for real‑time needs.
    • Cold Path – Azure Storage Account for cost‑effective long‑term retention.

Azure’s Log Analytics can silently become your biggest cloud expense if not properly managed. This guide offers three practical strategies, from quick fixes to permanent architectural changes, to control runaway data ingestion costs and get your bill back in line.

A Real‑World Story

I remember it like it was yesterday. We’d just rolled out a slick new microservice, auth-svc-prod. One of our sharp junior engineers—let’s call him Ben—did a textbook job setting up monitoring: App Insights, diagnostic settings on every related resource, all pointed to a central Log Analytics workspace.

A month later, finance pinged me on Slack with a screenshot of our Azure bill and a single question:

“What is ‘Log Ingestion’ and why does it cost more than the VMs for our entire production environment?”

Ben hadn’t done anything wrong; he’d followed the documentation perfectly. And that, right there, is the trap.

Default diagnostic logging is the silent killer of Azure budgets. When you stand up a new environment, it’s tempting to click “Send to Log Analytics” and enable all categories. Azure makes it easy, but many services are incredibly chatty by default:

| Service | Noisy Category |
|---|---|
| App Service | Every successful health‑probe ping |
| Azure Firewall | Every allowed packet |
| Storage Account | Every successful read operation |

Individually these are tiny drops of data, but with millions of transactions per hour they become a fire hose aimed directly at your Log Analytics workspace—and you pay for every gigabyte.

The root cause isn’t a bug; it’s a misalignment of priorities. Default settings are optimized for maximum visibility, not cost. It’s up to us, the engineers in the trenches, to tune that fire hose down to a manageable—and valuable—stream of data.

Three Levels of Response

1️⃣ Quick Fix – Apply a Tourniquet

Goal: Cap the cost immediately so you have breathing room.

Step 1 – Find the Noisiest Tables

Run this KQL query in your workspace to see which data types are costing the most:

```kusto
// Find the biggest data hogs (last 30 days)
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize BillableDataGB = sum(Quantity) / 1000 by DataType
| sort by BillableDataGB desc
```
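If a catch-all table such as AzureDiagnostics tops the list, a follow-up query can show which resource type and log category are responsible. This sketch uses the standard `_BilledSize` column (billed bytes per record), which Azure Monitor adds to every table:

```kusto
// Break a noisy table down by resource type and log category
AzureDiagnostics
| where TimeGenerated > ago(30d)
| summarize BilledGB = sum(_BilledSize) / 1e9 by ResourceProvider, Category
| sort by BilledGB desc
```

The categories at the top of this result are the first candidates for filtering in the next section.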

Step 2 – Set a Daily Ingestion Cap

  1. Open your Log Analytics workspace → Usage and estimated costs.
  2. Set a daily ingestion cap.

Warning: The cap is blunt—once reached, Azure stops ingesting all data for the rest of the day. Critical security or error logs could be lost. Use this only as a temporary safety valve.
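While the cap is in place, you can check how close today's billable ingestion is to the limit with another query against the built-in Usage table (Quantity is reported in MB, hence the division):

```kusto
// Billable data ingested since midnight (UTC)
Usage
| where TimeGenerated > startofday(now())
| where IsBillable == true
| summarize IngestedTodayGB = sum(Quantity) / 1000
```

Wiring this into an alert rule gives you a warning before the cap silently drops your logs.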

2️⃣ Permanent Fix – Intentional Data Collection

Now that the immediate bleeding has stopped, it’s time for proper surgery: be intentional about what you collect.

Refine Diagnostic Settings

  • Don’t check “AllMetrics” and “AllLogs”.
  • Do select only categories that provide real business or operational value.

Example comparison

| Resource | “AllLogs” (Expensive) | Filtered (Smart) |
|---|---|---|
| Azure Firewall | All logs → huge volume | Only AzureFirewallApplicationRule & AzureFirewallDnsProxy logs |
| Key Vault | All metrics → unnecessary | Only AuditEvent logs (security/compliance) |

Pro Tip – Use Data Collection Rules (DCRs)

DCRs let you apply a KQL transformation before data is ingested and billed. You can:

  • Drop noisy columns.
  • Filter out entire log entries (e.g., 200 OK health checks).

This is the most powerful tool in your cost‑optimization arsenal.
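As a sketch, a DCR transformation is just a KQL statement applied to the incoming stream, which the rule exposes as `source`. Something like the following would drop successful health-check rows and a verbose column before they are ever billed (the column names are illustrative, modeled on App Service HTTP logs — adapt them to your table's schema):

```kusto
// transformKql inside a Data Collection Rule:
// filter out 200 OK health-probe rows, then drop a noisy column
source
| where not(CsUriStem == "/health" and ScStatus == 200)
| project-away UserAgent
```

Unlike a daily cap, this is surgical: everything you keep still arrives in real time, and everything you drop was data you had already decided has no value.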

3️⃣ Nuclear Option – Hot/Cold Architecture

When even filtered data is too voluminous for a “hot” Log Analytics workspace, rethink the architecture.

Split Data Streams

| Path | Purpose | Storage | Retention |
|---|---|---|---|
| Hot | Real‑time alerting & interactive dashboards (errors, security alerts, KPIs) | Log Analytics (lean) | 30‑90 days |
| Cold | Long‑term compliance & forensic analysis (raw logs) | Azure Storage (Blob/ADLS) | 1 year+ (cheaper) |

Implementation Sketch

```mermaid
flowchart LR
    S[All diagnostic logs] -->|DCR filtering| A
    S --> B
    subgraph HotPath["Hot Path (Log Analytics)"]
        A[High-priority logs] --> LA[Log Analytics Workspace]
    end
    subgraph ColdPath["Cold Path (Storage)"]
        B[All other logs] --> SA[Azure Storage Account]
    end
```

  • Ingest only high‑value data into the hot workspace.
  • Archive the rest to Azure Storage, where you can still query it on demand via Azure Synapse, Azure Data Explorer, or KQL's `externaldata` operator.

Recap

| Level | Action | When to Use |
|---|---|---|
| Quick Fix | Daily ingestion cap + identify noisy tables | Immediate cost bleed, need fast stop‑gap |
| Permanent Fix | Granular Diagnostic Settings + DCRs | Ongoing operations, want sustainable cost control |
| Nuclear Option | Hot/Cold split architecture | Long‑term compliance data volume too high for hot workspace alone |

Cold Path (Cheap)
For everything else—verbose application logs, network flow logs, compliance data—that you need to keep but rarely need to query, send the data directly to an Azure Storage Account instead of Log Analytics. This approach is orders of magnitude cheaper.

When you need to analyze the data, you can:

  • Query it in‑place with Azure Data Explorer, or
  • Re‑hydrate it on‑demand.
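For ad-hoc questions, the KQL `externaldata` operator can read archived blobs directly from a query, with no re-ingestion. The storage URL, SAS token, and column schema below are placeholders for your own archive layout:

```kusto
// Query archived JSON logs in Blob Storage without re-ingesting them
externaldata (TimeGenerated: datetime, Level: string, Message: string)
[
    "https://<storageaccount>.blob.core.windows.net/logs/2026/02/app.json;<SAS token>"
]
with (format = "multijson")
| where Level == "Error"
| take 100
```

You pay query-time compute instead of per-GB ingestion, which is exactly the right trade for data you touch a few times a year.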

This requires a bit more setup, but for large‑scale environments it’s the only sustainable way to keep logging costs under control without sacrificing compliance or visibility.

That surprise bill was a painful but valuable lesson for our team. It forced us to stop treating logging as an afterthought and start treating it as a critical piece of our application architecture. Don’t wait for the finance team to come knocking. Be proactive—check your Usage table today; you might be surprised by what you find.


👉 Read the original article on TechResolve.blog

Support my work
If this article helped you, you can buy me a coffee.
