Designing Detection‑as‑Code Without a SIEM

Published: March 10, 2026 at 05:44 PM EDT
3 min read
Source: Dev.to

Most people learn detection engineering inside a SIEM. I wanted to learn it without one.

Not because SIEMs aren’t useful, but because they often hide the real thinking behind dashboards, connectors, and pre‑built rules.

So I built BluePhoenix, a detection‑as‑code lab designed to answer a simple question:

What does detection engineering look like when you remove the platform and focus purely on behaviour, logic, and engineering discipline?

Why I Built This

I wanted to understand detection engineering at its core, without relying on a SIEM to do the heavy lifting. That meant learning how to:

  • Express attacker behaviour as logic
  • Validate detections without a UI
  • Tune signal quality without dashboards
  • Build repeatable engineering patterns instead of one‑off rules

Removing enterprise tooling forced me to confront the mechanics directly. It made detection engineering feel like engineering again.

What Detection‑as‑Code Really Means

In BluePhoenix, detections behave like software:

  • Version‑controlled
  • Reviewed
  • Tested
  • Validated
  • Documented

Each rule is a structured YAML file containing:

  • Clear logic
  • ATT&CK mapping
  • Expected behaviour
  • Test cases
  • Metadata

This makes detections auditable, portable, and maintainable—the same qualities expected in real engineering teams.

The BluePhoenix Approach

I designed the lab around three principles:

  1. MITRE ATT&CK alignment – Every rule maps to a technique, sub‑technique, and behaviour pattern.
  2. Scope boundaries – No “catch‑all” detections; each rule solves one problem well.
  3. Engineering discipline – Rules are modular, structured, and validated before merging, just like production code.

Example: Instead of a broad “suspicious PowerShell” rule, I wrote a focused detection for encoded command execution, mapped to ATT&CK T1059.001, with clear test cases and expected behaviour. Small, precise, and predictable.
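As a sketch, a rule like that could be expressed in a YAML file along these lines. The field names here are illustrative, not BluePhoenix's actual schema, but they cover the pieces listed above: logic, ATT&CK mapping, expected behaviour, test cases, and metadata.

```yaml
# Hypothetical rule file — field names are illustrative only.
id: win_ps_encoded_command
title: PowerShell encoded command execution
attack:
  tactic: TA0002        # Execution
  technique: T1059.001  # Command and Scripting Interpreter: PowerShell
logic:
  process_name: powershell.exe
  cmdline_contains: "-EncodedCommand"
expected_behaviour: >
  Fires once per process-creation event where PowerShell is launched
  with an encoded command; scripts launched with -File do not match.
tests:
  - event: { cmdline: "powershell.exe -EncodedCommand SQBFAFgA" }
    expect: alert
  - event: { cmdline: "powershell.exe -File report.ps1" }
    expect: no_alert
metadata:
  author: example
  created: 2026-03-10
```

Because the test cases live next to the logic, the rule carries its own definition of correct behaviour wherever it goes.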

Why I Didn’t Use a SIEM

A SIEM would have made this easier, but also less honest. I avoided one because:

  • SIEMs create false realism in labs
  • Noise levels are artificial
  • Costs limit experimentation
  • You learn the tool, not the discipline

Without a platform, I had to answer uncomfortable questions directly:

  • What makes a signal meaningful?
  • What makes it noisy?
  • How do you validate logic without dashboards?
  • How do you tune without a query engine?

It was uncomfortable — which is exactly why it worked.
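Answering "how do you validate logic without dashboards?" mostly comes down to treating a detection as a predicate over a log event and running its test cases like unit tests. A minimal sketch, assuming the rule above (the detector function and event shape are my own, not from the project):

```python
# Hypothetical harness: detection logic is a predicate over one log
# event; test cases are (event, expected) pairs. No dashboards needed.
def detect_encoded_powershell(event: dict) -> bool:
    """Match PowerShell launched with an encoded command."""
    cmd = event.get("cmdline", "").lower()
    return "powershell" in cmd and "-encodedcommand" in cmd

TEST_CASES = [
    ({"cmdline": "powershell.exe -EncodedCommand SQBFAFgA"}, True),
    ({"cmdline": "powershell.exe -File report.ps1"}, False),
    ({"cmdline": "notepad.exe"}, False),
]

def run_tests(detector, cases):
    """Return the cases the detector gets wrong (empty list = all pass)."""
    return [(event, want) for event, want in cases if detector(event) != want]

print(run_tests(detect_encoded_powershell, TEST_CASES))  # → []
```

An empty failure list is the whole feedback loop: change the logic, rerun the cases, and you know immediately whether the rule still does what it claims.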

Adding CI to Detections

Every detection goes through CI checks:

  • YAML schema validation
  • Required metadata fields
  • ATT&CK mapping checks
  • Naming conventions
  • Structural consistency

Even static rules benefit from CI because consistency is a security control. If your detection library isn’t predictable, your response pipeline won’t be either.
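The checks above can be sketched as a small linter that runs over each parsed rule. This is an assumed implementation, not BluePhoenix's actual CI: the required field names and the ATT&CK ID pattern are my own choices, and in a real pipeline the dict would come from a YAML loader.

```python
import re

# Assumed schema — the required fields are illustrative, not the project's.
REQUIRED_FIELDS = {"id", "title", "logic", "attack", "tests"}
ATTACK_ID = re.compile(r"^T\d{4}(\.\d{3})?$")  # e.g. T1059 or T1059.001

def lint_rule(rule: dict) -> list[str]:
    """Return CI findings for one parsed rule (empty list = pass)."""
    findings = []
    missing = REQUIRED_FIELDS - rule.keys()
    if missing:
        findings.append(f"missing fields: {sorted(missing)}")
    for tid in rule.get("attack", []):
        if not ATTACK_ID.match(tid):
            findings.append(f"invalid ATT&CK id: {tid}")
    if not rule.get("tests"):
        findings.append("no test cases defined")
    return findings

rule = {
    "id": "win_ps_encoded_command",
    "title": "PowerShell encoded command execution",
    "logic": {"process_name": "powershell.exe", "cmdline_contains": "-EncodedCommand"},
    "attack": ["T1059.001"],
    "tests": [{"cmdline": "powershell.exe -EncodedCommand SQBFAFgA", "expect": True}],
}
print(lint_rule(rule))  # → []
```

Wiring a script like this into CI means a rule with a missing mapping or no test cases never reaches the main branch, which is exactly the predictability argument above.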

What I Learned

Building detections without a SIEM taught me that clarity and discipline matter more than tooling. Behaviour drives good detections, not vendor features. Structure beats volume. Validation is where most detections fail—not in the logic, but in the assumptions behind it.

In production I’d add real telemetry, data‑quality checks, platform‑specific tuning, and automation for enrichment and response. But the core thinking wouldn’t change.

Final Thoughts

Detection engineering isn’t about SIEMs, dashboards, or connectors. It’s about thinking like an attacker and building like an engineer.

  • Tools change.
  • Platforms change.
  • Behaviours don’t.

Full project on GitHub:
