The Intent-Verification Gap in CI/CD: Why Authentication Fails Under Real Attacks
Source: Dev.to
Overview
Modern CI/CD pipelines are built on a deceptively simple assumption:
If an action originates from a valid session token, it must originate from valid human intent.
The assumption feels intuitive. Engineers authenticate using SSO, receive session tokens, and those tokens authorize deployments to production. If the token is valid and the user has the correct role, the system proceeds.
SolarWinds, Codecov, and Log4j demonstrated that this assumption is false in practice.
In all three cases, systems behaved “correctly” from an authorization perspective:
- Credentials were valid
- Tokens were legitimate
- Pipelines executed as designed
Yet catastrophic outcomes occurred.
The Intent‑Verification Gap
This article introduces what I call the Intent‑Verification Gap: the structural failure of modern CI/CD security models to distinguish possession of credentials from conscious human intent. The gap is not theoretical—it is the attack surface exploited by real‑world Advanced Persistent Threats (APTs).
Stochastic Trust Model
Most CI/CD pipelines operate under a stochastic trust model:
- A user authenticates at some point in time.
- A session token persists for hours.
- Actions taken during that window are assumed to reflect ongoing user intent.
This model is probabilistic. It assumes that during the token’s lifetime the user remains in control of their device, network, and execution environment. Modern threat models break that assumption.
Once malware compromises the endpoint, the system cannot distinguish between:
- A human intentionally deploying code
- Malware using the same token to deploy malicious artifacts
From the pipeline’s perspective, both are indistinguishable: the signature is valid, the role is correct, and the authorization check passes. This is not a bug in implementation—it is a flaw in the trust model itself.
Authentication vs. Intent
In security terminology we separate:
| Concept | Question |
|---|---|
| Authentication (AuthN) | Who are you? |
| Authorization (AuthZ) | Are you allowed to do this? |
Neither AuthN nor AuthZ answers the third, more important question:
Did the human consciously intend to perform this specific action at this specific moment?
Typical Pipeline Flow
- Identity Assertion – SSO / token
- Privilege Check – `DEPLOY_PROD` role
- Execution – Production changes
This sequence proves authority, not intent. If malware executes a deployment using a cached token, the system functions “correctly” while failing catastrophically from a security perspective. That is the Intent‑Verification Gap.
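The gap is easy to see in code. Below is a minimal sketch of the typical gate described above: it checks token possession, expiry, and role, and nothing else. The session store, token string, and lifetime are hypothetical, but the structure mirrors most pipelines: a human deploying and malware replaying the same cached token produce identical inputs, so the check cannot tell them apart.

```python
import time

# Hypothetical session store: token -> session metadata.
SESSIONS = {
    "tok-abc123": {"user": "alice", "roles": {"DEPLOY_PROD"}, "issued_at": time.time()},
}

TOKEN_LIFETIME = 8 * 3600  # tokens stay valid for hours after login

def authorize_deploy(token: str) -> bool:
    """A typical AuthN/AuthZ gate: proves possession and role, never intent."""
    session = SESSIONS.get(token)
    if session is None:
        return False  # AuthN: unknown token
    if time.time() - session["issued_at"] > TOKEN_LIFETIME:
        return False  # AuthN: expired token
    return "DEPLOY_PROD" in session["roles"]  # AuthZ: role check

# A human deploying and malware replaying the cached token are
# indistinguishable: both calls see the same token, the same role.
assert authorize_deploy("tok-abc123") is True   # the human
assert authorize_deploy("tok-abc123") is True   # the malware
```

Nothing in this function (or in the real systems it caricatures) asks whether a human chose this action at this moment.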
Real‑World Illustrations
SolarWinds Sunburst
Often framed as a “build system compromise,” the deeper failure was intent verification:
- The build system compiled malicious code.
- It signed the artifact.
- It distributed it to customers.
From the CI/CD pipeline’s perspective, nothing was wrong. The missing question was never asked:
Did a human consciously intend to deploy this specific artifact?
Once the build server was compromised, cryptographic signatures became meaningless. The server signed malware just as happily as it signed legitimate code.
Codecov Breach
The breach persisted for months because there was no immutable forensic trail of what code actually ran in pipelines over time. From the pipeline’s perspective:
- Scripts were downloaded.
- Environment variables were exported.
- Everything executed normally.
Without a tamper‑proof record, we cannot answer:
- What actions were authorized?
- When did they occur?
- What code actually executed?
Security systems that cannot preserve forensic truth cannot reconstruct reality after compromise.
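A tamper-evident record does not require exotic hardware to prototype. One common construction is a hash chain: each log entry commits to the hash of the entry before it, so editing any past entry breaks every hash after it. The sketch below is a minimal, in-memory illustration of that idea (field names are mine, and a real system would anchor the chain in WORM storage or an external transparency log).

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(log: list, action: str, actor: str, payload: str) -> dict:
    """Append a hash-chained entry; editing any earlier entry breaks the chain."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    entry = {
        "ts": time.time(),
        "action": action,
        "actor": actor,
        "payload_sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute every hash; any retroactive edit is detected."""
    prev = GENESIS
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Had Codecov-style pipeline executions been recorded this way, "what code actually executed, and when" would be answerable after the fact rather than lost.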
The Dirty Laptop Hypothesis
The modern developer workstation is hostile territory. A typical laptop runs:
- Browser extensions
- Background daemons
- Package managers
- Chat clients
- Build tools
- Remote‑access agents
Any of these can be compromised. Yet most security systems assume that the same machine can:
- Display the approval UI
- Generate cryptographic signatures
- Safely convey intent
Dirty Laptop Hypothesis: Any general‑purpose computing device used for development must be treated as compromised by default.
If approval and signing occur on the same device as development, malware can manipulate what the human sees while signing something else under the hood—collapsing the trust boundary.
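The what-you-see-is-not-what-you-sign problem can be stated in a few lines. In this sketch (artifact contents are invented for illustration), the compromised laptop displays one artifact's digest while handing the signer a different artifact; the only defense is recomputing the digest on an independent, trusted device and comparing it against what the human actually saw.

```python
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 digest of an artifact's bytes."""
    return hashlib.sha256(artifact).hexdigest()

# On a compromised laptop, the approval UI and the signing path can diverge:
shown_to_human = b"release-1.4.2 (reviewed build)"
actually_signed = b"release-1.4.2 (reviewed build) + implant"

def out_of_band_check(displayed_digest: str, artifact: bytes) -> bool:
    """Run on a separate, trusted device: recompute and compare digests."""
    return displayed_digest == digest(artifact)

# The swap is invisible on the dirty laptop, but a second device catches it:
assert out_of_band_check(digest(shown_to_human), actually_signed) is False
```

This is why the Dirty Laptop Hypothesis pushes signing and approval onto hardware the development machine cannot reach into.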
Why Process‑Based Controls Fail
Industry responses to supply‑chain attacks are typically procedural:
- More approvals
- More policies
- More compliance checklists
- More training
These are process‑based controls. They fail when the underlying execution environment is compromised:
- A compromised compiler does not respect peer review.
- A compromised build server does not honor managerial sign‑offs.
Physics‑Based Security Counter‑Thesis
Security must be rooted in constraints attackers cannot bypass with software alone. Examples:
- Physical presence
- Hardware‑isolated signing (e.g., HSMs)
- Air‑gapped approval channels
- Immutable storage (e.g., WORM drives)
When security depends on physical properties, attackers must cross domains: digital → physical, dramatically increasing attack cost.
Tokens as Blank Checks
Session tokens behave like blank checks:
- They remain valid for hours.
- They can be replayed.
- They can be exfiltrated.
- They can be proxied by malware.
Tokens collapse temporal context: a deployment triggered hours after login carries the same cryptographic weight as one triggered seconds after. This is why token‑based deployment authorization is structurally unsafe when endpoints are assumed hostile.
A deployment should require fresh proof of intent, not inherited authority from an earlier login event.
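What "fresh proof of intent" could look like, in contrast to a blank-check token: an approval minted at the moment of human sign-off, cryptographically bound to one artifact digest and one environment, and expiring within seconds. The MAC-based construction, field names, and 60-second window below are illustrative assumptions, not a reference design; a production system would use hardware-isolated asymmetric signing rather than a shared secret.

```python
import hashlib
import hmac
import time

INTENT_TTL = 60  # seconds: proof of intent expires almost immediately

def approve_deploy(secret: bytes, artifact_sha256: str, environment: str) -> dict:
    """Minted at the moment of human approval, for one artifact, one env."""
    ts = int(time.time())
    msg = f"{artifact_sha256}|{environment}|{ts}".encode()
    return {
        "artifact": artifact_sha256,
        "env": environment,
        "ts": ts,
        "mac": hmac.new(secret, msg, hashlib.sha256).hexdigest(),
    }

def verify_deploy(secret: bytes, proof: dict,
                  artifact_sha256: str, environment: str) -> bool:
    """Reject stale, replayed, or re-targeted approvals."""
    if time.time() - proof["ts"] > INTENT_TTL:
        return False  # fresh intent required, not inherited authority
    if proof["artifact"] != artifact_sha256 or proof["env"] != environment:
        return False  # bound to exactly one artifact and one environment
    msg = f"{proof['artifact']}|{proof['env']}|{proof['ts']}".encode()
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(proof["mac"], expected)
```

A stolen approval is nearly worthless here: it dies in a minute and cannot be redirected to a different artifact or environment, which is exactly what a stolen session token allows.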
Shifting Focus: From Identity to Intent
Modern DevSecOps obsessively answers:
Who is this?
But in compromised environments, identity is irrelevant. What truly matters is:
Did this human consciously authorize this specific action?
Intent Verification in CI/CD Pipelines
- Intent is an action, not a state.
- Identity is a state, not an action.
Security systems that authenticate identity without verifying intent are blind to the most critical failure mode in modern CI/CD pipelines.
Closing the Intent‑Verification Gap
Once you accept the Intent‑Verification Gap, several architectural requirements follow:
- Approval must be per‑action, not per‑session
- Signing must be physically isolated from the development environment
- Authorization must be cryptographically bound to specific artifacts and environments
- Logs must be immutable by design
- Friction must be proportional to risk, not uniform
These principles form the foundation of intent‑verification architectures.
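The last requirement, friction proportional to risk, can be expressed as a policy table rather than prose. The action names and control tiers below are hypothetical examples, not a standard taxonomy; the point is only that low-risk reads keep inherited session auth, while high-risk actions escalate toward physical, per-action controls.

```python
# Hypothetical risk tiers mapping actions to the control each one requires.
RISK_POLICY = {
    "read_logs":        "session_token",        # low risk: inherited auth is fine
    "deploy_staging":   "per_action_approval",  # fresh, bound approval
    "deploy_prod":      "hardware_key_press",   # physical presence required
    "rotate_root_keys": "air_gapped_ceremony",  # highest friction
}

def required_control(action: str) -> str:
    """Friction proportional to risk; unknown actions fail toward more friction."""
    return RISK_POLICY.get(action, "per_action_approval")
```

Uniform friction trains users to click through everything; graded friction reserves ceremony for the actions where intent actually matters.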
Key Takeaways
- Identity is a convenience layer.
- Intent is the security boundary.
Until CI/CD systems treat human intent as a first‑class cryptographic primitive, supply‑chain attacks will continue to bypass controls while passing every compliance check.
The future of CI/CD security is not more dashboards.
It is fewer trust assumptions.