Microsoft's AI Read Executives' Confidential Emails for a Month. Microsoft's Security Tools Were Supposed to Stop It.
Source: Dev.to
Overview
A bug tracked as CW1226324 allowed Microsoft 365 Copilot to bypass Data Loss Prevention (DLP) policies and summarize emails marked “Confidential” in users’ Sent Items and Drafts folders. The flaw was active from at least January 21, 2026. Microsoft disclosed it publicly in mid‑February and began rolling out a patch nearly a month after the breach started.
Bug Details
Products involved
- Microsoft Information Protection – applies sensitivity labels such as “Confidential,” “Highly Confidential,” and “Internal Only” to documents and emails.
- Copilot – the AI assistant embedded across Microsoft 365 that reads, summarizes, and acts on enterprise data.
What went wrong
- The DLP enforcement layer failed to block Copilot from accessing sensitivity‑labeled emails that resided in Sent Items and Drafts.
- Copilot could read, summarize, and surface the contents of those emails in generated responses, even though the labels indicated they should be off‑limits.
- The sensitivity labels themselves existed; Copilot simply did not check them for the affected folder locations.
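The failure mode described above can be illustrated with a minimal sketch. Everything here is hypothetical (folder names, label names, and function names are assumptions, not Microsoft's actual implementation); it only shows the general class of bug: an enforcement check scoped to a folder allowlist, so items in unlisted folders bypass the label check entirely.

```python
from dataclasses import dataclass

@dataclass
class Email:
    folder: str       # e.g. "Inbox", "SentItems", "Drafts"
    sensitivity: str  # e.g. "Confidential", or "" for unlabeled
    body: str

# Folders the (buggy) enforcement layer actually checked.
# SentItems and Drafts are missing -- the gap described above.
ENFORCED_FOLDERS = {"Inbox", "Archive"}
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

def copilot_may_read_buggy(email: Email) -> bool:
    """Flawed check: labels are only consulted for enforced folders."""
    if email.folder in ENFORCED_FOLDERS:
        return email.sensitivity not in BLOCKED_LABELS
    return True  # unlisted folders bypass DLP entirely

def copilot_may_read_fixed(email: Email) -> bool:
    """Corrected check: the label governs access regardless of folder."""
    return email.sensitivity not in BLOCKED_LABELS

draft = Email("Drafts", "Confidential", "quarterly plan")
print(copilot_may_read_buggy(draft))  # True  -- silent bypass
print(copilot_may_read_fixed(draft))  # False -- label enforced everywhere
```

The point of the sketch is that the labels existed and the policy logic worked; the bug class is purely in *where* the policy was applied.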
Impact
- Microsoft has not disclosed how many organizations or users were affected, nor whether any confidential data was actually surfaced to unauthorized users.
- The risk was that authorized Copilot users could receive fragments of confidential content they were not permitted to see, effectively turning the AI into an unintentional insider.
- No data was exfiltrated in the traditional sense, but the silent bypass persisted for about a month.
Context Within Enterprise AI Security
- This incident is the third major case where enterprise AI tools have bypassed their own access‑control frameworks.
- Palo Alto Networks’ Unit 42 2026 Global Incident Response Report (based on 750+ real incidents) found:
- 99% of cloud identities held excessive permissions.
- Average breach‑to‑exfiltration time compressed to 72 minutes (down from 285 minutes).
- Identity weaknesses contributed to nearly 90% of investigated incidents.
- While 62% of organizations experienced a deepfake‑related cyberattack in the last year, the Copilot bug represents a different threat: an internal tool acting with more authority than the human it assists.
Immediate Fix
- Microsoft released a patch that restores DLP checks for Copilot in the affected folders.
- The patch is rolling out now, and the company is monitoring the rollout until it is complete.
Recommendations
Short‑Term Actions
- Audit which sensitivity‑labeled content Copilot can currently access.
- Use Microsoft Purview compliance tools to identify any gaps, while recognizing that the DLP layer itself was the failure point.
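As a starting point for the audit above, the tally can be sketched generically. This is a hypothetical helper, not a Purview API: it assumes you have already exported mail items as `(folder, label)` pairs and have some predicate for whether Copilot can currently reach a folder; it then counts labeled content that is Copilot-accessible, per folder and label.

```python
from collections import Counter

def audit_copilot_exposure(items, copilot_can_reach):
    """Tally labeled items that the given reachability predicate exposes.

    items: iterable of (folder, sensitivity_label) pairs from an export.
    copilot_can_reach: callable(folder) -> bool, your current best model
    of which folders the assistant can read.
    """
    exposure = Counter()
    for folder, label in items:
        if label and copilot_can_reach(folder):
            exposure[(folder, label)] += 1
    return dict(exposure)

# Toy export; a worst-case predicate (reach everything) matches the bug window.
items = [("SentItems", "Confidential"), ("Drafts", "Confidential"),
         ("Inbox", ""), ("SentItems", "Internal Only")]
report = audit_copilot_exposure(items, lambda folder: True)
print(report)
# {('SentItems', 'Confidential'): 1, ('Drafts', 'Confidential'): 1,
#  ('SentItems', 'Internal Only'): 1}
```

Running the tally with a "reach everything" predicate gives the worst-case exposure during the bug window; rerunning it with a post-patch predicate shows what the fix actually closed off.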
Long‑Term Structural Changes
- Shift from a broad‑read, policy‑filtering permission model (inherited from search) to a model where AI assistants only access content at the moment of an explicit user request, performing real‑time policy checks.
- Although architecturally more expensive, this approach prevents silent DLP bypasses from turning AI assistants into insider threats.
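The request-driven model above can be sketched in a few lines. All names here are illustrative assumptions: the idea is simply that content is fetched only in response to an explicit user request and every item passes a policy check at the moment of access, so there is no standing broad-read index for a DLP gap to silently bypass.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Document:
    label: str  # sensitivity label applied at authoring time
    text: str

def request_scoped_answer(
    query: str,
    fetch: Callable[[str], List[Document]],
    is_permitted: Callable[[str, Document], bool],
    user: str,
) -> str:
    """Fetch candidates only for this request, then policy-check each
    item in real time before it can reach the assistant's response."""
    candidates = fetch(query)
    visible = [d for d in candidates if is_permitted(user, d)]
    return " | ".join(d.text for d in visible)

# Illustrative corpus, retrieval, and policy (all hypothetical).
corpus = [Document("Confidential", "merger terms"),
          Document("General", "cafeteria menu")]

def fetch(query: str) -> List[Document]:
    return list(corpus)  # toy retrieval: return everything

def policy(user: str, doc: Document) -> bool:
    return doc.label != "Confidential"  # deny labeled content by default

print(request_scoped_answer("lunch", fetch, policy, "alice"))
# cafeteria menu
```

The design trade-off is latency and cost: the policy check runs on every request instead of once at index time, but a bug in any single check fails closed for one answer rather than silently exposing a whole folder for a month.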
Outlook
Microsoft’s patch addresses the specific vulnerability, but the underlying architecture that allowed a month‑long silent bypass remains in use for many enterprise customers. Organizations should evaluate whether they are comfortable deploying AI assistants built on this model, and consider adopting stricter, request‑driven access controls to mitigate future risks.