Microsoft says Copilot was summarizing confidential emails without permission

Published: February 18, 2026 at 04:50 PM EST

Source: Mashable Tech

Overview

A bug in Microsoft 365 Copilot caused the AI assistant to summarize emails that were explicitly labeled as confidential, according to a report from Bleeping Computer. The issue bypassed organizations’ data loss prevention (DLP) policies, which are intended to keep sensitive information out of automated processing.

Affected Feature

The problem specifically impacted Copilot Chat. Microsoft’s documentation described the bug as causing emails with a confidential label to be “incorrectly processed by Microsoft 365 Copilot chat.” The bug affected the “work tab” chat feature, pulling in and summarizing messages from users’ Sent Items and Drafts folders even when those messages carried sensitivity labels designed to block automated access.

Risks

Copilot Chat is a content‑aware AI assistant rolled out to Microsoft 365 apps such as Word, Excel, Outlook, and PowerPoint for enterprise customers. Integrating AI assistants across products introduces new cybersecurity risks, including:

  • Prompt injection attacks
  • Data‑compliance violations

For a broader discussion of these risks, see Mashable’s article on how AI at work is creating new types of cybersecurity risks.

Detection and Tracking

The issue, tracked internally as CW1226324, was first detected on January 21. Bleeping Computer reported that the bug allowed Copilot to summarize emails that should have been off‑limits due to sensitivity labels.

Microsoft’s Response

Microsoft confirmed that a code issue was responsible and began rolling out a fix in early February. The company continues to monitor the deployment and is reaching out to affected users to verify that the patch is effective. Microsoft has not disclosed how many organizations were impacted, noting that the scope may change as the investigation proceeds.

