Europe's New War on Privacy

Published: November 29, 2025 at 04:10 PM EST

Source: Hacker News

Background

Chat Control, formally the Child Sexual Abuse Regulation, was first proposed by the European Commission in 2022. The original draft would have required email and messenger providers to scan private communications, including end‑to‑end encrypted ones, to detect child sexual abuse material. While presented as a tool to combat horrific crimes, critics warned it could become a blueprint for generalized surveillance, giving states and EU institutions the ability to scan every private message. A public consultation showed that over 80% of respondents opposed applying such obligations to encrypted communications.

Recent Developments

In May 2024 the Commission re‑presented the proposal. Several member states—including Germany, Poland, Austria, and the Netherlands—objected. Denmark, holding the rotating presidency of the Council of the European Union, drafted a revised version dubbed “Chat Control 2.0.” The new text removes the requirement for mandatory on‑device detection, making scanning “voluntary” for providers.

Coreper (the Committee of Permanent Representatives), the EU’s most powerful yet least visible legislative body, quietly approved the revised version, paving the way for Council adoption as early as December 2025. Digital‑rights campaigner and former MEP Patrick Breyer described the manoeuvre as a “deceptive sleight of hand” that bypasses meaningful democratic debate.

Problematic Features

  1. “Voluntary” mass scanning – The revised text encourages platforms to perform large‑scale scanning on a lasting basis, a practice previously permitted only under a temporary derogation.
  2. Mandatory age‑verification – The proposal introduces compulsory age‑verification systems for app stores and private messaging services, effectively outlawing anonymous communication.

Expert Criticism

  • An open letter signed by 18 leading European cybersecurity and privacy academics warned that the proposal poses high societal risks with unclear benefits for children.
  • The letter highlighted the expansion of “voluntary” scanning, including AI‑driven analysis of “grooming” behaviour. Current AI systems cannot reliably distinguish innocent conversation from abusive intent, leading to massive false‑positive rates.
  • Patrick Breyer emphasized that AI cannot reliably differentiate flirtation, sarcasm, or harmless family chat from criminal grooming, creating a digital witch‑hunt that subjects the entire population to suspicion.
  • German federal police report that roughly half of all existing voluntary reports are criminally irrelevant, while the Swiss Federal Police estimate that 80% of machine‑reported content is not illegal. Extending scanning to encrypted content would dramatically increase these false‑positive risks.
  • Article 4 of the compromise proposal obliges providers to implement “all appropriate risk mitigation measures,” a clause that could be used to pressure encrypted messaging services (e.g., WhatsApp, Signal, Telegram) into on‑device scanning before encryption.
  • The Electronic Frontier Foundation warned that this creates a permanent security infrastructure that could become universal. Meta, Google, and Microsoft already scan unencrypted content voluntarily; extending this to encrypted content would require only technical adjustments.
  • A “voluntary” option can quickly become de‑facto compulsory as platforms face reputational, legal, and market pressure to cooperate with authorities.
  • The impact is global: platforms that remain in the EU would have to scan conversations of all users, including those outside the bloc, compromising the privacy of anyone who chats with an EU resident.

Age‑Verification Concerns

  • Critics argue that mandatory age verification is technologically unworkable and inherently invasive. Implementations would rely on biometric and behavioural data, increasing the collection of sensitive personal information.
  • Requiring official identity documents would exclude millions who lack digital IDs or are unwilling to provide such documentation, effectively ending anonymous online communication.

Conclusion

While the removal of mandatory on‑device detection is a modest improvement, the revised “Chat Control 2.0” still normalises large‑scale, state‑mandated monitoring of private communications and threatens to eliminate anonymity online. The proposal’s reliance on “voluntary” scanning, AI‑driven grooming detection, and mandatory age verification raises profound privacy, security, and human‑rights concerns that extend far beyond the EU’s borders.
