The Oversight Board says Meta needs new rules for AI-generated content

Published: March 10, 2026 at 06:00 AM EDT
3 min read
Source: Engadget

Background

The Oversight Board is urging Meta to overhaul its rules around AI‑generated content. The board’s latest recommendations follow an AI‑generated video shared last year that claimed to show damaged buildings in Haifa during the Israel‑Iran conflict in 2025. The clip amassed more than 700,000 views and was posted by an account posing as a news outlet that was actually run from the Philippines.

After the video was reported, Meta declined to remove it or add a “high risk” AI label that would have indicated the content had been created or manipulated with AI. The board overturned Meta’s decision not to add the label, noting that the case highlights several shortcomings in the company’s current AI policies.

“Meta must do more to address the proliferation of deceptive AI‑generated content on its platforms, including by inauthentic or abusive networks of accounts and pages, particularly on matters of public interest, so that users can distinguish between what is real and fake,” the board wrote. Meta eventually disabled three accounts linked to the page after the board flagged “obvious signals of deception.”

Board Recommendations

Dedicated AI‑Content Rule

  • Create a separate rule for AI‑generated content, distinct from the misinformation policy.
  • Specify when and how users must label AI content.
  • Outline penalties for violations of the rule.

Improvements to Labeling

  • Revise the current “AI Info” labels, which the board says are “neither robust nor comprehensive enough” to handle the scale and velocity of AI‑generated content, especially during conflicts or crises.
  • Reduce reliance on self‑disclosure and infrequent escalated review.

Detection Technology

  • Invest in more sophisticated detection tools capable of reliably labeling AI media, including audio and video.
  • Ensure consistent implementation of digital watermarks on AI content created by Meta’s own tools.

Internal Assessment

  • Reduce dependence on third‑party fact‑checkers and “trusted partners.”
  • Build internal capacity to assess harm, especially during armed conflicts.

Meta’s Response

Meta did not immediately comment on the Oversight Board’s decision. The company has 60 days to formally respond to the board’s recommendations.

Broader Context

The board’s decision is not the first time it has criticized Meta’s handling of AI content. On two prior occasions, it described Meta’s manipulated‑media rules as “incoherent.”

The issue has gained urgency amid the latest Middle East conflict. Since the start of the U.S. and Israeli strikes on Iran earlier this month, viral AI‑generated misinformation has surged across social media, according to Rolling Stone coverage.

The board also hinted at broader industry collaboration, suggesting that “the industry needs coherence in helping users distinguish deceptive AI‑generated content and platforms should address abusive accounts and pages sharing such output.”
