Meta urged to boost oversight of fake AI videos

Published: March 10, 2026 at 05:20 PM EDT
3 min read

Source: BBC Technology

[Image caption — Reuters: Mark Zuckerberg walking down the grey stone steps of a Los Angeles courthouse, wearing a navy blazer, grey necktie and white shirt.]

Oversight Board calls for stronger AI content policies

The 21‑person Meta Oversight Board raised concerns that the company is not doing enough to address the “proliferation” of fake content created with artificial‑intelligence (AI) tools on its platforms.

The board rebuked Meta for leaving up, without a label, an AI‑generated video that claimed to show extensive damage inflicted on Haifa, Israel, by Iranian forces. It called on the company to overhaul its AI rules, warning that an increase in fake AI videos related to global military conflicts has “challenged the public’s ability to distinguish fabrication from fact … risking a general distrust of all information.”

Meta’s response

Meta said it would label the video in question within seven days. In a statement, the company said it would abide by the board’s suggestions the next time it encounters “identical” content in the same context as the reviewed video.

How Meta currently handles AI‑generated content

  • Meta relies largely on users to “self‑disclose” when content they post is produced by an AI tool.
  • Otherwise, the platform waits for a complaint to reach its content‑moderation team, which may then decide to affix a label.

The board argued that this approach is “neither robust nor comprehensive enough to contend with the scale and velocity of AI‑generated content, particularly during a crisis or conflict where there is heightened engagement on the platform.” It recommended that Meta proactively label fake AI content much more frequently.

Background on the Haifa video

The board’s review was sparked by a video posted in June by a Facebook account based in the Philippines that described itself as a news source. The video was part of a string of fake AI videos posted to social media after the conflict began, with content either pro‑Israel or pro‑Iran, and quickly amassed at least 100 million views, according to a BBC analysis at the time.

Although the video was AI‑generated and did not depict real events, Meta neither labeled it as AI‑generated nor removed it, despite receiving several user complaints. Only after a Facebook user appealed directly to the Oversight Board did Meta respond. The company claimed the video, which garnered almost 1 million views, required neither a label nor removal because it did not “directly contribute to the risk of imminent physical harm.”

The board deemed this standard “too high” for labeling AI‑generated content, especially when the subject is armed conflict, and ruled that the video should have received a “high‑risk AI label.”

Board’s key recommendation

“Meta must do more to address the proliferation of deceptive AI‑generated content on its platforms… so that users can distinguish between what is real and fake.”

The board urged Meta to adopt more robust labeling practices and to ensure that its policies can keep pace with the rapid creation and spread of AI‑generated media, particularly in contexts of armed conflict.
