YouTube expands AI deepfake detection to politicians, government officials, and journalists
Source: TechCrunch
Announcement
YouTube is expanding its likeness detection technology, which identifies AI‑generated deepfakes, to a pilot group of government officials, political candidates, and journalists, the company announced Tuesday. Members of the pilot group will gain access to a tool that detects unauthorized AI‑generated content and lets them request its removal if they believe it violates YouTube policy.
Background
The technology launched last year, rolling out to roughly 4 million YouTube creators in the YouTube Partner Program, after earlier tests with a smaller set of creators.
- Launch announcement: TechCrunch, Oct 21 2025
- Earlier test phase: TechCrunch, Apr 9 2025
Similar to YouTube’s existing Content ID system, which detects copyright‑protected material, the likeness detection feature looks for simulated faces created with AI tools. These tools can be used to spread misinformation by making notable figures—politicians, government officials, journalists—appear to say or do things they never did.
Pilot Program Details
- Eligibility: Government officials, political candidates, and journalists who are invited to the pilot.
- Verification: Participants must upload a selfie and a government ID to prove their identity.
- Functionality: After verification, users can create a profile, view detected matches, and optionally request removal of the offending videos.
- Removal Process: Requests are evaluated under YouTube’s existing privacy‑policy guidelines to determine whether the content is parody, political critique, or another protected form of expression.
“This expansion is really about the integrity of the public conversation,” said Leslie Miller, YouTube’s Vice President of Government Affairs and Public Policy. “We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we’re also being careful about how we use it.”

Image Credits: YouTube
Policy and Legal Context
YouTube supports the federal NO FAKES Act (S.1367, 119th Congress), which would regulate the use of AI to create unauthorized recreations of an individual’s voice and visual likeness.
Labeling and Content Presentation
AI‑generated videos are labeled as such, but the placement of the label varies:
- For many videos, the label appears in the description.
- For content covering “sensitive topics,” the label is placed at the front of the video.
“There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself,” explained Amjad Hanif, YouTube’s Vice President of Creator Products. “It could be a cartoon that is generated with AI. And so I think there’s a judgment on whether it’s a category that maybe merits a very visible disclaimer.”
Current Impact and Future Plans
YouTube has not disclosed the exact number of deepfake removal requests it has processed, but notes that the volume has been "very small." Most flagged content has turned out to be benign, or even additive to creators' businesses.
The company intends to broaden the technology over time, eventually covering:
- Recognizable spoken voices.
- Other intellectual property, such as popular characters.
- Potential pre‑upload blocking or monetization options similar to Content ID.