In a major move to curb the spread of unauthorized AI content, YouTube has officially expanded its Likeness Detection tool to the wider entertainment industry. Starting today, talent agencies, management companies, and the celebrities they represent are eligible to enroll in the system, even if they do not personally maintain a YouTube channel. This rollout, developed in partnership with industry giants like CAA, UTA, WME, and Untitled Management, signals a shift in how platforms manage the “commercial and reputational harm” caused by viral deepfakes.
The system is being hailed as the “Content ID for Faces.” Just as rights holders use Content ID to track music and film clips, Likeness Detection scans every new upload to the platform once to identify synthetic or AI-altered versions of an enrolled person’s face. To sign up, celebrities must provide a government-issued ID and a brief selfie video, which YouTube uses to build a unique facial likeness template. Once content is flagged, an artist’s management team can review it and request immediate removal if it violates the platform’s privacy or synthetic media policies.
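For readers curious about the mechanics, the matching step described above can be sketched in a few lines of Python. YouTube has not published how Likeness Detection actually works, so everything here is an illustrative assumption: the embedding vectors, the cosine-similarity measure, the threshold value, and the function names are all hypothetical stand-ins for a proprietary pipeline.

```python
import math

# Hypothetical sketch of a "Content ID for Faces" matcher. The real system is
# proprietary; the threshold and embedding representation below are assumptions.

MATCH_THRESHOLD = 0.9  # assumed similarity cutoff for queuing a review


def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def flag_for_review(enrolled_template, upload_embeddings):
    """Return indices of detected faces in an upload that resemble the
    enrolled likeness template closely enough to queue for human review
    (removal is a separate, human decision, mirroring the article)."""
    return [
        i for i, emb in enumerate(upload_embeddings)
        if cosine_similarity(enrolled_template, emb) >= MATCH_THRESHOLD
    ]


# Toy example: one enrolled template, two faces detected in a new upload.
template = [0.9, 0.1, 0.4]
faces = [[0.89, 0.12, 0.41], [0.1, 0.95, 0.2]]
print(flag_for_review(template, faces))  # → [0] (only the first face matches)
```

The key design point mirrored from the article is that a match does not trigger automatic takedown: the function only produces candidates for a management team to review, which is how the platform leaves room for parody and satire.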
While the current version of the tool is focused on visual matches, the “Voice” era of protection is fast approaching. YouTube has confirmed it is currently “tuning the software” to extend Likeness Detection to audio and singing voices later in 2026. This is a direct response to the “ghost tracks” and AI-impersonation songs that have plagued the music industry over the past year—including the infamous “fake Drake” incident and Sony Music’s recent request to purge over 135,000 AI tracks from various platforms.
YouTube emphasized that while the tool is a powerful shield against deceptive endorsements (like the viral AI-generated Tom Hanks diabetes ad), it is not a “blunt takedown” mechanism. The platform will continue to protect parody and satire, even when it features public figures. The verification data is strictly used for identification and, critically, is not used to train Google’s generative AI models. As the “NO FAKES Act” gains traction in Washington, YouTube’s expanded rollout positions the platform as a key player in the legal framework for protecting human identity in a synthetic age.
