
Spotify’s new strategy focuses on creating a sustainable ecosystem by supporting legitimate creative experimentation while aggressively policing fraudulent activity. As VP and Global Head of Music Product Charlie Hellman noted, the goal is not “to punish artists for using AI authentically and responsibly,” but “to stop the bad actors who are gaming the system.”
The comprehensive approach targets three key areas of platform abuse:
Impersonation and Deepfakes: Spotify is implementing a stricter policy specifically addressing unauthorized vocal impersonation. Music that uses AI-generated voice clones or “deepfakes” of artists will be removed unless the original artist has explicitly authorized the usage. This measure is designed to protect artist identity and ensure that control over an artist’s likeness remains firmly in their own hands.
The New Spam Filter: To combat the influx of low-quality, high-volume content—often termed “AI slop”—a new music spam filter will be rolled out this fall. This system will target uploaders employing fraudulent tactics such as mass uploads, SEO hacks, and artificially short tracks designed to rack up royalty-bearing streams.
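To make the tactics above concrete, here is a minimal sketch of the kind of heuristic signals such a filter might compute per upload. This is purely illustrative: the field names, thresholds, and keyword list are assumptions for the example, not Spotify's actual system.

```python
# Hypothetical spam-signal heuristics for a music upload.
# All thresholds and names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Upload:
    uploader_id: str
    title: str
    duration_seconds: int
    daily_upload_count: int  # tracks uploaded by this account today


def spam_signals(upload: Upload) -> list[str]:
    """Return the heuristic flags raised by an upload."""
    flags = []
    # Mass uploads: unusually high daily volume from one account.
    if upload.daily_upload_count > 50:
        flags.append("mass_upload")
    # Artificially short tracks: hovering just past the length at which
    # a play typically starts counting as a royalty-bearing stream.
    if upload.duration_seconds < 35:
        flags.append("short_track")
    # Crude SEO-hack check: title stuffed with popular search keywords.
    keywords = {"sleep", "rain", "lofi", "asmr", "chill"}
    if len(keywords & set(upload.title.lower().split())) >= 3:
        flags.append("keyword_stuffing")
    return flags


suspect = Upload("u1", "rain sleep lofi asmr loop", 31, 120)
print(spam_signals(suspect))
# → ['mass_upload', 'short_track', 'keyword_stuffing']
```

A production system would combine many more signals (audio fingerprinting, account history, stream-pattern analysis) and weight them statistically rather than using fixed thresholds; the sketch only shows the flavor of the rule-based checks the article describes.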
Transparency via AI Disclosures: Recognizing that AI can be a legitimate creative tool, Spotify is also working with industry partners, including DDEX, to develop a new standard for transparency in song credits. This will allow artists and rights holders to clearly disclose where and how AI was used in a track’s creation (e.g., for vocals, instrumentation, or mixing), fostering trust among listeners and ensuring a nuanced approach to AI-assisted music.
By taking these decisive actions, Spotify is attempting to set a clear precedent in the streaming industry, ensuring that the potential of generative AI enhances—rather than undermines—the careers of authentic artists and the integrity of the platform.