
The central conflict is rooted in artists’ rights and compensation. Unlike past technological disruptions that merely changed how music was consumed, AI threatens to devalue the input of human creators by using their work as training data without permission or remuneration.
This fear has mobilized some of the biggest names in music. In February 2025, over a thousand artists, including Kate Bush and Damon Albarn, released a collective “silent album” titled Is This What We Want? The album, which featured recordings of empty studios, was a symbolic protest against the UK government’s proposed “opt-out” copyright system. Critics argue that this system reverses the fundamental principle of copyright, effectively legalizing the use of protected work for commercial AI training unless creators actively and individually block it.
That protest built on earlier momentum: in April 2024, over 200 prominent musicians, including Billie Eilish, R.E.M., and Stevie Wonder, had signed an open letter organized by the Artist Rights Alliance. Their demand was clear: tech companies must stop the “predatory use of AI to steal professional artists’ voices and likenesses” and pledge not to deploy tools that replace human artistry or deny fair compensation.
This modern dilemma is often set against the history of sampling, a technique long embedded in music’s DNA. From the Sugarhill Gang’s Rapper’s Delight to the sample-driven tracks that make up as much as a quarter of today’s Billboard Hot 100, the creative reuse of sound has repeatedly fueled new genres and artistic evolution. Sampling, however, is a process of creative interpretation and generally requires licensing and payment, a model that respects the original creators.
The fight against unchecked AI usage is therefore less about resisting technology and more about securing a fair value chain. For musicians, the survival of music in the AI era hinges entirely on whether copyright law can adapt quickly enough to protect the creators who built the catalog AI relies upon.