CopyFight: The Battle Between Artists and AI
Artists create, AI replicates—who gets paid?
Imagine spending months crafting a song — writing, rewriting, recording take after take, refining the production until it’s just right. You release it to the world, proud of your work. Then, a few weeks later, an AI company has fed your track into its model, training it to generate a deepfake of your voice. The system now offers wide-eyed customers the chance to generate their own performances in your voice, without your consent. You get nothing. Not a penny, not even a thank you.
This is exactly what Jammable did in 2024. The AI start-up faced claims from the British Phonographic Industry for ‘scraping’ copyright-protected songs to train its AI models and generate covers that let its customers deepfake the voices of artists such as Drake, Rihanna, and Taylor Swift.
This is problematic because it exploits the work of creators and disregards the very point of copyright protection. Copyright gives an owner the exclusive right to control how their work is used, preventing others from copying or distributing it without permission. That exclusivity is what encourages creativity and innovation in the first place, by giving creators an economic incentive to produce new works.
In 2023, the UK music industry added £7.6 billion to the economy. Music is an integral part of the UK’s culture and heritage, and as a cultural powerhouse, we must protect the artists who have defined it and continue to do so. Many artists rely heavily on royalties, yet AI companies exploit their work without consent or payment. To do so is, in the eyes of many, theft. Stripping artists of income stifles creativity and threatens the future of British music. If left unchecked, this could lead to a landscape where AI-generated content overshadows human artistry, diluting the originality and cultural significance that have defined the UK’s music scene for decades.
On the flipside, in defence of AI start-ups, high-quality input data is essential for generating high-quality output. Without access to a wide array of good-quality training data, these cutting-edge technologies and businesses are held back and run a real risk of failure.
Many AI start-ups argue that our restrictive copyright laws are stifling innovation, making it difficult for smaller players to compete with industry giants who have the resources to negotiate licensing deals. They see AI as a tool for democratising creativity, enabling independent artists and producers to access technology that was previously out of reach.
Had Jammable complied with copyright law, licensing every piece of work used for AI training would have been enormously time-consuming and expensive — probably not even worth contemplating. Negotiating licences with record labels and publishers would realistically take months, if not years, and the cost of doing so would take the business to the grave.
AI is the future, and we must accept that it will become an integral part of our lives within the next decade. Ignoring it isn’t an option — we must adapt and embrace the change. Therefore, is it not time that we made some adjustments to age-old copyright laws and allowed innovation and technology to pave the way for the future?
This issue seems to present us with a double-edged sword: do we let the tech-bros get a head-start in the technological revolution and turn a blind eye to the creatives at stake, or do we maintain the values of creativity that brought about copyright protections in the first place, while lagging behind the rest of the world in technological developments?
This is exactly what the UK government is discussing…
The UK’s Copyright Consultation: What’s the Fix?
To balance these competing priorities, the UK Government launched a consultation to address the legal grey area of AI training data. Their proposed solution? A new Text and Data Mining (TDM) Copyright Exception.
Sounds fancy, but what does it actually mean? Essentially, rightsholders (i.e. artists, labels, publishers) would have the chance to “opt out” of having their work used for training purposes — meaning they’d have to actively say, “Hey, don’t use my work”, rather than the current state of affairs, where AI companies take first and ask for forgiveness later.
For rightsholders, this proposed opt-out model presents both opportunities and challenges. On one hand, it introduces greater transparency by requiring AI companies to disclose their data sources. This could give artists, labels, and publishers more control over their work and the ability to actively prevent its use in AI training. However, the burden of opting out would realistically fall on the rightsholders themselves, meaning they must be vigilant in tracking AI companies and asserting their rights.
Additionally, this model does not automatically ensure compensation for those whose works have already been used to train AI. While greater transparency is a step forward, many artists argue that a more robust system should be implemented — one that enforces licensing agreements and guarantees royalties for rightsholders whose content is used to build these AI models.
Looking Ahead… What Are Creators Doing to Protect Themselves?
This is all rather daunting, and whichever way it goes, some will be let down. But there are signs of resilience.
In 2023, Believe announced the launch of its internal AI detector, AI Radar, which can determine whether a piece of music has been created by artificial intelligence. With an accuracy rate of 98% for AI-generated recordings and 93% for deepfakes, the tool aims to address growing concerns about copyright in the music industry. It represents a significant step towards protecting artists and maintaining the integrity of musical content in an increasingly AI-driven era.
Faced with this rapidly evolving landscape, artists and labels are starting to fight back. Some musicians are proactively licensing their voices to AI companies — on their terms. Instead of having their voices cloned without permission, they’re striking deals where they get a share of the profits when their digital doubles are used.
Labels, too, are becoming more aggressive in tackling AI misuse. Universal Music Group has reportedly been working behind the scenes to prevent its artists’ music from being scraped for AI training. They have issued takedown notices to AI platforms that generate songs mimicking their artists and have lobbied for stronger legal protections against unauthorised AI use.
UMG has also pushed streaming services like Spotify and Apple Music to prevent AI-generated tracks from flooding their platforms, arguing that such content dilutes the value of legitimate music. Meanwhile, some artists have gone legal, filing lawsuits against AI platforms that generate music in their style without permission. The battle lines are being drawn, and in the next few years, we’ll likely see landmark cases that define how AI interacts with the music industry.
The AI Copyright Battle Is Just Beginning
This is just the beginning of the AI vs. Copyright battle. While AI models are getting smarter, the legal and ethical questions surrounding them are getting messier. The music industry is already fighting an uphill battle against streaming’s payment structures—now, it’s got AI scraping data like a digital vulture.
The UK consultation could be a step towards a fairer system, but the key question remains: Will the law actually keep up with the speed of AI development? Or will artists keep getting exploited under the guise of ‘progress’?