Great exploitations
The government needs to introduce tighter regulations to protect authors' copyright against AI, argues Anna Ganley
This year is the National Year of Reading. Its aim: to change the national reading culture for good. It is a bold and necessary move given the steep decline in long-form reading amongst young people, which is already manifesting as a sharp fall in adult reading rates.
This is only one chapter in the story of the gradual erosion of literary culture through the dominance of digital platforms, inconsistent library provision, and declining author earnings. If action is not taken now, the next chapter will tell a story of ever-expanding use of AI-generated content, which is prone to errors and bias and which is based on the unauthorised – and, I would contend, illegal – use of copyrighted materials.
Authors are not against generative AI as a tool; they are against their work being stolen. In the US, we’ve already seen multiple lawsuits, such as Bartz v Anthropic, alleging the industrial-scale use of authors’ copyright-protected works without permission or payment. Similar legal actions could soon reach the UK courts. Yet authors and rightsholders should not be in this position. Without government support, the onus is on the individual to seek costly, time-consuming legal redress against tech giants equipped with slick legal teams.
If legal action is the only route to seek fairness and justice, authors will take the fight to big tech. Yet this is a sucker punch for authors already facing precarious careers and low incomes. It is also a step backwards in terms of diversity and the much-needed plurality of voices within the publishing ecosystem. As one of government’s eight growth sectors, the creative industries need government action as a matter of urgency.
The AI explosion represents a triple blow for authors. First, their works have been pirated and hosted on shadow libraries online. Second, tech companies have trained their large language models on copyright-protected works. Third, these models generate derivative works ‘in the style of’ the original works. As there is currently no mandatory requirement for transparency when it comes to AI training data, authors and other creators don’t know whether or how their works have been used, which means it’s almost impossible to seek redress.
Earlier this year, we called out the tech giant Meta for its alleged use of pirated content from the LibGen shadow library to train its Llama 3 large language model. Along with the Creators’ Rights Alliance, we wrote to over 70 tech companies to ask them to train their systems the right and ethical way, but we got very few responses. Those companies that did respond said a licence wasn’t required on the basis of ‘fair use’. We disagree, because both ‘fair use’ and ‘fair dealing’ (the UK’s version) apply only in a handful of specific, non-commercial cases. Ninety-three per cent of our members have said that generative AI presents an existential threat to their profession. Author earnings have fallen by 60 per cent in the last 16 years and, in our recent member survey, 86 per cent of authors told us that their earnings have been affected by generative AI.
Our literary translators are the canaries in the coalmine. They are swiftly seeing their livelihoods decimated, and are now being asked to correct machine-translated texts rather than translating from scratch. Literary translation is not simply the word-for-word conversion of one language into another; it is the human retelling of a story in a different language, complete with all the nuances a machine cannot detect, including shifts in idiom and cultural reference.
It is not just writers and translators who are affected. It is also illustrators, scriptwriters, audio narrators, voiceover artists, actors and photographers – all of whom are seeing their work, voices, faces and likenesses taken without permission or payment. Over a third of our illustrators have already lost commissions to AI-generated content, with losses across the industry of around £9,262 per creator.

What we need from government is simple: a regulatory framework that includes the mandatory disclosure of the specific materials used to train generative AI models, with auditable records of how personal data and copyright-protected data have been collected and used. We need transparency around how generative AI systems operate and make decisions. We need clear labelling of AI-generated outputs, and international consistency, because this is a global technology.
Authors and creators are the cornerstone of the creative industries, and without original content, generative AI models have nothing to be trained on. So hear our cry: we desperately need transparency and regulation to prevent tech companies from acting above the law. The UK has a gold-standard copyright framework. We don’t need any changes to it. What we need is for big tech to act within the law. And we need government’s help to achieve this.