Should fair use allow AI to be trained on copyrighted music?

Dom Aversano

This week, the composer Ed Newton-Rex brought the ethics of AI into focus when he resigned from his role in the Audio team at Stability AI, citing a disagreement with the fair use argument used by his ex-employer to justify training its generative AI models on copyrighted works.

In a statement posted on Twitter/X, he explained the reasons for his resignation:

For those unfamiliar with ‘fair use’, this claims that training an AI model on copyrighted works doesn’t infringe the copyright in those works, so it can be done without permission, and without payment. This is a position that is fairly standard across many of the large generative AI companies, and other big tech companies building these models — it’s far from a view that is unique to Stability. But it’s a position I disagree with.
I disagree because one of the factors affecting whether the act of copying is fair use, according to Congress, is “the effect of the use upon the potential market for or value of the copyrighted work”. Today’s generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on. So I don’t see how using copyrighted works to train generative AI models of this nature can be considered fair use.

As Newton-Rex states, this is quite a standard argument made by companies using copyrighted material to train their AI. In fact, Stability AI recently submitted a 23-page document to the US Copyright Office arguing their case. Within it, they state that they have trained their Stable Audio model on ‘800,000 recordings and corresponding songs’, going on to say:

These models analyze vast datasets to understand the relationships between words, concepts, and visual, textual or musical features, much like a student visiting a library or an art gallery. Models can then apply this knowledge to help a user produce new content. This learning process is known as training.

This highly anthropomorphised argument is questionable at best. AI models are not like students, for obvious reasons: they do not have a body, do not have emotions, and have no life experience. Furthermore, as Stability AI’s own document testifies, they do not learn in the same way that humans learn; if a student were to study 800,000 pieces of music over a ten-year period, that would require analysing around 219 different songs every day (800,000 pieces spread across roughly 3,650 days).

This contrast with how humans learn and think was highlighted by the American linguist and cognitive scientist Noam Chomsky in his critique of Large Language Models (LLMs):

The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.

The issue is further complicated by the language emerging from the AI community, which varies from the anthropomorphic (‘co-pilot’) to the deistic (‘godlike’) to the apocalyptic (‘breakout scenarios’). In the case of Stability AI, the company awkwardly evokes Abraham Lincoln’s Gettysburg Address when writing on their website that they are creating ‘AI by the people for the people’ with the ambition of ‘building the foundation to activate humanity’s potential’.

While the circumstances are of course materially different, there is nevertheless a certain echo here of the civilising mission used to morally rationalise the economic rapaciousness of empire. To justify the permissionless use of copyrighted artwork on the basis of a mission to ‘activate humanity’s potential’ in a project ‘for the people’ is excessively moralistic and unconvincing. If Stability AI wants their project to be ‘by the people’, they should have artists explicitly opt in before using their work; the problem with this is that many will not, rendering the models perhaps not useless, but much less effective.

This point was underscored by the venture capital firm Andreessen Horowitz, which recently released a rather candid statement to this effect:

The bottom line is this: imposing the cost of actual or potential copyright liability on the creators of AI models will either kill or significantly hamper their development.

Although in principle supportive of generative AI, Newton-Rex does not ignore the economic realities behind its development. In a statement that I will finish with, he succinctly and eloquently brings into focus the power imbalance at play and its potential destructiveness:

Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works. I don’t see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright.

If you have an opinion you would like to share on this topic, please feel free to comment below.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work in his Substack publication, Liner Notes.
