A Q&A with Fairly Trained founder Ed Newton-Rex

Dom Aversano

Ed Newton-Rex - photo by Jinnan Wang

In November last year, Ed Newton-Rex, then head of audio at Stability AI, left the company, citing a small but significant difference in his philosophy towards training generative AI models. Stability AI was one of several companies that responded to an invitation from the US Copyright Office for comments on generative AI and copyright, submitting an argument that training its models on copyrighted artistic works fell under the definition of fair use: a legal doctrine that permits the use of copyrighted works for a limited number of purposes, one of which is education. This argument has been pushed more widely by the AI industry, which contends that, much like a student who learns to compose music by studying renowned composers, its machine learning algorithms carry out a similar learning process.

Newton-Rex did not buy the industry’s arguments, and while you can read his full reasons for resigning in his X/Twitter post, central to his position was the following passage:

(…) since ‘fair use’ wasn’t designed with generative AI in mind — training generative AI models in this way is, to me, wrong. Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works. I don’t see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright.

It is important to make clear that Newton-Rex is not a critic of AI: he is an enthusiast who has worked in the machine learning field for more than a decade. His contention is narrowly focused on the ethics of training AI models.

Newton-Rex’s response to this was to set up a non-profit called Fairly Trained, which awards certifications to AI companies whose training data it considers ethically sourced.

Their mission statement contains the following passage:

There is a divide emerging between two types of generative AI companies: those who get the consent of training data providers, and those who don’t, claiming they have no legal obligation to do so.

To gain a better understanding of Newton-Rex’s thinking on this subject, I conducted a Q&A with him by email. Perhaps the most revealing admission is that he hopes one day to be able to close Fairly Trained down. What follows is the unedited text.

Fairly Trained is a non-profit founded by Ed Newton-Rex that awards certifications to AI companies that train their models in a manner it deems ethical.

Do you think generative artificial intelligence is an accurate description of the technology Fairly Trained certifies?

Yes!

Having worked inside Stability AI and the machine learning community, can you provide a sense of the culture and the degree to which the companies consider artists’ concerns?

I certainly think generative AI companies are aware of and consider artists’ concerns. But I think we need to measure companies by their actions. In my view, if a company trains generative AI models on artists’ work without permission, in order to create a product that can compete with those artists, it doesn’t matter whether or not they’re considering artists’ concerns – through their actions, they’re exploiting artists.

Many LLM companies present a fair use argument that compares machine learning to a student learning. Could you describe why you disagree with this?

I think the fair use argument and the student learning arguments are different.

I don’t think generative AI training falls under the fair use copyright exception because one of the factors that is taken into account when assessing whether a copy is a fair use is the effect of the copy on the potential market for, and value of, the work that is copied. Generative AI involves copying during the training stage, and it’s clear that many generative AI models can and do compete with the work they’re trained on.

I don’t think we should treat machine learning the same as human learning for two reasons. First, AI scales in a way no human can: if you train an AI model on all the production music in the world, that model will be able to replace the demand for pretty much all of that music. No human can do this. Second, humans create within an implicit social contract – they know that people will learn from their work. This is priced in, and has been for hundreds of years. We don’t create work with the understanding that billion-dollar corporations will use it to build products that compete with us. This sits outside of the long-established social contract. 

Do you think that legislators around the world are moving quickly enough to protect the rights of artists?

No. We need legislators to move faster. On current timetables, there is a serious risk that any solutions – such as enforcing existing copyright law, requiring companies to reveal their training data, etc. – will be too late, and these tools will be so widespread that it will be very hard to roll them back.

At Fairly Trained you provide a certification signifying that a company trains its models on ‘data provided with the consent of its creators’. How do you acquire accurate and transparent knowledge of the data each company is using?

They share their data with us confidentially.

For Fairly Trained to be successful it must earn people’s trust. What makes your organisation trustworthy?

We are a non-profit, and we have no financial backing from anyone on either side of this debate (or anyone at all, in fact). We have no hidden motives and no vested interests. I hope that makes us trustworthy.

If your ideal legislation existed, would a company like Fairly Trained be necessary? 

No, Fairly Trained would not be necessary. I very much hope to be able to close it down one day!

To learn more about what you have read in this article, you can visit the Fairly Trained website or Ed Newton-Rex’s website.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work at the Liner Notes.
