A Q&A with AI regulator Ed Newton-Rex

Dom Aversano

Ed Newton-Rex - photo by Jinnan Wang

In November last year, Ed Newton-Rex, the head of audio at Stability AI, left the company citing a small but significant difference in his philosophy towards training large language models (LLMs). Stability AI was one of several companies that responded to an invitation from the US Copyright Office for comments on generative AI and copyright, arguing that training its models on copyrighted artistic works fell under the definition of fair use: a doctrine that permits the use of copyrighted works for a limited number of purposes, one of which is education. This argument has been pushed more widely by the AI industry, which contends that, much like a student who learns to compose music by studying renowned composers, its machine learning algorithms are conducting a similar learning process.

Newton-Rex did not buy the industry’s arguments, and while you can read his full reasoning for resigning in his X/Twitter post, central to it was the following passage:

(…) since ‘fair use’ wasn’t designed with generative AI in mind — training generative AI models in this way is, to me, wrong. Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works. I don’t see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright.

It is important to make clear that Newton-Rex is not a critic of AI. He is an enthusiast who has worked in the machine learning field for more than a decade; his contention is narrowly focused on the ethics of training AI models.

Newton-Rex’s response to this was to set up a non-profit called Fairly Trained, which awards certificates to AI companies whose training data it considers ethically sourced.

Their mission statement contains the following passage:

There is a divide emerging between two types of generative AI companies: those who get the consent of training data providers, and those who don’t, claiming they have no legal obligation to do so.

In an attempt to gain a better understanding of Newton-Rex’s thinking on this subject, I conducted a Q&A by email. Perhaps the most revealing admission is that Newton-Rex hopes one day to close his own organisation down. What follows is the unedited text.

Fairly Trained is a non-profit founded by Ed Newton-Rex that awards certificates to AI companies that train their models in a manner deemed ethical.

Do you think generative artificial intelligence is an accurate description of the technology Fairly Trained certifies?

Yes!

Having worked inside Stability AI and the machine learning community, can you provide a sense of the culture and the degree to which the companies consider artists’ concerns?

I certainly think generative AI companies are aware of and consider artists’ concerns. But I think we need to measure companies by their actions. In my view, if a company trains generative AI models on artists’ work without permission, in order to create a product that can compete with those artists, it doesn’t matter whether or not they’re considering artists’ concerns – through their actions, they’re exploiting artists.

Many LLM companies present a fair use argument that compares machine learning to a student learning. Could you describe why you disagree with this?

I think the fair use argument and the student learning argument are different.

I don’t think generative AI training falls under the fair use copyright exception because one of the factors that is taken into account when assessing whether a copy is a fair use is the effect of the copy on the potential market for, and value of, the work that is copied. Generative AI involves copying during the training stage, and it’s clear that many generative AI models can and do compete with the work they’re trained on.

I don’t think we should treat machine learning the same as human learning for two reasons. First, AI scales in a way no human can: if you train an AI model on all the production music in the world, that model will be able to replace the demand for pretty much all of that music. No human can do this. Second, humans create within an implicit social contract – they know that people will learn from their work. This is priced in, and has been for hundreds of years. We don’t create work with the understanding that billion-dollar corporations will use it to build products that compete with us. This sits outside of the long-established social contract. 

Do you think that legislators around the world are moving quickly enough to protect the rights of artists?

No. We need legislators to move faster. On current timetables, there is a serious risk that any solutions – such as enforcing existing copyright law, requiring companies to reveal their training data, etc. – will be too late, and these tools will be so widespread that it will be very hard to roll them back.

At Fairly Trained you provide a certification that signifies that a company trains their models on ‘data provided with the consent of its creators’. How do you acquire an accurate and transparent knowledge of the data each company is using?

They share their data with us confidentially.

For Fairly Trained to be successful it must earn people’s trust. What makes your organisation trustworthy?

We are a non-profit, and we have no financial backing from anyone on either side of this debate (or anyone at all, in fact). We have no hidden motives and no vested interests. I hope that makes us trustworthy.

If your ideal legislation existed, would a company like Fairly Trained be necessary? 

No, Fairly Trained would not be necessary. I very much hope to be able to close it down one day!

To learn more about what you have read in this article you can visit the Fairly Trained website or Ed Newton-Rex’s website.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work at Liner Notes.

Music in the browser or app?

Dom Aversano

As The Bard famously put it, ‘The app, or the browser, that is the question.’

At some point, your inspirational idea for digital music will have to travel from the platonic realm of your thoughts into either an app or a browser. Unless you can luxuriate in doing both, this represents a stark choice, and the most appropriate option depends on weighing up the advantages and disadvantages of each. The graphic above is designed to help categorise what you are creating, thereby providing a better sense of its ideal home.

The most traditional category is recorded music, as it predates the proliferation and miniaturisation of personal computing. In the 20th century, radio transformed music, and then television transformed it again. In this regard, Spotify and YouTube are quite traditional: the former imitates radio while the latter mimics TV. This might help explain why Spotify is almost entirely an app, sitting in the background like a radio, while YouTube is most commonly used in the browser, fixing your gaze as if it were a TV. Whether a person is likely to be tethered to a computer or walking around with a phone may help in deciding between browsers and apps.

Turning to generative music, a successful browser-based example is Generative FM, created by Alex Bainter, which hosts more than 50 generative music compositions that you can easily dip into. It is funded by donations, as well as by an online course on designing generative systems. The compositions are interesting, varied, and engaging, but as a platform it is easy to tune out of. This might be because we are not in the habit of listening to music in the browser without a visual component. The sustainability of this method is also questionable: despite a good number of daily listeners, the project appears to have been somewhat abandoned, with the last composition uploaded in 2021.

Perhaps Generative FM was more suited to app form, and there are many examples of projects that have chosen this medium. Artists such as Bjork, Brian Eno, and Jean-Michel Jarre have released music as apps. There are obvious benefits to this, such as the fact that an app feels more like a thing than a web page, as well as the commitment that comes from installing an app, especially one you have paid for — in the case of Brian Eno’s generative Reflection app, at the not inconsiderable cost of £29.99.

Yet, more than a decade since Bjork released her app Biophilia, the medium is still exceedingly niche and struggling to become established. Bjork has not released any apps since Biophilia, which would have been time-consuming and expensive to create. Despite her app not having ushered in a new digital era for music, this may be a case of a false start rather than a nonstarter. As app building gets easier and more people learn to program, there may be a breakthrough artist who creates a new form of digital music that captures people’s imaginations.

To turn the attention to music-making, and music programming in particular, there is a much clearer migratory pattern. JavaScript has allowed programming languages to work seamlessly in the browser. Among graphical languages, this has led to P5JS superseding Processing; among music programming languages, Strudel looks likely to supersede TidalCycles. Of the many ways in which having a programming language in the browser is helpful, one of the greatest is that it allows group workshops to run much more smoothly, removing the tedium and delays caused by faulty software. If you have not yet tried Strudel, it’s worth having a go, as you can get started with music-making in minutes by running and editing some of its patches.

The final category, AI — or large language models — is the hardest to evaluate. Since there is massive investment in this technology, most of the major companies are building their software for both browsers and apps. Given the gold rush mentality, there is a strong incentive to get people to open a browser and start using the software as quickly as possible. Suno is an example of this: you can listen to music produced with it instantly, and if you sign up, it takes only a couple of clicks and a prompt to generate a song. However, given the huge running costs of training LLMs, this culture of openness will likely diminish in the coming years as the companies seek to recoup their backers’ money.

The question of whether to build something for the browser or an app is not a simple one. As technology offers us increasingly large numbers of possibilities, it becomes more difficult to choose the ideal one. However, the benefit of this huge array of options is that we have the potential to invent new ways of creating and presenting music that may not yet have been imagined, whether that’s in an app or browser.

Please feel free to share your thoughts and insights on creating for the browser or apps in the comment section below!

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work at Liner Notes.

Book Review: SuperCollider for the Creative Musician

Dom Aversano

SuperCollider for the Creative Musician.

Several years ago a professor of electronic music at a London university advised me not to learn SuperCollider, as it was ‘too much of a headache’, and that it would be better just to learn Max. I nevertheless took a weekend course, but not long after, my enthusiasm for the language petered out. I did not have the time to devote to learning it and was put off by SuperCollider’s patchy documentation. It felt like a programming language for experienced programmers more than an approachable tool for musicians and composers. So instead I learned Pure Data, working with it until I reached a point where my ideas diverged from anything that resembled patching cords, at which point I knew I needed to give SuperCollider a second chance.

A lot had changed in the ensuing years, and not least of all with the emergence of Eli Fieldsteel’s excellent YouTube tutorials. Eli did for SuperCollider what Daniel Shiffman did for Processing/P5JS by making the language accessible and approachable to someone with no previous programming experience. Just read the comments for Eli’s videos and you’ll find glowing praise for their clarity and organisation. This might not come as a complete surprise as he is an associate professor of composition at the University of Illinois. In addition to his teaching abilities, Eli’s sound design and composition skills are right up there. His tutorial example code involves usable sounds, rather than simply abstract archetypes of various synthesis and sampling techniques. When I heard Eli was publishing a book I was excited to experience his teaching practice through a new medium, and curious to know how he would approach this.

The title of the book ‘SuperCollider for the Creative Musician: A Practical Guide’ does not give a great deal away, and is somewhat tautological. The book is divided into three sections: Fundamentals, Creative Techniques, and Large-Scale Projects.

The Fundamentals section is the best-written introduction to the language yet. The language is broken down into its elements and explained with clarity and precision, making it perfectly suited to beginners, or as a refresher for those who have not used the language in a while. In a sense, this section represents the concise manual SuperCollider has always lacked. For more experienced programmers, it might clarify the basics but will not present any real challenge or introduce new ideas.

The second section, Creative Techniques, is more advanced. Familiar topics like synthesis, sampling, and sequencing are covered, as well as more neglected ones such as GUI design. There are plenty of diagrams, code examples, and helpful tips that anyone would benefit from to improve their sound design and programming skills. The code is clear, readable, and well-crafted, in a manner that encourages a structured and interactive form of learning and makes for a good reference book. At this point, the book could have descended into vagueness and structural incoherence, but it holds together sharply.

The final section, Large-Scale Projects, is the most esoteric and advanced. Its focus is project designs that are event-based, state-based, or live-coded. Here Eli steps into a more philosophical and compositional terrain, showcasing the possibilities that coding environments offer, such as non-linear and generative composition. This short and dense section covers the topics well, providing insights into Eli’s idiosyncratic approach to coding and composition.

Overall, it is an excellent book that every SuperCollider user should own. It is clearer and more focused than The SuperCollider Book, which, with its multiple authors, is fascinating but less suitable for a beginner. Eli’s book makes the language feel friendlier and more approachable. The ideal would be to own both, but given a choice, I would recommend Eli’s as the best standard introduction.

My one criticism — if it is a criticism at all — is that I was hoping for something more personal to the author’s style and composing practice, whereas this is perhaps closer to a learning guide or highly sophisticated manual. Given the aforementioned gap in the SuperCollider literature, Eli was right to opt to plug it. However, I hope that this represents the first book in a series in which he delves deeper into SuperCollider and his unique approach to composition and sound design.

 

 

Eli Fieldsteel, author of SuperCollider for the Creative Musician

Click here to order a copy of SuperCollider for the Creative Musician: A Practical Guide

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work at Liner Notes.

An interview with Blockhead creator Chris Penrose

Dom Aversano

A screenshot from Blockhead

Blockhead is an unusual sequencer with an unlikely beginning. In early 2020, as the pandemic struck, Chris Penrose was let go from his job in the graphics industry. He combined a small settlement package with his life savings and used it to develop a music sequencer that operates in a distinctively different manner from anything else available. By October 2023, three years after starting the project, he was working full-time on Blockhead, supporting it through a Patreon page even though the software was still in alpha.

The sequencer has gained a cult following made up of fans as much as users, enthusiastic to approach music-making from a different angle. It is not hard to see why, as in Blockhead everything is easily malleable, interactive, and modulatable. The software works in a cascade-like manner, with automation, instruments, and effects at the top of the sequencer affecting those beneath them. These can be shifted, expanded, and contracted easily.

When I speak to Chris, I encounter someone honest and self-deprecating, which I imagine contributes to people’s trust in the project. After all, you don’t find many promotional videos that contain the line ‘Obviously, this is all bullshit’. There is something refreshingly DIY and brave about what he is doing, and I was curious to know more about what motivated him, so I arranged to talk with him via Zoom to discuss what set him off on this path.

What led you to approach music sequencing from this angle? There must be some quite specific thinking behind it.

I always had this feeling that if you have a canvas and you’re painting, there’s an almost direct cognitive connection between whatever you intend in your mind for this piece of art and the actual actions that you’re performing. You can imagine a line going from the top right to the bottom left of the canvas and there is a connection between this action that you’re taking with a paintbrush pressing against the canvas, moving from top right down to left.

Do you think that your time in the graphics industry helped shape your thinking on music?

When it comes to taking the idea of painting on a canvas and bringing it into the digital world, I think programs like Photoshop have fared very well in maintaining that cognitive mapping between what’s going on in your mind and what’s happening in front of you in the user interface. It’s a pretty close mapping between what’s going on physically with painting on a canvas and what’s going on with the computer screen, keyboard and mouse.

How do you see this compared to audio software?

It doesn’t feel like anything similar is possible in the world of audio. With painting, you can represent the canvas with this two-dimensional grid of pixels that you’re manipulating. With audio, it’s more abstract, as it’s essentially a timeline from one point to another, and how that is represented on the screen never really maps with the mind. Blockhead is an attempt to get a little closer to the kind of cognitive mapping between computer and mind, which I don’t think has ever really existed in audio programs.

Do you think other people feel similarly to you? There’s a lot of enthusiasm for what you’re doing, which suggests you have tapped into something that might have been felt by others.

I have a suspicion that people think about audio and sound in quite different ways. For many the way that digital audio software currently works is very close to the way that they think about sound, and that’s why it works so well for them. They would look at Blockhead and think, well, what’s the point? But I have a suspicion that there’s a whole other group of people who think about audio in a slightly different way and maybe don’t even realise as there has never been a piece of software that represents things this way.

What would you like to achieve with Blockhead? When would you consider it complete?

Part of the reason for Blockhead is completely selfish. I want to make music again but I don’t want to make electronic music because it pains me to use the existing software as I’ve lost patience with it. So I decided to make a piece of audio software that worked the way I wanted it. I don’t want to use Blockhead to make music right now because it’s not done and whenever I try to make music with Blockhead, I’m just like, no, this is not done. My brain fills with reasons why I need to be working on Blockhead rather than working with Blockhead. So the point of Blockhead is just for me to make music again.

Can you describe your approach to music?

The kind of music that I make tends to vary from the start. I rarely make music that is just layers of things. I like adding little moments in the middle of these pieces that are one-off moments. For instance, a half-second filter sweep in one part of the track. To do that in a traditional DAW, you need to add a filter plugin to the track. Then that filter plugin exists for the entire duration of the track, even if you’re just using it for one moment. It’s silly that it has to exist in bypass mode or 0% wet for the entire track, except in this little part where I want it. The same is true of synthesizers. Sometimes I want to write just one note from a synthesizer at one point in time in the track.

Is it possible for you to complete the software yourself?

At the current rate, it’s literally never going to be finished. The original goal with Patreon was to make enough money to pay rent and food. Now I’m in an awkward position where I’m no longer worrying about paying rent, but it’s nowhere near the point of hiring a second developer. So I guess my second goal with funding would be to make enough money to hire a second person. I think one extra developer on the project would make a huge difference.

It is hard not to admire what Chris is doing. It is a giant project, and to have reached this stage with only one person working on it is impressive. Whether the project continues to grow, and whether he can hire other people, remains to be seen, but it is a testament to the importance of imagination in software design. What is perhaps most attractive of all is that it is one person’s clear and undiluted vision of what this software should be, which has resonated with so many people across the world.

If you would like to find out more about Blockhead or support the project, you can visit its Patreon page.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work at Liner Notes.

The persistence of misogyny in music technology

Dom Aversano

DJ Isis photographed by Vera de Kok

Last week the British House of Commons Women and Equalities Committee released their report into misogyny in music. It reached a stark and clear conclusion.

In this Report we have focused on improving protections and reporting mechanisms, and on necessary structural and legislative reforms. The main problem at the heart of the music industry is none of these; it is the behaviour of men—and it is almost always men.

Although the report is specific to the United Kingdom many of its findings could apply to other countries. One universal problem is the tendency for some men to view misogyny as a woman’s problem, even though men have greater power to eliminate it. For those of us working with music technology, this needs to be taken to heart, as the field comes out very badly in the report, especially concerning the gender imbalance for music producers, engineers, and songwriters.

In 2022, just 187 women and non-binary people were credited as either producer or engineer on the top 50 streamed tracks in 14 genres, compared to 3,781 men. Of all songwriters and composers who received a royalty in 2020 from their music being streamed, downloaded, broadcast, or performed, only one in six (16.7%) were women.

Music technology education does not fare better.

Participation rates show that music technology courses still show a stark gender imbalance, reflecting the lack of female representation in the production workforce, despite the technology’s increasing importance to modern musicians.

After reading this I was curious to know how Music Hackspace shaped up in this regard. While far from a comprehensive analysis, I counted the number of female and male teachers on the Music Hackspace website and found 32 female teachers (35%) and 58 male teachers (65%). This is far from equal, but at least better than the ‘stark gender imbalance’ mentioned in the report. However, until it is equal, it is not good enough.

On a personal note, when writing this blog I try to keep bias and discrimination at the front of my mind, but I am aware I interview more men than women. This is more complicated than a matter of intention alone. When invited for an interview, men have generally been more forthcoming than women, and they tend to be easier to locate and contact, especially given that they often have more prominence within the musical world. It is not hard to imagine why women might be more reluctant to subject themselves to public attention, as they are criticised more than men and taken less seriously; the report notes that many female artists and managers were regularly mistaken for girlfriends.

The misogyny women experience in the public eye was grotesquely demonstrated recently when X/Twitter was flooded with deepfake pornographic images of the singer Taylor Swift just a few days before this year’s Grammy Awards. One does not have to be a music superstar to be subjected to such abuse. Last year in the Spanish town of Almendralejo, more than 28 girls aged 11 to 17 had AI-generated naked images made of them, with 11 local boys involved in their creation and circulation, demonstrating that such threats now exist across all levels of society.

This is to say nothing of the wider patriarchal socio-political forces at work. This year the world will again be subjected to a presidential run by the convicted sex offender Donald Trump, who has bragged about sexually assaulting women and described his daughter as “voluptuous”. He is not alone, with social media-savvy men like Jordan Peterson and Andrew Tate promoting their misogynistic ideas to mass audiences of boys and men. These misogynistic ideas have been demonstrated to be algorithmically amplified by platforms such as TikTok such that Gen Z boys are more likely than Baby Boomers to believe that feminism is harmful.

Music should set a better example and act as a counter-cultural force against these movements. Historically, music has been a driver of social change, as one can create and share ideas with millions of people across the world rapidly. Women’s participation in this artistic activity should be equal to that of men, and for as long as it is not, it is men’s responsibility to help redress the power imbalance. In this respect, I will finish with the same quote from the House of Commons report, which lays out the root of the problem starkly.

The main problem at the heart of the music industry (…) is the behaviour of men—and it is almost always men.

Click here for the full report Misogyny in Music by the British House of Commons Women and Equalities Committee.

Move slow and create things

Dom Aversano

Over Christmas I took a week off, and no sooner had I begun to relax than an inspiring idea came to mind for a generative art piece for an album cover. The algorithm needed to make it was clear in my mind, but I did not want to take precious time away from family and friends to work on it. Then a thought occurred — could I build it quickly using ChatGPT?

I had previously resisted using Large Language Models (LLMs) in my projects for a variety of reasons. Would outsourcing coding gradually deskill me? Whose data was the system trained on and was I participating in their exploitation? Is the environmental effect of using such computationally intense technology justifiable?

Despite my reservations I decided to try it, treating it as an experiment that I could stop at any point. Shortly prior to this, I had read a thought-provoking online comment questioning whether manual coding might seem as peculiar and antiquated to the future as programming in binary does now. Could LLMs help make computers less rigid and fixed, opening up the world of programming to anyone?

While I had previously used ChatGPT to create some simple code for SuperCollider, I had been unimpressed by the results. For this project, however, the quality of the code was different. Every prompt returned P5JS code that did exactly what I intended, without the need for clarification. I made precisely what I envisioned in less than 30 minutes. I was astonished. It was not the most advanced program, but neither was it basic.

Despite the success, I felt slightly uneasy. The computer scientist Grady Booch wrote that ‘every line of code represents an ethical and moral decision.’ It is tempting to lose sight of this amid a technological culture steeped in a philosophy of ‘move fast and break things’ and ‘it’s better to ask for forgiveness than permission’. So what specifically felt odd?

I arrived at what I wanted without much of a journey, learning little more than how to clarify my ideas to a machine. This is a stark contrast to the slow and meticulous manner of creation that gradually develops our skills and thinking, which is generally considered quintessential to artistic activity. Furthermore, although the arrival is quicker, the destination is not exactly the same, since handcrafted code can offer a representation of a person’s worldview, whereas LLM code is standardised.

However, I am aware that historically many people — not least in the Arts and Crafts movement — expressed similar concerns, and one can argue that if machines dramatically reduce laborious work, they could free up time for creativity. Removing the technical barrier to entry could allow many more people’s creative ideas to be realised. Yet efficiency is not synonymous with improvement, as anyone who has scanned a QR-code menu at a restaurant can attest.

The idea that LLMs could degrade code is plausible given that they frequently produce poor or unusable code. While they will surely improve, to what degree is unknown. A complicated project built from layers of machine-generated code may create layers of problems, short-term and long-term. Like pollution, the effects might not be obvious until they accumulate and compound over time. And if LLMs are trained on LLM-generated code, it could have a degradative effect, leading to model collapse.

The ethics of this technology are equally complicated. The current lack of legislation around consent in training LLMs means many people are discovering that their books, music, or code has been used to train a model without their knowledge or permission. Beyond legislation, a promising idea has been proposed by programmer and composer Ed Newton-Rex, who has founded a non-profit called Fairly Trained, which offers to monitor and certify LLMs, providing transparency on how they were trained.

Finally, while it is hard to find accurate assessments of how much electricity these systems use, some experts predict they could soon consume as much electricity as entire countries, which should not be difficult to imagine given that the Bitcoin blockchain is estimated to consume more electricity than the whole of Argentina.

To return to Grady Booch’s idea that ‘every line of code represents an ethical and moral decision’, one could extend this: every interaction with a computer represents an ethical and moral decision. As the power of computers increases, so should our responsibility, but given the rapid increases in computing power, it may be unrealistic to expect our responsibility to keep pace. Taking a step back to reflect does not make one a Luddite, and might be the most technically insightful thing to do. Only from a thoughtful perspective can we hope to understand the deep transformations occurring, and how to harness them to improve the world.

Steve Reich’s exploration of technology through music

Dom Aversano

Photo by Peter Aidu

New York composer Steve Reich did not just participate in the creation of a new style of classical music, he helped establish a new kind of composer. Previously, the word composer evoked an archetype of a quill-wielding child prodigy who had composed several symphonies before adulthood — finding perhaps its purest embodiment in the example of Wolfgang Amadeus Mozart — whereas Reich represented a composer who gradually and determinedly developed his talent in a more relatable manner. At the same age that Mozart was on his deathbed composing his Requiem, Reich was struggling to establish himself in New York, driving taxis to make ends meet.

A key source of Reich’s inspiration was atypical of the classical music tradition, in which composers tended to draw inspiration from nature, religion, romantic love, classical literature, and other art forms; by contrast, Reich’s career was ignited by ideas he derived from electronic machines.

In what is now musical folklore, the young composer set up two tape recorders in his home studio with identical recordings of the Pentecostal preacher Brother Walter proclaiming ‘It’s gonna rain’. Reich pressed play on both machines and to his astonishment found the loops were perfectly synchronised. That initial synchronisation then began to drift as one machine played slightly faster than the other, causing the loops to gradually move out of time, thereby giving rise to a panoply of fascinating acoustic and melodic effects that would be impossible to anticipate or imagine without the use of a machine. The experiment formed the basis for Reich’s famous composition It’s Gonna Rain and established the technique of phasing (I have written a short guide to Reich’s three forms of phasing beneath this article).

While most composers would have considered this a curious home experiment and moved on, Reich, ever the visionary, sensed something deeper that formed the basis for an intense period of musical experimentation lasting almost a decade. In a video explaining the creation of the composition, It’s Gonna Rain, he describes the statistical improbability of the two tape loops having been aligned.

And miraculously, you could say by chance, you could say by divine gift, I would say the latter, but you know I’m not going to argue about that, the sound was exactly in the centre of my head. They were exactly lined up.

To the best of my knowledge, it is the first time in classical music that someone attributed intense or divine musical inspiration to an interaction with an electronic machine. How one interprets the claim of divinity is irrelevant; the significant point is that it demonstrates the influence of machines on modern music not simply as tools, but as fountains of ideas and profound inspiration.

In a 1970 interview with fellow composer Michael Nyman, Reich described his attitude and approach to the influence of machines on music.

People imitating machines was always considered a sickly trip; I don’t feel that way at all, emotionally (…) the kind of attention that kind of mechanical playing asks for is something we could do with more of, and the “human expressive quality” that is assumed to be innately human is what we could do with less of now.

While phasing became Reich’s signature technique, his philosophy was summed up in a short and fragmentary essay called Music as a Gradual Process. It contained insights into how he perceived his music as a deterministic process, revealed slowly and wholly to the listener.

I don’t know any secrets of structure that you can’t hear. We all listen to the process together since it’s quite audible, and one of the reasons it’s quite audible is because it’s happening extremely gradually.

Despite the clear influence of technology on Reich’s work, there also exists an intense criticism of technology that clearly distinguishes his thinking from any kind of technological utopianism. For instance, Reich has consistently been dismissive of electronic sounds and made the following prediction in 1970.

Electronic music as such will gradually die and be absorbed into the ongoing music of people singing and playing instruments.

His disinterest in electronic sounds remains to this day, and with the exception of the early work Pulse Music (1969), he has never used electronically synthesised sounds. However, this should not be confused with a sweeping rejection of modern technology or a purist attitude towards traditional instruments. Far from it.

Reich was an early adopter of audio samplers, using them to insert short snippets of speech and sound into his music from the 1980s onwards. A clear demonstration of this can be found in his celebrated work Different Trains (1988). The composition documents the long train journeys Reich took between New York and Los Angeles from 1938 to 1941 when travelling between his divorced parents. He then harrowingly juxtaposed this with the train journeys happening at the same time in Europe, where Jews were being transported to death camps.

For the composition, Reich recorded samples of his governess, who accompanied him on these journeys, a retired Pullman porter who worked on the same train line, and three Holocaust survivors. He transcribed their natural voice melodies and used them to derive melodic material for the string quartet that accompanies the sampled voices. This technique employs technology to draw attention to minute details of the human voice that are easily missed without this fragmentary and repetitive treatment. As with Reich’s early composition It’s Gonna Rain, it is a use of technology that emphasises and magnifies the humanity in music, rather than seeking to replace it.

Having trains act as a source of musical and thematic inspiration demonstrates, once again, Reich’s willingness to be inspired by machines, though he was by no means alone in this specific regard. There is a rich 20th-century musical tradition of compositions inspired by trains, including works such as jazz composer Duke Ellington’s Take the A Train, Brazilian composer Heitor Villa-Lobos’s The Little Train of the Caipira, and the Finnish composer Kaija Saariaho’s Stilleben.

Reich’s interrogation of technology finally reaches its zenith in his large-scale work Three Tales — an audio-film collaboration with visual artist Beryl Korot. It examines three technologically significant moments of the 20th century: the Hindenburg disaster, the atom bomb tests at Bikini Atoll, and the cloning of Dolly the sheep. In Reich’s words, they concern ‘the physical, ethical, and religious nature of the expanding technological environment.’ As with Different Trains, Reich recorded audio samples of speech to help compose the music, this time using the voices of scientists and technologists such as Richard Dawkins, Jaron Lanier, and Marvin Minsky.

These later works have an ominous, somewhat apocalyptic feel, hinting at the possibility of a dehumanised and violent future, while maintaining a sense of the beauty and affection humanity contains. Throughout his career, Reich has used technology as both a source of inspiration and a tool for creation in a complicated relationship that is irreducible to sweeping terms like optimistic or pessimistic. Instead, Reich uses music to reflect upon some of the fundamental questions of our age, challenging us to ask ourselves what it means to be human in a hi-tech world.

 


A short guide to three phasing techniques Reich uses

There are three phasing techniques that I detect in Steve Reich’s early music which I will briefly outline.

First is a continuous form of phasing. A clear demonstration of this is in the composition It’s Gonna Rain (1965). With this phasing technique, the phase relationship between the two voices is not measurable in normal musical terms (e.g. ‘16th notes apart’) but exists in a state of continuous change, making it difficult to measure at any moment. An additional example of this technique can be heard in the composition Pendulum Music.

The second is a discrete form of phasing. A clear demonstration of this is the composition Clapping Music (1972). With this phasing technique, musicians jump from one exact phase position to another without any intermediary steps, making the move discrete rather than gradual. Since the piece is built on a 12-beat cycle, there are twelve possible phase positions, each of which is explored in the composition, thereby completing the full phase cycle.

The third is a combination of continuous and discrete phasing. A clear demonstration of this is Piano Phase (1967). With this phasing technique, musicians shift gradually from one position to another, settling in the new position for some time. In Piano Phase one musician plays slightly faster than the other until they reach their new phase position which they settle into for some time before making another gradual shift to another phase position. An additional example of this technique can be heard in the composition Drumming.
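The discrete form is the easiest to sketch in code. As a rough illustration of the idea (not of Reich’s score), each phase position is simply a rotation of Clapping Music’s 12-beat pattern, written here as 1 for a clap and 0 for a rest:

```python
# Clapping Music's 12-beat pattern: 1 = clap, 0 = rest
PATTERN = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0]

def rotate(pattern, beats):
    """Jump the pattern 'beats' positions earlier: a discrete phase shift."""
    beats %= len(pattern)
    return pattern[beats:] + pattern[:beats]

# One performer repeats PATTERN unchanged; the other steps through
# every rotation until the two players are back in unison.
cycle = [rotate(PATTERN, i) for i in range(len(PATTERN) + 1)]
assert cycle[0] == cycle[-1] == PATTERN  # the full phase cycle closes
```

Continuous phasing resists this kind of sketch precisely because, as described above, the offset between the voices is never a whole number of beats; it is a tempo difference, not a rotation.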

Music Hackspace is running an online workshop, Making Generative Phase Music with Max/MSP, on Wednesday January 17th at 17:00 GMT.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work in his Substack publication, Liner Notes.

Music Hackspace Christmas Quiz

Dom Aversano

History

  1. Which 19th-century mathematician predicted computer-generated music?
  2. What early electronic instrument did Olivier Messiaen use in his composition Trois petites liturgies de la Présence Divine?
  3. Who invented FM synthesis? 
  4. What was the first name of the French mathematician and physicist who invented Fourier analysis?
  5. Oramics was a form of synthesis invented by which British composer?

Synthesis

  1. What is the name given to an acoustically pure tone?
  2. What music programming language was named after a particle accelerator?
  3. What synthesiser did The Beatles use on their 1969 album Abbey Road?
  4. What microtonal keyboard is played in a band scene in La La Land?
  5. What are the two operators called in FM synthesis?

Music 

  1. What was the name of the breakbeat that helped define jungle/drum and bass?
  2. IRCAM is based in which city?
  3. Hip hop came from which New York neighbourhood?
  4. Which genre-defining electronic music record label originated in Sheffield?
  5. Sónar Festival happens in which city?

General 

  1. Who wrote the book Microsound?
  2. Who wrote the composition Kontakte?
  3. How many movements does John Cage’s 4’33” have?
  4. Who wrote the book On the Sensations of Tone?
  5. Which composer wrote the radiophonic work Stilleben?

Scroll down for the answers!

Answers

History

  1. Ada Lovelace
  2. Ondes Martenot
  3. John Chowning
  4. Joseph 
  5. Daphne Oram 

Synthesis

  1. Sine wave
  2. SuperCollider
  3. The Moog
  4. Seaboard
  5. Carrier and Modulator 

Music 

  1. Amen Brother
  2. Paris
  3. The Bronx
  4. Warp Records
  5. Barcelona

General

  1. Curtis Roads
  2. Karlheinz Stockhausen
  3. Three
  4. Hermann von Helmholtz
  5. Kaija Saariaho

Is music writing still relevant?

Dom Aversano

I recently listened to a podcast series by Sean Adams from Drowned in Sound which discusses the decline of music journalism as a profession (not to be conflated with music writing as a whole). It caused me to reflect on why I consider music writing valuable and important, even in an age where anyone can easily publish their thoughts. Why do the stories of music matter, and what would happen if they dissolved into digital chatter?

There’s a quote that is often wheeled out to demonstrate the apparent futility of writing about music — one I found objectionable long before I ever considered music writing.

Writing about music is like dancing about architecture

This is attributed to all sorts of people: Frank Zappa, Laurie Anderson, and Elvis Costello. Probably none of them said it, and in the end, it doesn’t matter. Get a group of musicians together and they will talk about music for hours — so if talking is permitted, why is writing not? Both articulate thought, and as an experienced writer once told me, writing is just thinking clearly.

History is full of composers who wrote. Aaron Copland was a prolific writer, as was Arnold Schoenberg. Before them you had 19th-century composers writing essays on music in a similar way to how 21st-century musicians use social media. Some infamously, such as the master of self-promotion Richard Wagner, who filled an entire book with anti-Semitic bile.

There is no lack of writing in contemporary music culture either. Composers such as John Adams, Philip Glass, Errollyn Wallen and Gavin Bryars have all written autobiographies. Steve Reich recently published Conversations, a book that transcribes his conversations with various collaborators. In South India, the virtuoso singer and political activist T M Krishna is a prolific writer of books and articles on musicology and politics.

Given that music writing has a long and important history, the question that remains is: does it have contemporary relevance, or could the same insights be crowdsourced from the vast amount of information online? In short, do professional opinions on music still matter?

Unsurprisingly, I believe yes.

I do not believe that professional opinion should be reserved for science, politics, and economics; it should apply to music and the arts too. If we are truly no longer willing to fund artistic writing, what does that say about ourselves and our culture? Is music not a serious part of human existence?

Even if musicians at times feel antagonised by professional critics, they ultimately benefit from having experts document and analyse their art. This is not to suggest professionals cannot get it wrong; they most certainly can, as exemplified by this famous example where jazz criticism went seriously awry.

In the Nov. 23, 1961, DownBeat, Tynan wrote, “At Hollywood’s Renaissance Club recently, I listened to a horrifying demonstration of what appears to be a growing anti-jazz trend exemplified by these foremost proponents [Coltrane and Dolphy] of what is termed avant-garde music.

“I heard a good rhythm section… go to waste behind the nihilistic exercises of the two horns.… Coltrane and Dolphy seem intent on deliberately destroying [swing].… They seem bent on pursuing an anarchistic course in their music that can but be termed anti-jazz.”

Despite this commentary being way off the mark, it acts as a historical record of how far ahead of the critics John Coltrane and Eric Dolphy were. Had the critics not documented their opinion, we would not know that this music — which sounds relatively tame by today’s standards — was initially received by some as ‘nihilistic’ and ‘anarchistic’. It is easy to point out the critics’ failure, but the record itself is what lets us measure how advanced Coltrane and Dolphy were.

Conversely, an example where music writing resonated with the zeitgeist was Alex Ross’s book The Rest is Noise. This concise, entertaining history of 20th-century classical music was so influential that it shaped the curation of a year-long festival of music at London’s Southbank Centre. The event changed the artistic landscape of the city by making contemporary classical music accessible and intelligible while demonstrating it could sell out big concert halls. In essence, Ross did what composers had largely failed to do in the 20th century — he brought the public up to date and provided a coherent narrative for a century that felt confusing to many.

The peril of leaving this to social media was demonstrated by this year’s highest-grossing film, Barbie. For the London press preview, social media influencers were given preference over film critics and told, ‘Feel free to share your positive feelings about the film on Twitter after the screening.’ I expected to find the film challenging and provocative but encountered something that felt bland, obvious, and devoid of nuance. I had potentially got caught up in a wave of hype that relied on unskilled influencers and sidelined professional critics.

The world is undoubtedly changing at a rapid pace, and music writing must keep up with it. Some of what has disappeared, I do not miss, such as the ‘build them up to tear them down’ attitude of certain music journalism during the print era. Neither do I miss journalists being the gatekeepers of culture. For all the Internet’s faults, the fact that anyone can publish their work online and develop an audience without the need for an intermediary remains a marvel of the modern era.

However, as with all revolutions, there is a danger of being overzealous about the new at the expense of the old. Music is often referred to metaphorically as an ecosystem, yet given that we are part of nature, it is arguably a literal description. Rip out large chunks of that ecosystem and everything within it may suffer.

For this reason, far from believing that writing about music is like dancing about architecture, I consider it a valuable way to make sense of and celebrate a beautiful art form. If that writing disappears, we will all be poorer for it.

So, in the spirit of supporting contemporary music writers, here is a non-exhaustive list of writers I have benefitted from reading.

Alex Ross / The New Yorker

An authority on contemporary classical music and author of The Rest is Noise.

Philip Sherburne / Pitchfork & Substack

Experienced journalist specialising in experimental electronic music. 

Dr Leah Broad / Substack

A classical music expert who analyses music from a feminist perspective. Author of Quartet.

Ted Gioia / Substack

Outspoken takes on popular culture and music from an ex-jazz pianist. Author of multiple books.

Kate Molleson / BBC Radio

Scottish classical music critic who writes about subjects such as the Ethiopian nun/pianist/composer Emahoy Tsegué-Maryam Guèbrou.

T M Krishna / Various

One of the finest Carnatic music singers of his generation, a mountain climber, and a polemical left-wing voice in Indian culture. 

 

Can music help foster a more peaceful world?

Dom Aversano

Like many, in recent weeks I have looked on in horror at the war in the Middle East and wondered how such hatred and division are possible. Not simply from people directly involved in the war, but also from the entrenched and polarised discourse on social media across the world. Don’t worry, I’m not about to give you another underinformed political opinion; rather, I would like to explore whether music can help foster peace, and help break down the polarisation and division fracturing our societies.

In 2017, when it was clear that polarisation and authoritarianism were on the rise, I bought myself a copy of the Yale historian Timothy Snyder’s book On Tyranny: Twenty Lessons from the Twentieth Century. Written as a listicle, it is full of practical advice on living through strange political times and on how to influence them for the better, with chapter titles such as ‘Defend institutions’, ‘Be kind to our language’, and ‘Make eye contact and small talk’.

What I found missing in the book was a robust call to defend the arts, despite this being one of the first things any would-be authoritarian might attack. It made me wonder: what is it about the arts that makes authoritarians feel instinctively threatened?

What follows are five reflections on why I think music is powerful in the face of inhumanity, and how we can use it to foster peace.

Music ignites the imagination

Whether by creating or listening to it, music ignites and awakens the imagination. Art allows us to envision other worlds. The composer Franz Schubert expressed this idea when praising Mozart in a diary entry he made on June 13th, 1816.

O Mozart, immortal Mozart, what countless images of a brighter and better world thou hast stamped upon our souls!

Conversely, without artists our collective imagination shrinks, priming people for conformity and for fixations on a lost romantic past or a grand nationalist future. This is not to say that art completely disappears, but that it becomes an empty vessel for state propaganda, whereas music that liberates allows us to imagine new realities.

Music offers society multiple leaders

It is a cliché to write about Taylor Swift, but there is no denying she is influential. People filling arenas to listen to Taylor Swift deflect attention from mesmeric demagogues like Donald Trump. It is an influence that cannot summon an army or change tax laws, but it is powerful nevertheless. The singer has said she will campaign against America’s aspiring dictator in the coming US election. Perhaps having a billionaire singer telling people how to vote will do more harm than good, but what is certain is that she will be influential at a pivotal moment in history.

One does not need such dazzling fame to be significant. I count myself lucky enough to have been friends with the late electronic composer Mira Calix, who was also a passionate campaigner against Brexit and nationalism. At the last concert of her classical music, she used this moment on the stage to give a short but heartfelt defence of free movement. It was powerful, even if it went unreported.

While this type of power might seem intangible or questionable, it is more obvious when observed through the lens of history. In the 1960s and 70s, musicians’ protests against the Vietnam War and the Cold War can plausibly be said to have helped hasten the end of those conflicts, as they drew attention to their destructiveness and absurdity while offering alternative visions for the future. In the immortal lyrics of Sun Ra’s Nuclear War, ‘If they push that button, your ass gotta go’. It’s hard to argue with that.

Music is uniting

There are exceptions, but music unites more often than it divides. Audiences are made up of people who might otherwise be divided by politics, class, or religious and non-religious affiliations. Music can bypass belief and connect us to something deeper that is common to all of us.

Unity applies to musicians too. Artists like Miles Davis, Duke Ellington, and Frank Zappa were not necessarily the best instrumentalists of their generation, but they formed the world’s best groups by picking the finest talent of their age. Without sophisticated collaboration, they would not have been capable of achieving everything they did. Their styles of bandleading may have ranged from the conventional to the eccentric, and they were by no means saints or role models, but they held groups together that demonstrated the creative power of collaboration.

Finally, unity can stretch across borders. Music allows one to appreciate the skill and expressivity of someone from a completely different culture and background, while gaining some insight into the way they experience the world. Having our emotions stirred by someone seemingly quite different to us acts as a reminder of their humanity, especially in cases where they have been dehumanised or degraded. Under Narendra Modi’s rule in India, a strong anti-Muslim sentiment has spread, yet India’s finest tabla player, Zakir Hussain, is a Muslim. Every time he plays, he reminds people that beauty and dignity exist within all people.

Music makes you less rigid

Music rejects rigid ideologies. Simplistic and reductive models of music create sound worlds that are dull and predictable. To listen to or create music effectively one needs to be relaxed, flexible, and open to allowing in new forms of music, whether it is from a different region, style, or period of history. By doing so one’s internal world is enriched.

Purists stand in contrast to this. Whether in classical music, jazz, or minimal techno, purism represents a strict and exclusive mentality. To all but themselves, or a certain in-group, the purist’s position seems absurd, representing not a love of music but a love of one type of music; and if that one type did not exist, what would remain?

Music connects us with our emotions

While there may be many complex reasons why we listen to and create music, a simple one is to awaken and express our feelings. Healthy emotions like compassion, hope, or love, need to be felt to be genuine. If our emotional world shuts down, no level of societal status, wealth, or physical health will make us content.

A healthy music culture helps prevent cultural atmospheres dominated by fear and anger, in which it becomes easier to divide people and whip up mobs. A lot is made of the importance of intellectual freedom, but it is equally important to be emotionally free. The hate, anger, and recriminations that have spread from the war in the Middle East could be tempered if people took some time to listen to or create music, which connects us to deeper emotions and creates a calm that helps prevent us from fanning the flames of war.

For these reasons, I believe music can help foster a more peaceful world.