An interview with Blockhead creator Chris Penrose

Dom Aversano

A screenshot from Blockhead

Blockhead is an unusual sequencer with an unlikely beginning. In early 2020, as the pandemic struck, Chris Penrose was let go from his job in the graphics industry. After receiving a small settlement package, he combined it with his life savings and used it to develop a music sequencer that operates in a distinctively different manner from anything else available. By October 2023, three years after starting the project, he was working full-time on Blockhead, supporting it through a Patreon page even though the software was still in alpha.

The sequencer has gained a cult following made up of fans as much as users, all eager to approach music-making from a different angle. It is not hard to see why: in Blockhead everything is easily malleable, interactive, and modulatable. The software works in a cascade-like manner, with automation, instruments, and effects at the top of the sequencer affecting those beneath them, and all of these can be shifted, expanded, and contracted easily.

When I speak to Chris, I encounter someone honest and self-deprecating, qualities which I imagine contribute to people’s trust in the project. After all, you don’t find many promotional videos that contain the line ‘Obviously, this is all bullshit’. There is something refreshingly DIY and brave about what he is doing, and I was curious to know more about what motivated him, so I arranged to talk with him via Zoom to discuss what set him off on this path.

What led you to approach music sequencing from this angle? There must be some quite specific thinking behind it.

I always had this feeling that if you have a canvas and you’re painting, there’s an almost direct cognitive connection between whatever you intend in your mind for this piece of art and the actual actions that you’re performing. You can imagine a line going from the top right to the bottom left of the canvas, and there is a connection between this action that you’re taking with a paintbrush pressing against the canvas, moving from the top right down to the bottom left.

Do you think that your time in the graphics industry helped shape your thinking on music?

When it comes to taking the idea of painting on a canvas and bringing it into the digital world, I think programs like Photoshop have fared very well in maintaining that cognitive mapping between what’s going on in your mind and what’s happening in front of you in the user interface. It’s a pretty close mapping between what’s going on physically with painting on a canvas and what’s going on with the computer screen, keyboard and mouse.

How do you see this compared to audio software?

It doesn’t feel like anything similar is possible in the world of audio. With painting, you can represent the canvas with this two-dimensional grid of pixels that you’re manipulating. With audio, it’s more abstract, as it’s essentially a timeline from one point to another, and how that is represented on the screen never really maps with the mind. Blockhead is an attempt to get a little closer to the kind of cognitive mapping between computer and mind, which I don’t think has ever really existed in audio programs.

Do you think other people feel similarly to you? There’s a lot of enthusiasm for what you’re doing, which suggests you have tapped into something that might have been felt by others.

I have a suspicion that people think about audio and sound in quite different ways. For many, the way that digital audio software currently works is very close to the way that they think about sound, and that’s why it works so well for them. They would look at Blockhead and think, well, what’s the point? But I have a suspicion that there’s a whole other group of people who think about audio in a slightly different way and maybe don’t even realise it, as there has never been a piece of software that represents things this way.

What would you like to achieve with Blockhead? When would you consider it complete?

Part of the reason for Blockhead is completely selfish. I want to make music again but I don’t want to make electronic music because it pains me to use the existing software as I’ve lost patience with it. So I decided to make a piece of audio software that worked the way I wanted it to. I don’t want to use Blockhead to make music right now because it’s not done, and whenever I try to make music with Blockhead, I’m just like, no, this is not done. My brain fills with reasons why I need to be working on Blockhead rather than working with Blockhead. So the point of Blockhead is just for me to make music again.

Can you describe your approach to music?

The kind of music that I make tends to vary from the start. I rarely make music that is just layers of things. I like adding little moments in the middle of these pieces that are one-off moments. For instance, a half-second filter sweep in one part of the track. To do that in a traditional DAW, you need to add a filter plugin to the track. Then that filter plugin exists for the entire duration of the track, even if you’re just using it for one moment. It’s silly that it has to exist in bypass mode or 0% wet for the entire track, except in this little part where I want it. The same is true of synthesizers. Sometimes I want to write just one note from a synthesizer at one point in time in the track.

Is it possible for you to complete the software yourself?

At the current rate, it’s literally never going to be finished. The original goal with Patreon was to make enough money to pay rent and food. Now I’m in an awkward position where I’m no longer worrying about paying rent, but it’s nowhere near the point of hiring a second developer. So I guess my second goal with funding would be to make enough money to hire a second person. I think one extra developer on the project would make a huge difference.

It is hard not to admire what Chris is doing. It is a giant project, and to have reached the stage it has with only one person working on it is impressive. Whether the project continues to grow, and whether he can hire other people, remains to be seen, but it is a testament to the importance of imagination in software design. What is perhaps most attractive of all is that it is one person’s clear and undiluted vision of what the software should be, and that this vision has resonated with so many people across the world.

If you would like to find out more about Blockhead or support the project, you can visit its Patreon page.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work at Liner Notes.

Steve Reich’s exploration of technology through music

Dom Aversano

Photo by Peter Aidu

The New York composer Steve Reich did not just participate in the creation of a new style of classical music; he helped establish a new kind of composer. Previously, the word composer evoked the archetype of a quill-wielding child prodigy who had composed several symphonies before adulthood — finding perhaps its purest embodiment in Wolfgang Amadeus Mozart — whereas Reich represented a composer who gradually and determinedly developed his talent in a more relatable manner. At the age at which Mozart was on his deathbed composing his Requiem, Reich was struggling to establish himself in New York, driving taxis to make ends meet.

A key source of Reich’s inspiration was atypical of the classical music tradition, in which composers tended to draw inspiration from nature, religion, romantic love, classical literature, and other art forms; by contrast, Reich’s career was ignited by ideas he derived from electronic machines.

In what is now musical folklore, the young composer set up two tape recorders in his home studio with identical recordings of the Pentecostal preacher Brother Walter proclaiming ‘It’s gonna rain’. Reich pressed play on both machines and to his astonishment found the loops were perfectly synchronised. That initial synchronisation then began to drift as one machine played slightly faster than the other, causing the loops to gradually move out of time, thereby giving rise to a panoply of fascinating acoustic and melodic effects that would be impossible to anticipate or imagine without the use of a machine. The experiment formed the basis for Reich’s famous composition It’s Gonna Rain and established the technique of phasing (I have written a short guide to Reich’s three forms of phasing beneath this article).

While most composers would have considered this a curious home experiment and moved on, Reich, ever the visionary, sensed something deeper that formed the basis for an intense period of musical experimentation lasting almost a decade. In a video explaining the creation of It’s Gonna Rain, he describes the statistical improbability of the two tape loops being so precisely aligned.

And miraculously, you could say by chance, you could say by divine gift, I would say the latter, but you know I’m not going to argue about that, the sound was exactly in the centre of my head. They were exactly lined up.

To the best of my knowledge, this is the first time in classical music that someone attributed intense, even divine, musical inspiration to an interaction with an electronic machine. How one interprets the claim of divinity is irrelevant; the significant point is that it demonstrates the influence of machines on modern music not simply as tools, but as a fountain of ideas and profound inspiration.

In a 1970 interview with fellow composer Michael Nyman, Reich described his attitude and approach to the influence of machines on music.

People imitating machines was always considered a sickly trip; I don’t feel that way at all, emotionally (…) the kind of attention that kind of mechanical playing asks for is something we could do with more of, and the “human expressive quality” that is assumed to be innately human is what we could do with less of now.

While phasing became Reich’s signature technique, his philosophy was summed up in a short and fragmentary essay called Music as a Gradual Process. It contained insights into how he perceived his music as a deterministic process, revealed slowly and wholly to the listener.

I don’t know any secrets of structure that you can’t hear. We all listen to the process together since it’s quite audible, and one of the reasons it’s quite audible is because it’s happening extremely gradually.

Despite the clear influence of technology on Reich’s work, there also exists an intense criticism of technology that clearly distinguishes his thinking from any kind of technological utopianism. For instance, Reich has consistently been dismissive of electronic sounds and made the following prediction in 1970.

Electronic music as such will gradually die and be absorbed into the ongoing music of people singing and playing instruments.

His lack of interest in electronic sounds remains to this day, and with the exception of the early work Pulse Music (1969), he has never used electronically synthesised sounds. However, this should not be confused with a sweeping rejection of modern technology or a purist attitude towards traditional instruments. Far from it.

Reich was an early adopter of audio samplers, using them to insert short snippets of speech and sound into his music from the 1980s onwards. A clear demonstration of this can be found in his celebrated work Different Trains (1988). The composition documents the long train journeys Reich took between New York and Los Angeles from 1938 to 1941 when travelling between his divorced parents. He then harrowingly juxtaposed this with the train journeys happening at the same time in Europe, where Jews were being transported to death camps.

For the composition, Reich recorded samples of his governess, who accompanied him on these journeys, a retired Pullman porter who worked on the same train line, and three Holocaust survivors. He transcribed their natural voice melodies and used them to derive melodic material for the string quartet that accompanies the sampled voices. This technique employs technology to draw attention to minute details of the human voice that are easily missed without this fragmentary and repetitive treatment. As with Reich’s early composition It’s Gonna Rain, it is a use of technology that emphasises and magnifies the humanity in music, rather than seeking to replace it.

Having trains act as a source of musical and thematic inspiration demonstrates, once again, Reich’s willingness to be inspired by machines, though he was by no means alone in this specific regard. There is a rich 20th-century musical tradition of compositions inspired by trains, including works such as Billy Strayhorn’s Take the ‘A’ Train (the signature tune of the Duke Ellington Orchestra), the Brazilian composer Heitor Villa-Lobos’s The Little Train of the Caipira, and the Finnish composer Kaija Saariaho’s Stilleben.

Reich’s interrogation of technology finally reaches its zenith in his large-scale work Three Tales — an audio-film collaboration with visual artist Beryl Korot. It examines three technologically significant moments of the 20th century: the Hindenburg disaster, the atomic bomb tests at Bikini Atoll, and the cloning of Dolly the sheep. In Reich’s words, they concern ‘the physical, ethical, and religious nature of the expanding technological environment.’ As with Different Trains, Reich recorded audio samples of speech to help compose the music, this time using the voices of scientists and technologists such as Richard Dawkins, Jaron Lanier, and Marvin Minsky.

These later works have an ominous, somewhat apocalyptic feel, hinting at the possibility of a dehumanised and violent future while still maintaining a sense of the beauty and affection humanity contains. Throughout his career, Reich has used technology as both a source of inspiration and a tool for creation, in a complicated relationship that is irreducible to sweeping terms like optimistic or pessimistic. Instead, Reich uses music to reflect upon some of the fundamental questions of our age, challenging us to ask ourselves what it means to be human in a hi-tech world.

 


A short guide to three phasing techniques Reich uses

There are three phasing techniques that I detect in Steve Reich’s early music which I will briefly outline.

First is a continuous form of phasing. A clear demonstration of this is the composition It’s Gonna Rain (1965). With this technique, the phase relationship between the two voices is not measurable in normal musical terms (e.g. ‘16th notes apart’) but exists in a state of continuous change, making it difficult to pin down at any given moment. An additional example of this technique can be heard in the composition Pendulum Music.

The second is a discrete form of phasing. A clear demonstration of this is the composition Clapping Music (1972). With this technique, musicians jump from one exact phase position to another without any intermediary steps, making the move discrete rather than gradual. Since the piece is built on a cycle of 12 beats, there are 12 possible phase positions, each of which is explored in the composition, thereby completing the full phase cycle.

The third is a combination of continuous and discrete phasing. A clear demonstration of this is Piano Phase (1967). With this technique, musicians shift gradually from one phase position to another and then settle there for a while. In Piano Phase one musician plays slightly faster than the other until they reach a new phase position, which they hold for some time before making another gradual shift to the next position. An additional example of this technique can be heard in the composition Drumming.
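To make these processes more concrete, here is a minimal sketch in Haskell (my own illustration, not part of the original guide). It rotates a 12-step pattern through its discrete phase positions, in the spirit of Clapping Music, and computes how two loops of slightly different lengths drift apart, in the spirit of It’s Gonna Rain; the pattern and loop lengths are assumptions chosen purely for illustration.

```haskell
-- A sketch of the two basic phase behaviours described above.
-- The 12-step pattern and the loop lengths are illustrative assumptions.

-- Discrete phasing: jump the second part ahead by whole steps.
rotate :: Int -> [a] -> [a]
rotate n xs = drop k xs ++ take k xs
  where k = n `mod` length xs

clapPattern :: [Int]   -- 1 = clap, 0 = rest
clapPattern = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0]

-- Continuous phasing: two loops of slightly different lengths drift apart
-- by a fixed amount on every repetition, only realigning once the
-- accumulated drift adds up to a whole loop.
driftAfter :: Double -> Double -> Int -> Double
driftAfter slowLen fastLen n = fromIntegral n * (slowLen - fastLen)

main :: IO ()
main = do
  putStrLn "Discrete phase positions (0 and 12 are back in unison):"
  mapM_ (putStrLn . showRow) [0 .. 12]
  putStrLn "Drift in seconds between a 1.00s loop and a 0.99s loop:"
  mapM_ (print . driftAfter 1.00 0.99) [0, 10, 50, 100]
  where
    showRow n = show n ++ ": " ++ concatMap show (rotate n clapPattern)
```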

Music Hackspace is running an online workshop, Making Generative Phase Music with Max/MSP, on Wednesday January 17th at 17:00 GMT.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work in his Substack publication, Liner Notes.

Music Hackspace Christmas Quiz

Dom Aversano

History

  1. Which 19th-century mathematician predicted computer-generated music?
  2. What early electronic instrument did Olivier Messiaen use in his composition Trois petites liturgies de la Présence Divine?
  3. Who invented FM synthesis? 
  4. What was the first name of the French mathematician and physicist who invented Fourier analysis?
  5. Oramics was a form of synthesis invented by which British composer?

Synthesis

  1. What is the name given to an acoustically pure tone?
  2. What music programming language was named after a particle accelerator?
  3. What synthesiser did The Beatles use on their 1969 album Abbey Road?
  4. What microtonal keyboard is played in a band scene in La La Land?
  5. What are the two operators called in FM synthesis?

Music 

  1. What was the name of the breakbeat that helped define jungle/drum and bass?
  2. IRCAM is based in which city?
  3. Hip hop came from which New York neighbourhood?
  4. Which genre-defining electronic music record label originated in Sheffield?
  5. Sónar Festival happens in which city?

General 

  1. Who wrote the book Microsound?
  2. Who wrote the composition Kontakte?
  3. How many movements does John Cage’s 4’33” have?
  4. Who wrote the book On the Sensations of Tone?
  5. Which composer wrote the radiophonic work Stilleben?

Scroll down for the answers!

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Answers

History

  1. Ada Lovelace
  2. Ondes Martenot
  3. John Chowning
  4. Joseph 
  5. Daphne Oram 

Synthesis

  1. Sine wave
  2. SuperCollider
  3. The Moog
  4. Seaboard
  5. Carrier and Modulator 

Music 

  1. The Amen break (from The Winstons’ ‘Amen, Brother’)
  2. Paris
  3. The Bronx
  4. Warp Records
  5. Barcelona

General

  1. Curtis Roads
  2. Karlheinz Stockhausen
  3. Three
  4. Hermann von Helmholtz
  5. Kaija Saariaho

Is music writing still relevant?

Dom Aversano

I recently listened to a podcast series by Sean Adams from Drowned in Sound which discusses the decline of music journalism as a profession (not to be conflated with music writing as a whole). It caused me to reflect on why I consider music writing valuable and important, even in an age where anyone can easily publish their thoughts. Why do the stories of music matter, and what would happen if they dissolved into digital chatter?

There’s a quote that is often wheeled out to demonstrate the apparent futility of writing about music — one I found objectionable long before I ever considered writing about music myself.

Writing about music is like dancing about architecture

This is attributed to all sorts of people: Frank Zappa, Laurie Anderson, and Elvis Costello. Probably none of them said it, and in the end, it doesn’t matter. Get a group of musicians together and they will talk about music for hours — so if talking is permitted, why is writing not? Both articulate thought, and as an experienced writer once told me, writing is just thinking clearly.

History is full of composers who wrote. Aaron Copland was a prolific writer, as was Arnold Schoenberg. Before them, 19th-century composers wrote essays on music in a similar way to how 21st-century musicians use social media. Some did so infamously, such as the master of self-promotion Richard Wagner, who filled an entire book with anti-Semitic bile.

There is no lack of writing in contemporary music culture either. Composers such as John Adams, Philip Glass, Errollyn Wallen and Gavin Bryars have all written autobiographies. Steve Reich recently published Conversations, a book that transcribes his conversations with various collaborators. In South India, the virtuoso singer and political activist T M Krishna is a prolific writer of books and articles on musicology and politics.

Given that music writing has a long and important history, the question that remains is: does it have contemporary relevance, or could the same insights be crowdsourced from the vast amount of information online? In short, do professional opinions on music still matter?

Unsurprisingly, I believe yes.

I do not believe that professional opinion should be reserved only for science, politics, and economics; it should apply to music and the arts too. If we are truly no longer willing to fund writing about the arts, what does this say about us and our culture? Is music not a serious part of human existence?

Even if musicians at times feel antagonised by professional critics, they ultimately benefit from having experts document and analyse their art. This is not to suggest professionals cannot get it wrong; they most certainly can, as in this famous case where jazz criticism went seriously awry.

In the Nov. 23, 1961, DownBeat, Tynan wrote, “At Hollywood’s Renaissance Club recently, I listened to a horrifying demonstration of what appears to be a growing anti-jazz trend exemplified by these foremost proponents [Coltrane and Dolphy] of what is termed avant-garde music.

“I heard a good rhythm section… go to waste behind the nihilistic exercises of the two horns.… Coltrane and Dolphy seem intent on deliberately destroying [swing].… They seem bent on pursuing an anarchistic course in their music that can but be termed anti-jazz.”

Despite being way off the mark, this commentary also acts as a historical record of how far ahead of the critics John Coltrane and Eric Dolphy were. Had the critics not documented their opinions, we would not know that this music — which sounds relatively tame by today’s standards — was initially received by some as ‘nihilistic’ and ‘anarchistic’. It is easy to point out where the critics failed, but their failure also underlines just how advanced Coltrane and Dolphy were.

Conversely, an example where music writing resonated with the zeitgeist was Alex Ross’s book The Rest is Noise. This concise, entertaining history of 20th-century classical music was so influential that it formed the basis of a year-long festival of music at London’s Southbank Centre. The event changed the artistic landscape of the city by making contemporary classical music accessible and intelligible while demonstrating it could sell out big concert halls. In essence, Ross did what composers had largely failed to do in the 20th century — he brought the public up to date and provided a coherent narrative for a century that felt confusing to many.

The peril of leaving this to social media was demonstrated by this year’s biggest-grossing film, Barbie. For the London press preview, social media influencers were given preference over film critics and told, ‘Feel free to share your positive feelings about the film on Twitter after the screening.’ I expected to find the film challenging and provocative but encountered something that felt bland, obvious, and devoid of nuance. Perhaps I had been caught up in a wave of hype built by influencers while professional critics were sidelined.

The world is undoubtedly changing at a rapid pace, and music writing must keep up with it. Some of what has disappeared, I do not miss, such as the ‘build them up to tear them down’ attitude of certain music journalism during the print era. Neither do I miss journalists being the gatekeepers of culture. For all the Internet’s faults, the fact that anyone can publish their work online and develop an audience without the need for an intermediary remains a marvel of the modern era.

However, as with all revolutions, there is a danger of being overzealous about the new at the expense of the old. Music is often referred to metaphorically as an ecosystem, yet given that we are a part of nature, surely it is an accurate description. Rip out large chunks of that ecosystem and the consequences may be that everything within it suffers.

For this reason, far from believing that writing about music is like dancing about architecture, I consider it a valuable way to make sense of and celebrate a beautiful art form. If that writing disappears, we will all be poorer for it.

So, in the spirit of supporting contemporary music writers, here is a non-exhaustive list of some writers I have benefitted from reading.

Alex Ross / The New Yorker

An authority on contemporary classical music and author of The Rest is Noise.

Philip Sherburne / Pitchfork & Substack

Experienced journalist specialising in experimental electronic music. 

Dr Leah Broad / Substack

A classical music expert who analyses music from a feminist perspective. The author of Quartet.

Ted Gioia / Substack

Outspoken takes on popular culture and music from an ex-jazz pianist. Author of multiple books.

Kate Molleson / BBC Radio

Scottish classical music critic who writes about subjects such as the Ethiopian nun/pianist/composer Emahoy Tsegué-Maryam Guèbrou.

T M Krishna / Various

One of the finest Carnatic music singers of his generation, a mountain climber, and a polemical left-wing voice in Indian culture. 

 

Can music help foster a more peaceful world?

Dom Aversano

Like many, in recent weeks I have looked on in horror at the war in the Middle East and wondered how such hatred and division are possible, not simply among people directly involved in the war, but also in the entrenched and polarised discourse on social media across the world. Don’t worry, I’m not about to give you another underinformed political opinion; rather, I would like to explore whether music can help foster peace in the world and help break down the polarisation and division fracturing our societies.

In 2017, when it was clear that polarisation and authoritarianism were on the rise, I bought myself a copy of the Yale historian Timothy Snyder’s book On Tyranny: Twenty Lessons from the Twentieth Century. Written as a listicle, it is full of practical advice on living through strange political times and on how to influence them for the better, with chapter titles such as ‘Defend institutions’, ‘Be kind to our language’, and ‘Make eye contact and small talk’.

What I found missing in the book was a robust call to defend the arts, despite their being among the first things any would-be authoritarian might attack. It made me wonder: what is it about the arts that makes authoritarians feel so instinctively threatened?

What follows are five reflections on why I think music is powerful in the face of inhumanity, and how we can use it to foster peace.

Music ignites the imagination

Whether by creating or listening to it, music ignites and awakens the imagination. Art allows us to envision other worlds. The composer Franz Schubert expressed this idea when praising Mozart in a diary entry he made on June 13th, 1816.

O Mozart, immortal Mozart, what countless images of a brighter and better world thou hast stamped upon our souls!

Conversely, without artists our collective imagination shrinks, priming people for conformity and for fixations on a lost romantic past or a grand nationalist future. This is not to say that art completely disappears, but that it becomes an empty vessel for state propaganda. Liberating music, by contrast, allows us to imagine new realities.

Music offers society multiple leaders

It is a cliché to write about Taylor Swift, but there is no denying she is influential. People filling arenas to listen to her draw attention away from mesmeric demagogues like Donald Trump. It is an influence that cannot summon an army or change tax laws, but it is powerful nevertheless. The singer has said she will campaign against America’s aspiring dictator in the coming US election. Perhaps having a billionaire singer telling people how to vote will do more harm than good, but what is certain is that she will be influential at a pivotal moment in history.

One does not need such dazzling fame to be significant. I count myself lucky enough to have been friends with the late electronic composer Mira Calix, who was also a passionate campaigner against Brexit and nationalism. At the last concert of her classical music, she used this moment on the stage to give a short but heartfelt defence of free movement. It was powerful, even if it went unreported.

While this type of power might seem intangible or questionable, it is more obvious when observed through the lens of history. In the 1960s and 70s, musicians’ protests against the Vietnam War and the Cold War can plausibly be said to have helped hasten their end, as they drew attention to the destructiveness and absurdity of these conflicts while offering alternative visions for the future. In the immortal lyrics of Sun Ra’s Nuclear War, ‘If they push that button, your ass gotta go’. It’s hard to argue with that.

Music is uniting

There are exceptions, but music generally unites more than it divides. Audiences are made up of people who might otherwise be divided by politics, class, or religious and non-religious affiliations. Music can bypass belief and connect us to something deeper that is common to all of us.

Unity applies to musicians too. Artists like Miles Davis, Duke Ellington, and Frank Zappa were not necessarily the best instrumentalists of their generation, but they formed the world’s best groups by picking the finest talent of their age. Without sophisticated collaboration, they would not have been capable of achieving everything they did. Their styles of bandleading may have ranged from the conventional to the eccentric, and they were by no means saints or role models, but they held groups together that demonstrated the creative power of collaboration.

Finally, unity can stretch across borders. Music allows one to appreciate the skill and expressivity of someone from a completely different culture and background, while gaining some insight into the way they experience the world. Having our emotions stirred by someone seemingly quite different to us acts as a reminder of their humanity, especially in cases where they have been dehumanised or degraded. Under Narendra Modi’s rule of India, a strong anti-Muslim sentiment has spread, yet India’s finest tabla player is Zakir Hussain, a Muslim. Every time he plays he reminds people that beauty and dignity exist within all people.

Music makes you less rigid

Music rejects rigid ideologies. Simplistic and reductive models of music create sound worlds that are dull and predictable. To listen to or create music effectively one needs to be relaxed, flexible, and open to new forms of music, whether from a different region, style, or period of history. By doing so, one enriches one’s internal world.

Purists stand in contrast to this. Whether in classical music, jazz, or minimal techno, purism represents a strict and exclusive mentality. To all but themselves — or a certain in-group — purists’ positions seem absurd, representing not a love of music but a love of one type of music; and if that music did not exist, what would remain?

Music connects us with our emotions

While there may be many complex reasons why we listen to and create music, a simple one is to awaken and express our feelings. Healthy emotions like compassion, hope, or love need to be felt to be genuine. If our emotional world shuts down, no level of societal status, wealth, or physical health will make us content.

A healthy music culture helps prevent cultural atmospheres dominated by fear and anger, in which it becomes easier to divide people and whip up mobs. A lot is made of the importance of intellectual freedom, but it is equally important to be emotionally free. The hate, anger, and recriminations that have spread from the war in the Middle East could be tempered if people took some time to listen to or create music, which connects us to deeper emotions and creates a calm that helps prevent us from fanning the flames of war.

For these reasons, I believe music can help foster a more peaceful world.

 

 

 

 

 

 

Should fair use allow AI to be trained on copyrighted music?

Dom Aversano

This week the composer Ed Newton-Rex brought the ethics of AI into focus when he resigned from his role in the Audio team at Stability AI, citing a disagreement with the fair use argument used by his ex-employer to justify training its generative AI models on copyrighted works.

In a statement posted on Twitter/X he explained the reasons for his resignation.

For those unfamiliar with ‘fair use’, this claims that training an AI model on copyrighted works doesn’t infringe the copyright in those works, so it can be done without permission, and without payment. This is a position that is fairly standard across many of the large generative AI companies, and other big tech companies building these models — it’s far from a view that is unique to Stability. But it’s a position I disagree with.
I disagree because one of the factors affecting whether the act of copying is fair use, according to Congress, is “the effect of the use upon the potential market for or value of the copyrighted work”. Today’s generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on. So I don’t see how using copyrighted works to train generative AI models of this nature can be considered fair use.

As Newton-Rex states, this is quite a standard argument made by companies using copyrighted material to train their AI. In fact, Stability AI recently submitted a 23-page document to the US Copyright Office arguing its case. Within it, the company says it has trained its Stable Audio model on ‘800,000 recordings and corresponding songs’, going on to state:

These models analyze vast datasets to understand the relationships between words, concepts, and visual, textual or musical features, much like a student visiting a library or an art gallery. Models can then apply this knowledge to help a user produce new content. This learning process is known as training.

This highly anthropomorphised argument is, at the very least, questionable. AI models are not like students for obvious reasons: they do not have bodies, emotions, or life experience. Furthermore, as Stability AI’s own document testifies, they do not learn in the way humans learn; if a student were to study 800,000 pieces of music over a ten-year period, that would require analysing around 219 different songs a day.

The contrast with how humans learn and think was highlighted by the American linguist and cognitive scientist Noam Chomsky in his critique of Large Language Models (LLMs).

The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.

The issue is further complicated by the language emerging from the AI community, which varies from the anthropomorphic (‘co-pilot’) to the deistic (‘godlike’) to the apocalyptic (‘breakout scenarios’). Specifically with Stability AI, the company awkwardly evokes Abraham Lincoln’s Gettysburg Address when writing on its website that it is creating ‘AI by the people for the people’ with the ambition of ‘building the foundation to activate humanity’s potential’.

While the circumstances are of course materially different, there is nevertheless a certain echo here of the civilising mission used to morally rationalise the economic rapaciousness of empire. To justify the permissionless use of copyrighted artwork on the basis of a mission to ‘activate humanity’s potential’ in a project ‘for the people’ is excessively moralistic and unconvincing. If Stability AI wants its project to be ‘by the people’, it should have artists explicitly opt in before using their work; the problem is that many will not, rendering the models perhaps not useless, but far less effective.

This point was underscored by the venture capital firm Andreessen Horowitz, which recently released a rather candid statement to this effect:

The bottom line is this: imposing the cost of actual or potential copyright liability on the creators of AI models will either kill or significantly hamper their development.

Although in principle supportive of generative AI, Newton-Rex does not ignore the economic realities behind its development. In a statement that I will finish with, he succinctly and eloquently brings into focus the power imbalance at play and its potential destructiveness:

Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works. I don’t see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright.

If you have an opinion you would like to share on this topic, please feel free to comment below.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work in his Substack publication, Liner Notes.

In defence of the Iklectik Art Lab

Dom Aversano

There are moments when you know you’ve really travelled down an avant-garde rabbit hole. I experienced one while watching Elektro Moskva, an essay documentary on Soviet-era synthesisers, at the Iklectik Art Lab in Lambeth, South London. At one point in the film, a synth is played that divides the octave into something like 70 tones. By most definitions it was not a pleasant sound, and it wasn’t just me who thought so. Tony, the resident cat, had had enough, and let out a prolonged howl that drowned out the synth and turned every head in the audience towards this alpha feline, in what felt like a clear admonishment from the animal kingdom: having conquered the world, did we not have something better to do than listen to the sound of a (frankly crap) synth droning away in the crumbling remains of a communist dystopia?

Well, Tony, sorry to disappoint you, but no.

The value of Iklectik to London’s music scene is hard to quantify, as it has made space for many artistic activities that might otherwise be filtered out, not least the music hacking scene. The acoustic music hacking group Hackoustic has put on regular events in the appropriately named Old Paradise Yard for about eight years. In no small part this is because Eduard Solaz and Isa Barzizza have always been gracious hosts, willing to sit down with artists and treat them with respect and fairness. Unfortunately, this appears not to have been reciprocated by the owners of the land, who are now threatening imminent eviction and wish to transform the site into the kind of homogenous office space that turns metropolises into overpriced, hollowed-out, dull places.

I spoke to the founder of Iklectik, Eduard Solaz, who had the following to say.

Why are you being evicted from Old Paradise Yard and when are you expected to leave?


This decision came quickly after the Save Waterloo Paradise campaign mobilised nearly 50,000 supporters and persuaded Michael Gove to halt the development project, something we have been campaigning for over this last year. Our public stance against the controversial plans has resulted in this punitive action against IKLECTIK and the other 20 small businesses here at Old Paradise Yard. Currently, despite not yet having permission for the full redevelopment, Guy’s and St Thomas’ Foundation are refusing to extend the lease of Eat Work Art, the site leaseholder.

What impact will this development have on the arts and the environment?

For more than nine years, we, along with musicians, artists, and audiences, have collaboratively cultivated a unique space where individuals can freely explore and showcase groundbreaking music and art while experiencing the forefront of experimental creativity. London needs, now more than ever, to safeguard grassroots culture.


From an environmental perspective, this development is substantial and is expected to lead to a significant CO2 emissions footprint. Consequently, it poses a potential threat to Archbishop’s Park, a Site of Importance for Nature Conservation that serves as a vital green space for Lambeth residents and is home to a diverse range of wildlife. It also puts Westminster’s status as a UNESCO World Heritage Site at risk.

Do you see hope in avoiding the eviction, and if so, what can people do to prevent it?

There is hope. In my opinion, the GSTT Foundation, operating as a charitable organisation, should reconsider its decision and put an end to this unjust and distressing situation. We encourage all of our supporters to reach out to the foundation and advocate for an end to this unfair eviction.

Here you can find more information to help us: https://www.iklectik.org/saveiklectik

To get a sense of what this means for London’s music hacking community, I also spoke to Tom Fox, a lead organiser of Hackoustic, which puts on regular nights at Iklectik.

Can you describe why Iklectik is significant to you and the London arts scene? 

Iklectik is one of London’s hidden gems, and as arts venues all over the UK are dying out, it has been a really important space for people like us to be able to showcase our work. We’ve had the privilege of hosting well over 100 artists in this space through the Hackoustic Presents nights and it helped us, and others, find their tribe. We’ve made so many friends, met their families, met their kids, found like-minded people and collaborated on projects together. We’ve had people sit in the audience, and get inspired by artists who then went on to make their own projects and then present with us. Some of our artists have met their life partners at our events! The venue isn’t just a place to watch things and go home, they’re meeting places, networking places, social gatherings and a place to get inspired. I doubt all of these things would have happened if Iklectik weren’t such a special place, run by such special people. 

Do you think there is a possibility that Michael Gove might listen?

It’s the hope that gets you! I’m a big believer in hope. It’s a very powerful thing. I don’t have much hope in Michael Gove, however. Or the current government in general. But, you know, there’s always hope. 

How to take action

Undoubtedly, Iklectik is up against a bigger opponent, but defeat is not a foregone conclusion, especially since Michael Gove has halted the development. There is a genuine opportunity for Old Paradise Yard to stay put.

Here is what you can do to help…

On Iklectik’s website, there are four actions that can be taken to help try to prevent the eviction. In particular, write to Michael Gove and to the GSTT Foundation.

For those in the UK, you can attend Hackoustic’s event this Saturday 11th November.

Having collaborated with the Iklectik Art Lab, we at Music Hackspace would like to wish Eduard Solaz, Isa Barzizza, and all the other artists and people who work at Old Paradise Yard the best in their struggle to remain situated there.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work in his Substack publication, Liner Notes.

Live coding – an interview with Lizzie Wilson (Digital Selves)

Dom Aversano

Photo by Antonio Roberts / @hellocatfood

The live coding scene has been growing for years. For many, the idea of watching someone create music in code might not hold immediate appeal, yet live coders now play at top nightclubs, experimental music venues, and festivals, and as the world becomes more code literate the scene is likely to keep growing.

Curious to know more about these digital improvisers, I sat down for a chat with Lizzie Wilson (Digital Selves), a leading musician in the field who learned the art of music coding in Leeds’s lively scene and now lives in London.

Did you have a background in either music or programming before you got into live coding?

I grew up with traditional instruments, playing piano and guitar, though I always found I had a bit of trouble with coordination. I also found it limiting to be tied down to expressing ideas musically through physicality. I always did that on the side and really enjoyed it. Then I studied mathematics at university. So not really coding, but obviously it underpins a lot of the ideas in coding.

I didn’t start coding until I found out about Algorave and live coding. It was through one of my good friends, Joanne Armitage. My friends would run little house parties and she would rock up with a laptop and start doing live coding, and I remember seeing it and thinking, Oh, that’s really cool, I’d love to do a bit of this.

Which city was this?

At the time I was based in Leeds, Yorkshire, because that’s where many people were based, and there was a lot happening in the city. This was around 2015/2016.

I didn’t know much about coding or how to code. So I started to learn a bit and pick stuff up, and it felt really intuitive and fast to learn. So it was a really exciting experience for me.

It’s quite rare to find coding intuitive or easy to learn.

Yeah, I had tried a few more traditional ways. I bought a MIDI keyboard and Ableton. While I really enjoyed that, there was something about live coding that made me spend a whole weekend not talking to anyone and just getting really into it. I think that’s, as you say, quite rare, but it’s exciting when it happens.

That’s great. Were you using Tidal Cycles?

Yeah, it was Tidal Cycles. So Joanne was using SuperCollider, which is, you know, a really big program. When I first started I wanted to use SuperCollider because that was all I knew about. So I tried to learn SuperCollider, but there were a lot of audio concepts that I didn’t know about at that time and it was very coding intensive. It was quite a lot for someone who didn’t know much about either at the time, so I never really got into SuperCollider.

Then I went to an algorave in Leeds and I saw Alex McLean performing using Tidal Cycles. I remember that performance really well. The weekend after I thought, you know, I’m going to download this and try it out. At that time Alex — who wrote the software — was running a lot of workshops and informal meetups in the area. So there was a chance to meet up with other people who were interested in it as well.

Tidal Cycles Code / by Lizzie Wilson

Was this a big thing in Leeds at that time?

Yes, definitely around Yorkshire. I’m sure there were people in London in the late 2000s who were starting off. In the early 2010s, there were a lot of people working on it, because people were employed by universities in Yorkshire, and it’s got this kind of academic-adjacent vibe, with people organising conferences around live coding.

There was a lot happening in Yorkshire around that time, and there still is. Sheffield now tends to be the big place where things are based, but we’re starting to create communities down in London as well and across the UK. So yes, I think Yorkshire is definitely the informal home of it.

I’m curious about what you said earlier about the limitations of physicality. To invert that — what do you consider the liberating ideas that drew you to code and made it feel natural for you?

I think it’s the fact that it’s so tightly expressed in language. I like to write a lot anyway, so that makes it very intuitive for someone like me. I like to think through words. So I can type out exactly what I want a kick drum sample to do: play two times as fast, or four times as fast. Using words to make connections between what you want the sounds to do is what drew me to it, and I think working this way allows you to achieve a level of complexity quite quickly.

With a traditional digital audio workstation, you have to copy and paste a lot and decide, for instance, on the fourth time round I want to change this bar, and then you have to zoom into where the fourth set of notes is. There’s a lot of copy and pasting and manual editing. I found being able to express an idea in a really conjunct and satisfying way in code exciting. It allows you to achieve a level of complexity quite quickly that produces interesting music.
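As a rough illustration of what she describes (my example, not taken from the interview), this is how such transformations read in Tidal Cycles, assuming a running Tidal/SuperDirt session with its stock “bd” (kick) and “sn” (snare) samples:

```haskell
-- Assumes a live Tidal Cycles session with SuperDirt running.

d1 $ sound "bd sn bd sn"                      -- a plain kick/snare pattern

d1 $ fast 2 $ sound "bd sn bd sn"             -- the same pattern, twice as fast

d1 $ every 4 (fast 2) $ sound "bd sn bd sn"   -- sped up only every fourth cycle
```

The last line is the kind of ‘change it on the fourth time round’ idea she mentions, expressed as a single function rather than copied and pasted bars.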

The live aspect of performance places you within a deeper tradition of improvisation; however, code is more frequently associated with meticulous engineering and planning. How does improvisation work with code?

I think that’s something really interesting. If you think about coding in general, it tends to be, say, you want to make a product, so you go away and you write some code and it does something. This way of coding is very different because you do something, you try it out, and then you ask, does this work? Yes or no. That’s kind of cool, and you see the process happening in real-time, rather than it just being a piece of code that is run and then produces a thing.

Photo by Jonathan Reus

Part of the thrill of improvisation comes from the risk of making mistakes publicly, which makes it exciting for the audience and for the artists. How do you feel about improvising?

At first, I always found it quite scary, whereas now I find it enjoyable. That is not to say I am completely fine now, but you get through this process of learning to accept the error or learning to go where it takes you. So yeah, I find the level of unpredictability and never knowing what’s going to happen a really interesting part of it.

How much of an idea do you have of what you’re going to do before performing?

There are people who are a bit more purist and start completely from scratch. They do this thing called blank slate coding, where they have a completely blank screen and then over the performance they build it up. The more time you spend learning the language, the more you feel confident at accessing different ideas or concepts quicker, but I like to have a few ideas and then improvise around them. When I start performing I have some things written on the screen and then I can work with them.

It’s not like one way is more righteous than the other, and people are quite accepting of that. You don’t have to start completely from scratch for it to be considered live coding, but there are different levels of blindness and improvisation that people focus on.

It seems like there are more women involved in live coding than in traditional electronic scenes. Is that your experience?

Yes, and there has been a conscious effort to do that. It’s been the work of a lot of other women before me who’ve tried hard to make sure that if we’re putting on a gig there are women involved in the lineup. This also raises questions like, how do we educate other women? How do we get them to feel comfortable? With women specifically, the idea of failure and of making mistakes can be difficult. There is some documentation on this, for instance, a paper by Dr Joanne Armitage, Spaces to Fail In, that I think is really interesting and can help with how to explore this domain.

It’s not just women though. I think there are other areas that we could improve on. Live coding is not a utopia, but I think people are trying to make it as open a space as possible. I think this reflects some of the other ideas of open-source software, like freedom and sharing.

Introversion of Sacrifice EP by Digital Selves (Lizzie Wilson)

What other live coders inspire you?

I would say, if I’m playing around the UK, I would always watch out for sets from +777000 (with Nunez on visuals), Michael-Jon Mizra, yaxu, heavy lifting, dundas, Alo Alik, eye measure, tyger blue plus visualists hellocatfood, Joana Chicau and Ulysses Popple, mahalia h-r.

More internationally, I really like the work of Renick Bell, spednar, {arsonist}, lil data, nesso, hogobogobogo & gibby-dj.

If someone goes to an algorave what can they expect? Is the audience mostly participants, or is there an audience for people who don’t code?

I think you always get a mixture of both. There are some people who are more interested in reading and understanding the code. Often they forget to dance because they’re just standing there and thinking, but there is dancing. There should be dancing! I feel like, if you’re making dance music, it’s nice when people actually dance to it!

It depends on the person as well. There are people who are a lot more experimental and make harsh noise that pushes the limits of what is danceable. Then there are people who like to make music that is very danceable, beat-driven, and arranged. If you go to an Algorave you shouldn’t expect one end of the spectrum or the other; you will probably get a bit of both.

Over the past few years, we’ve done quite a few shows in London at Corsica Studios, which is a very traditional nightclub space, with a large dark room and a big sound system, as well as more experimental art venues like Iklectik, which is also in London. Then there’s the other end of the spectrum where people do things in a more academic setting. So it’s spread out through quite a lot of places.

My personal favourite is playing in clubs where people actually dance, because I think that’s more fun and exciting than say art galleries, where it’s always a bit sterile. It’s not as fun as being in a place where the space really invites you to let go a little bit and dance. That’s the nice thing about playing in clubs.

Bandcamp was recently bought by Songtradr, which then proceeded to lay off 50% of the staff. Traditionally Bandcamp has been seen as an oasis for independent recording musicians amid what are otherwise generally considered a series of bad options. Do you have any thoughts on this, especially given that you have released music on Bandcamp?

When I’ve done releases before we haven’t released with Spotify. I’ve only done releases through Bandcamp because, as you say, it felt like this safe space for artists, or an oasis. It was the one platform where artists weren’t held to ransom for releasing their own music. It’s been a slow decline since it was acquired by Epic Games last year. When that happened I winced a little bit, because it was like, well, what’s going to happen now? It felt quite hard to trust that they were going to do anything good with it.

Obviously, it’s hard. I think the solution is for more people to run independent projects, co-ops, and small ventures. Then to find new niches and new ways for musicians to exist and coexist in music, get their releases out, and think of new solutions to support artists and labels. At times like this, it’s always a bit, you know, dampened by this constant flow of like, oh, we’ve got this platform that’s made for artists and now it’s gone, but people always find ways. Bandcamp came out of a need for a new kind of platform. So without it, maybe there’ll be something else that will come out of the new need.

I’m hopeful. I like to be hopeful.

To discover more about Lizzie Wilson (Digital Selves) you can follow the links to her website, Bandcamp, Twitter, and Instagram.

Can AI help us make humane and imaginative music?

Dom Aversano

There is a spectrum upon which AI music software exists. On one end are programs which create entire compositions, and on the other are programs that help people create music. In this post I will focus on the latter part of the spectrum and ask the question: can AI help us compose and produce music in humane and imaginative ways? I will explore this question through a few different AI music tools.

Tone Transfer / Google

For decades the dominance of keyboard interaction has constrained computer music. Keyboards elegantly arrange a large number of notes but limit the control of musical parameters beyond volume and duration. Furthermore, with the idiosyncratic arrangement of a keyboard’s notes, it is hard to work — or even think — outside of the 12-note chromatic scale. Even with the welcome addition of pitch modulation wheels and microtonal pressure-sensitive keyboards such as Roli’s fascinating Seaboard, keyboards still struggle to express the nuanced pitch and amplitude modulations quintessential to many musical cultures.

For this reason, Magenta’s Tone Transfer represents a potentially revolutionary change in computer music interaction. It allows you to take a sound or melody from one instrument and transform it into a completely different-sounding instrument while preserving the subtleties and nuances of the original performance. A cello melody can be transformed into a trumpet melody, the sound of birdsong into fluttering flute sounds, or a sung melody converted into a number of traditional concert instruments. It feels like the antidote to autotune: a tool that captures the nuance, subtlety, and humanity of the voice, while offering the potential to transform it into something quite different.

In practice, the technology falls short of its ambitions. I sang a melody and transformed it into a flute sound, and while my singing ability is unlikely to threaten the reputation of Ella Fitzgerald, the flute melody that emerged sounded like the flautist was drunk. However, given the pace at which machine learning is progressing, one can expect it to be much more sophisticated in the coming years, and I essentially regard this technology as an early prototype.

Google has admirably made the code open source and the musicians who helped train the machine learning algorithms are prominently credited for their work. You can listen to audio snippets of the machine learning process, and hear the instrument evolve in complexity after 1 hour, 3 hours, and 10 hours of learning.

It is not just Google developing this type of technology — groups like Harmonai and Neutone are doing similar things, and any one of them stands to transform computer music interaction by anchoring us back to the most universal instrument: the human voice.

Mastering / LANDR

Although understanding how mastering works is relatively straightforward, understanding how a mastering engineer perceives music and uses their technology is far from simple since there is as much art as there is science to their craft. Therefore, is this a process that can be devolved to AI?

That is the assumption behind LANDR’s online mastering service, which allows you to upload a finished track for mastering. Once it is processed, you are given the option to choose from three style settings (Warm, Balanced, Open) and three levels of loudness (Low, Medium, High), with a master/original toggle to compare the changes made.
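LANDR does not say what these settings correspond to, but one plausible way to read the three loudness levels is as different integrated-loudness (LUFS) targets. The sketch below shows only that single step, using the soundfile and pyloudnorm libraries of my own choosing, with target values that are guesses rather than LANDR’s; a real mastering chain (and presumably LANDR’s) also involves EQ, compression, and limiting.

# Illustrative only: LANDR does not publish its processing chain. This shows
# what picking a "loudness level" might amount to if it maps to an integrated
# loudness (LUFS) target; the targets below are my own guesses.
import soundfile as sf
import pyloudnorm as pyln

TARGETS_LUFS = {"low": -16.0, "medium": -14.0, "high": -11.0}  # hypothetical values

def normalise_to_target(in_path, out_path, level="medium"):
    data, rate = sf.read(in_path)              # the finished, unmastered mix
    meter = pyln.Meter(rate)                   # ITU-R BS.1770 loudness meter
    current = meter.integrated_loudness(data)  # measure the track as it is
    adjusted = pyln.normalize.loudness(data, current, TARGETS_LUFS[level])
    sf.write(out_path, adjusted, rate)

normalise_to_target("final_mix.wav", "final_mix_loud.wav", "high")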

I uploaded a recent composition to test it. The result was an improvement on the unmastered track, but the limited options for modifying it gave the feeling of a one-size-fits-all approach, inadequate for those who intend to carefully shape their musical creations at every stage of production. However, this might not be an issue for lower-budget projects, or for those who simply want to improve their tracks quickly for release.

Wanting to understand the AI technology, I searched for more precise details, and while the company says that ‘AI isn’t just a buzzword for us’, I could only find a quote that does little to describe how the technology actually works.

Our legendary, patented mastering algorithm thoroughly analyzes tracks and customizes the processing to create results that sound incredible on any speaker.

While LANDR’s tool is useful for quick and cheap mastering, it feels constrained and artistically unrewarding if you want something more specific. The interface also feels like it limits the potential of the technology. Why not allow text prompts such as: “cut the low-end rumble, brighten the high end, and apply some subtle vintage reverb and limiting”?

Fastverb / Focusrite

Unlike mastering, reverb is an effect rather than a general skill or profession, making it potentially simpler to devolve aspects of it to AI. Focusrite’s Fastverb uses AI to analyse your audio and then prescribes settings based on that analysis, which you can go on to tweak. The company is vague about how its AI technology works, simply stating:

FAST Verb’s AI is trained on over half a million real samples, so you’ll never need to use presets again.

I used the plugin on a recent composition. The results were subtle but an improvement. I adjusted some of the settings and it sounded better. Overall, I had the impression of a tasteful reverb that would work with many styles of music.

Did the AI help significantly in arriving at the desired effect? It is hard to say. For someone with very limited experience of such tools, I would assume so; but for someone already confident with the effect, I doubt it saves much time at all.

I am aware, however, that there is the potential for snobbery here. After all, if a podcaster can easily add a decent reverb to their show, or a guitarist some presence to their recording, that’s no bad thing. They can, if they want, go on to learn more about these effects and fine-tune them themselves. For this purpose, it represents a useful tool.

Overview

LANDR’s mastering service and Focusrite’s Fastverb are professional tools that I hope readers of this article will be tempted to try. However, while there is clearly automation at work, how the AI technology works is unclear. If the term AI is used to market tools, there should be clarification of what exactly it is — otherwise one might as well just write ‘digital magic’. By contrast, the team behind Google’s Tone Transfer has made its code open source, described in detail how it uses machine learning, and credited the people involved in training the models.

I expect that tools that attempt to speed up or improve existing processes, such as mastering and applying reverb, will lower the barrier to entry into audio engineering, but I have yet to see evidence that they will improve it. In fact, they could degrade and homogenise audio engineering by encouraging people to work faster but with less skill and care.

By contrast, the machine learning algorithms that Google, Harmonai, Neutone, and others are working on could create meaningful change. They are not mature technologies, but there is the seed of something profound in them. The ability to completely transform the sounds of music while preserving the performance, and the potential to bring the voice to the forefront of computer music, could prove genuinely revolutionary.

What follows from the collapse of NFTs?

Dom Aversano

Almost a quarter of a century after Napster fired a torpedo into the record industry, one might have expected stability to have returned, but the turmoil continues well into the new century without any sign of resolution.

The story is familiar. MP3 collections never felt like record collections, making them ripe to be superseded by full-catalogue music streaming. Streaming is unprofitable for the companies selling it and unsustainable for the musicians on it, so in a bid to save themselves, not music, the platforms are now transforming into rivers of algorithmically recommended muzak. Ironically, the oldest medium, vinyl, is in the healthiest state, and while it is inspiring to know people still go out and buy records, it does not help solve the problem of digital music.

Given this context, it was always tempting to see NFTs — or non-fungible tokens — as the saviour of digital music. But with Sam Bankman-Fried now standing trial and an estimated 95% of NFTs worthless, we should be asking: what went wrong?

It is beyond the scope of this article to explain what NFTs are — which has been done well elsewhere — but what can be said is that the heavy nomenclature they carry can make them feel impenetrable and confusing: there is blockchain, minting, wallets, cryptocurrency, drops, Bitcoin, the Metaverse, Web 3, smart contracts, and so on. The time required to make sense of all this — much like an NFT — is a luxury few can afford, providing a wall of obscurantism that imbues the culture with an aura of mystique and intellectualism.

My experience took me down a winding path. Initially, I found NFTs interesting, as they seemed like an innovative method for digital ownership that could help fund the creation of new music and provide fans with a strong connection to their favourite artists, but as my research accumulated, their appeal steadily diminished. A combination of too-good-to-be-true promises and scammy behaviour made it all seem murky, if not at times actively sinister.

While I am not closed off to the possibility of something valuable emerging from this world (for instance, smart contracts seem genuinely interesting), based on the evidence, NFTs were always doomed to fail.

Here is why.

  1. The torrent of terminology in this culture makes it easy to be blinded by the science and lose sight of the obvious — for instance, cryptocurrencies, despite the name, are not currencies. There is barely a thing on Earth you can buy with crypto. It is actually an asset untethered to economic activity, or, put more simply, an elaborate gambling token. Just as nobody wants to appear a philistine for not appreciating a certain art form, nobody wants to feel like a Luddite for not understanding a particular technology, but spend your evenings and weekends dispassionately breaking down the terminology and you’ll find little of substance remains.

  2. Most people try to understand cryptocurrency in a purely technical sense and ignore the sociological context of its emergence. Bitcoin arose shortly after the 2008 financial crisis, when mistrust of banking was at an all-time high. At that time, having a so-called currency that circumvented banks was music to people’s ears, and the Hollywood-superhero manner in which Bitcoin entered the world, through a mysterious unknown figure called Satoshi Nakamoto, only added to its anarcho-utopian appeal.

  3. Blockchain sounds cooler than it is. Some blockchains create huge environmental damage, have very long transaction times, and are vulnerable to privacy breaches and theft. If you lose the password to your digital wallet, or if it falls into someone else’s hands, you may lose everything, without any recourse to institutional support or insurance. Most concerning of all, far from being a tool for honesty and transparency, cryptocurrency is regularly used by organised criminals for money laundering. For these reasons, blockchain has been referred to at various points as ‘a solution in search of a problem’.

  4. Experts have much less faith in cryptocurrency than the public. Nouriel Roubini, an economist who famously predicted the 2007–08 subprime mortgage crisis, has called crypto ‘a scam’ and a ‘Ponzi scheme’ that preys on young people, people on lower incomes, and minorities; he advises people to ‘stay away’ and refers to those who run the industry as ‘crooks’ who ‘literally belong in jail’.

Even if none of the above really dents your belief in the validity of cryptocurrencies/NFTs/blockchains, there is a gaping flaw that is impossible to ignore.

NFTs have no intrinsic value.

I can put a photo of the Taj Mahal on a blockchain and link it to you, but that doesn’t mean you own a brick of it.

Writer and programmer Stephen Diehl, who is a vociferous critic of cryptocurrencies, offered the following analogy about NFTs in a Twitter/X thread.

There is one comparable market to NFTs: The Star Naming Market (…) Back in the 90s some entrepreneurs found you could convince the public to buy “rights” to name yet-unnamed stars after their loved ones by selling entries in an unofficial register (…) You’d buy the “rights” to a name [sic] the star and they’d send you a piece of paper claiming that you were now the owner of said star. Nothing was actually done in this transaction, you simply paid someone to update a register about a ball of plasma millions of light years away. (…) NFTs are the evolution of this grift in a more convoluted form. Instead of allegedly buying a star, you’re allegedly buying a JPEG from an artist. Except you’re not buying the image, you’re buying a digitally signed URL to the image. 
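To make that point concrete, here is a simplified, invented picture of what the buyer’s purchase typically records. The field names follow the common ERC-721 metadata convention, but the contract address, token ID, and URLs are made up for illustration; the artwork itself is just a file sitting wherever the metadata points.

# A simplified, made-up example of what "owning" an NFT usually amounts to.
on_chain_record = {
    "contract": "0x...",                   # the NFT contract (elided)
    "token_id": 1234,                      # which token within that contract
    "owner": "0x...",                      # the buyer's wallet address (elided)
    "token_uri": "ipfs://.../1234.json",   # a pointer to metadata, not the artwork
}

# What token_uri typically resolves to: a small metadata file that, in turn,
# links out to the image. The image lives wherever that URL happens to point.
metadata = {
    "name": "Example Artwork #1234",
    "description": "An invented listing for illustration.",
    "image": "https://example.com/artwork-1234.jpg",
}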

With NFTs now largely worthless, it’s hard to argue with Diehl’s analysis. So where does this leave us?

Few genuinely innovative ideas remain, but a company called JKBX has proposed that people can buy royalty shares of their favourite musicians’ songs. The problem is, even if it worked, would it be healthy to have fans treating their favourite artists’ songs as investments? Would listening to All You Need is Love feel the same if you were waiting for your share of a royalty payment to come through? Is turning music into a weird stock market for royalties really the best thing we can dream up?

After nearly a quarter of a century of unsuccessfully trying to resurrect the 20th-century music recording industry for the 21st century, perhaps it is time to ask: was this ever the right goal? MP3s, streaming, and NFTs did not balance the boat, which still rocks about aimlessly on stormy seas.

Perhaps the original goal was never ambitious or imaginative enough. After all, why resurrect an old method of distributing music when you could create a new one? NFTs were attractive to people for many reasons, but a major one was that they promised a new internet culture — Web 3, the metaverse, and so on — that could offer ordinary people economic dignity. That people found this appealing is grounds for hope, as it demonstrates there is an appetite for a radical departure from the stagnant and centralised world of the social media empires.

The question that remains is: can we imagine it and build it? And if not now, when? If music wishes to remain a relevant art form, it can’t afford another quarter-century of floundering.

Do you have thoughts on what you have read? If so, please leave your comments below.

Further information on cryptocurrency/NFTs/blockchain

The Missing Crypto Queen — Podcast by investigative journalist Jamie Bartlett

The Case Against Crypto — Essay by programmer Stephen Diehl

Crypto is dead — Debate between Yanis Varoufakis & Viktor Tábori