Steve Reich’s exploration of technology through music

Dom Aversano

New York composer Steve Reich did not just participate in the creation of a new style of classical music; he helped establish a new kind of composer. Previously, the word composer evoked an archetype of a quill-wielding child prodigy who had composed several symphonies before adulthood — finding perhaps its purest embodiment in Wolfgang Amadeus Mozart — whereas Reich represented a composer who gradually and determinedly developed his talent in a more relatable manner. At the same age that Mozart was on his deathbed composing his Requiem, Reich was struggling to establish himself in New York, driving taxis to make ends meet.

A key source of Reich’s inspiration was atypical of the classical music tradition, in which composers tended to draw inspiration from nature, religion, romantic love, classical literature, and other art forms; by contrast, Reich’s career was ignited by ideas he derived from electronic machines.

In what is now musical folklore, the young composer set up two tape recorders in his home studio with identical recordings of the Pentecostal preacher Brother Walter proclaiming ‘It’s gonna rain’. Reich pressed play on both machines and to his astonishment found the loops were perfectly synchronised. That initial synchronisation then began to drift as one machine played slightly faster than the other, causing the loops to gradually move out of time, thereby giving rise to a panoply of fascinating acoustic and melodic effects that would be impossible to anticipate or imagine without the use of a machine. The experiment formed the basis for Reich’s famous composition It’s Gonna Rain and established the technique of phasing (I have written a short guide to Reich’s three forms of phasing beneath this article).
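The mechanics of the effect are easy to simulate. Below is a minimal Python sketch of this continuous drift, using an illustrative loop length and speed ratio rather than the actual values of Reich’s tape machines:

```python
# Two copies of the same loop, one machine running slightly fast.
# Loop length and speed ratio are illustrative assumptions only.
loop_seconds = 1.8   # hypothetical length of the "It's gonna rain" loop
speed_ratio = 1.01   # machine B runs 1% faster than machine A

for minute in range(0, 9, 2):
    t = minute * 60
    pos_a = t % loop_seconds                  # playhead of machine A
    pos_b = (t * speed_ratio) % loop_seconds  # playhead of machine B
    offset = (pos_b - pos_a) % loop_seconds
    print(f"after {minute} min the loops are offset by {offset:.2f} s")
```

The offset grows, wraps around, and eventually returns to zero: the loops drift out of and back into unison, the trajectory the phasing process traces over the piece’s duration.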

While most composers would have considered this a curious home experiment and moved on, Reich, ever the visionary, sensed something deeper that formed the basis for an intense period of musical experimentation lasting almost a decade. In a video explaining the creation of It’s Gonna Rain, he describes the statistical improbability of the two tape loops having aligned so perfectly.

And miraculously, you could say by chance, you could say by divine gift, I would say the latter, but you know I’m not going to argue about that, the sound was exactly in the centre of my head. They were exactly lined up.

To the best of my knowledge, this was the first time in classical music that someone attributed intense or divine musical inspiration to an interaction with an electronic machine. How one interprets the claim of divinity is irrelevant; the significant point is that it demonstrates the influence of machines on modern music, not simply as tools but as a fountain of ideas and profound inspiration.

In a 1970 interview with fellow composer Michael Nyman, Reich described his attitude and approach to the influence of machines on music.

People imitating machines was always considered a sickly trip; I don’t feel that way at all, emotionally (…) the kind of attention that kind of mechanical playing asks for is something we could do with more of, and the “human expressive quality” that is assumed to be innately human is what we could do with less of now.

While phasing became Reich’s signature technique, his philosophy was summed up in a short and fragmentary essay called Music as a Gradual Process. It contained insights into how he perceived his music as a deterministic process, revealed slowly and wholly to the listener.

I don’t know any secrets of structure that you can’t hear. We all listen to the process together since it’s quite audible, and one of the reasons it’s quite audible is because it’s happening extremely gradually.

Despite the clear influence of technology on Reich’s work, it also contains an intense criticism of technology, one that distinguishes his thinking from any kind of technological utopianism. For instance, Reich has consistently been dismissive of electronic sounds and made the following prediction in 1970.

Electronic music as such will gradually die and be absorbed into the ongoing music of people singing and playing instruments.

His indifference to electronic sounds remains to this day, and with the exception of the early work Pulse Music (1969), he has never used electronically synthesised sounds. However, this should not be confused with a sweeping rejection of modern technology or a purist attitude towards traditional instruments. Far from it.

Reich was an early adopter of audio samplers, using them to insert short snippets of speech and sound into his music from the 1980s onwards. A clear demonstration of this can be found in his celebrated work Different Trains (1988). The composition documents the long train journeys Reich took between New York and Los Angeles from 1938 to 1941 when travelling between his divorced parents, and harrowingly juxtaposes them with the train journeys happening at the same time in Europe, where Jews were being transported to death camps.

For the composition, Reich recorded samples of the governess who accompanied him on these journeys, a retired Pullman porter who worked on the same train line, and three Holocaust survivors. He transcribed their natural voice melodies and used them to derive melodic material for the string quartet that accompanies the sampled voices. The technique employs technology to draw attention to minute details of the human voice that are easily missed without this fragmentary and repetitive treatment. As with Reich’s early composition It’s Gonna Rain, it is a use of technology that emphasises and magnifies the humanity in music, rather than seeking to replace it.

Having trains act as a source of musical and thematic inspiration demonstrates, once again, Reich’s willingness to be inspired by machines, though he was by no means alone in this regard. There is a rich 20th-century musical tradition of compositions inspired by trains, including works such as Billy Strayhorn’s Take the ‘A’ Train (made famous by Duke Ellington’s orchestra), Brazilian composer Heitor Villa-Lobos’s The Little Train of the Caipira, and Finnish composer Kaija Saariaho’s Stilleben.

Reich’s interrogation of technology reaches its zenith in his large-scale work Three Tales — an audio-film collaboration with visual artist Beryl Korot. It examines three technologically significant moments of the 20th century: the Hindenburg disaster, the atom bomb tests at Bikini Atoll, and the cloning of Dolly the sheep. In Reich’s words, they concern ‘the physical, ethical, and religious nature of the expanding technological environment.’ As with Different Trains, Reich recorded audio samples of speech to help compose the music, this time using the voices of scientists and technologists such as Richard Dawkins, Jaron Lanier, and Marvin Minsky.

These later works have an ominous, somewhat apocalyptic feel, hinting at the possibility of a dehumanised and violent future while still maintaining a sense of the beauty and affection humanity contains. Throughout his career, Reich has used technology as both a source of inspiration and a tool for creation in a complicated relationship that is irreducible to sweeping terms like optimistic or pessimistic. Instead, Reich uses music to reflect upon some of the fundamental questions of our age, challenging us to ask ourselves what it means to be human in a hi-tech world.

 


A short guide to three phasing techniques Reich uses

There are three phasing techniques that I detect in Steve Reich’s early music, which I will briefly outline.

First is a continuous form of phasing. A clear demonstration of this is the composition It’s Gonna Rain (1965). With this phasing technique, the phase relationship between the two voices is not measurable in normal musical terms (e.g., ‘16th notes apart’) but exists in a state of continuous change, making it difficult to pin down at any moment. An additional example of this technique can be heard in the composition Pendulum Music.

The second is a discrete form of phasing. A clear demonstration of this is the composition Clapping Music (1972). With this phasing technique, musicians jump from one exact phase position to another without any intermediary steps, making the move discrete rather than gradual. Since the piece is in a time cycle of 12 beats, there are 12 possible phase positions, each of which is explored in the composition, thereby completing the full phase cycle.
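As a rough illustration, the full discrete cycle can be sketched in a few lines of Python. The pattern below is the 12-beat Clapping Music rhythm (‘x’ a clap, ‘.’ a rest):

```python
# One performer holds the pattern while the other jumps one beat at a
# time through all 12 offsets, returning to unison on the final shift.
pattern = "xxx.xx.x.xx."  # Clapping Music's 12-beat pattern

for shift in range(13):
    rotated = pattern[shift % 12:] + pattern[:shift % 12]
    print(f"shift {shift:2d}: {pattern}  |  {rotated}")
```

Shift 0 and shift 12 are identical, which is why the twelfth move closes the cycle and the piece ends back in unison.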

The third is a combination of continuous and discrete phasing. A clear demonstration of this is Piano Phase (1967). With this phasing technique, one musician plays slightly faster than the other until they reach a new phase position, which they settle into for some time before making another gradual shift to the next position. An additional example of this technique can be heard in the composition Drumming.
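A little arithmetic shows why these shifts feel so gradual in performance. The tempo values in this sketch are illustrative assumptions, not Reich’s markings:

```python
# If player two nudges their tempo up by 2 bpm during a phasing
# passage, how long until they are exactly one note ahead?
base_tempo = 72.0    # player one's tempo in notes per minute (assumed)
drift_tempo = 74.0   # player two's slightly faster tempo (assumed)

# player two gains (drift - base) notes per minute, so gaining one
# full note takes 1 / (drift - base) minutes
minutes = 1 / (drift_tempo - base_tempo)
print(f"one-note shift takes {minutes * 60:.0f} seconds")
```

At these values the one-note shift takes half a minute, slow enough for listeners to hear the two parts smear apart and then click into a new alignment.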

Music Hackspace is running an online workshop, Making Generative Phase Music with Max/MSP, on Wednesday January 17th at 17:00 GMT.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work in his Substack publication, Liner Notes.

Is music writing still relevant?

Dom Aversano

I recently listened to a podcast series by Sean Adams from Drowned in Sound which discusses the decline of music journalism as a profession (not to be conflated with music writing as a whole). It caused me to reflect on why I consider music writing valuable and important, even in an age where anyone can easily publish their thoughts. Why do the stories of music matter, and what would happen if they dissolved into digital chatter?

There’s a quote that is often wheeled out to demonstrate the apparent futility of writing about music — one I found objectionable long before I ever considered music writing.

Writing about music is like dancing about architecture

This is attributed to all sorts of people: Frank Zappa, Laurie Anderson, and Elvis Costello. Probably none of them said it, and in the end, it doesn’t matter. Get a group of musicians together and they will talk about music for hours — so if talking is permitted, why is writing not? Both articulate thought, and as an experienced writer once told me, writing is just thinking clearly.

History is full of composers who wrote. Aaron Copland was a prolific writer, as was Arnold Schoenberg. Before them, 19th-century composers wrote essays on music in a similar way to how 21st-century musicians use social media. Some did so infamously, such as the master of self-promotion Richard Wagner, who filled an entire book with anti-Semitic bile.

There is no lack of writing in contemporary music culture either. Composers such as John Adams, Philip Glass, Errollyn Wallen and Gavin Bryars have all written autobiographies. Steve Reich recently published Conversations, a book that transcribes his conversations with various collaborators. In South India, the virtuoso singer and political activist T M Krishna is a prolific writer of books and articles on musicology and politics.

Given that music writing has a long and important history, the question that remains is: does it have contemporary relevance, or could the same insights be crowdsourced from the vast amount of information online? In short, do professional opinions on music still matter?

Unsurprisingly, I believe yes.

I do not believe that professional opinion should be reserved only for science, politics, and economics; it should apply to music and the arts too. If we are truly no longer willing to fund artistic writing, what does this say about ourselves and our culture? Is music not a serious part of human existence?

Even if musicians at times feel antagonised by professional critics, they ultimately benefit from having experts document and analyse their art. This is not to suggest professionals cannot get it wrong; they most certainly can, as in this famous case where jazz criticism went seriously awry.

In the Nov. 23, 1961, DownBeat, Tynan wrote, “At Hollywood’s Renaissance Club recently, I listened to a horrifying demonstration of what appears to be a growing anti-jazz trend exemplified by these foremost proponents [Coltrane and Dolphy] of what is termed avant-garde music.

“I heard a good rhythm section… go to waste behind the nihilistic exercises of the two horns.… Coltrane and Dolphy seem intent on deliberately destroying [swing].… They seem bent on pursuing an anarchistic course in their music that can but be termed anti-jazz.”

Despite this commentary being way off the mark, it acts as a historical record of how far ahead of the critics John Coltrane and Eric Dolphy were. Had the critics not documented their opinion, we would not know how this music — which sounds relatively tame by today’s standards — was initially received by some as ‘nihilistic’ and ‘anarchistic’.

Conversely, an example where music writing resonated with the Zeitgeist was Alex Ross’s book The Rest is Noise. This concise, entertaining history of 20th-century classical music was so influential that it formed the basis of the curation for a year-long festival of music at London’s Southbank Centre. The event changed the artistic landscape of the city by making contemporary classical music accessible and intelligible while demonstrating it could sell out big concert halls. In essence, Ross did what composers had largely failed to do in the 20th century — he brought the public up to date and provided a coherent narrative for a century that felt confusing to many.

The peril of leaving this to social media was demonstrated by this year’s biggest-grossing film, Barbie. For the London press preview, social media influencers were given preference over film critics and told, ‘Feel free to share your positive feelings about the film on Twitter after the screening.’ I expected to find the film challenging and provocative but encountered something that felt bland, obvious, and devoid of nuance. Perhaps I got caught up in a wave of hype built by unskilled influencers while professional critics were sidelined.

The world is undoubtedly changing at a rapid pace, and music writing must keep up with it. Some of what has disappeared, I do not miss, such as the ‘build them up to tear them down’ attitude of certain music journalism during the print era. Neither do I miss journalists being the gatekeepers of culture. For all the Internet’s faults, the fact that anyone can publish their work online and develop an audience without the need for an intermediary remains a marvel of the modern era.

However, as with all revolutions, there is a danger of being overzealous about the new at the expense of the old. Music is often described metaphorically as an ecosystem, yet given that we are part of nature, it is arguably a literal description. Rip out large chunks of that ecosystem and everything within it may suffer.

For this reason, far from believing that writing about music is like dancing about architecture, I consider it a valuable way to make sense of and celebrate a beautiful art form. If that writing disappears, we will all be poorer for it.

So, in the spirit of supporting contemporary music writers, here is a non-exhaustive list of some writers whom I have benefitted from reading.

Alex Ross / The New Yorker

An authority on contemporary classical music and author of The Rest is Noise.

Philip Sherburne / Pitchfork & Substack

An experienced journalist specialising in experimental electronic music.

Dr Leah Broad / Substack

A classical music expert who analyses music from a feminist perspective. The author of Quartet.

Ted Gioia / Substack

Outspoken takes on popular culture and music from an ex-jazz pianist. Author of multiple books.

Kate Molleson / BBC Radio

Scottish classical music critic who writes about subjects such as the Ethiopian nun, pianist, and composer Emahoy Tsegué-Maryam Guèbrou.

T M Krishna / Various

One of the finest Carnatic music singers of his generation, a mountain climber, and a polemical left-wing voice in Indian culture. 

 

Competition – Win one year’s free membership to Music Hackspace

Dom Aversano

We are giving away a year’s free membership – to enter, all you have to do is leave a comment on this page about at least one composer or musician who has greatly influenced your approach to computer music.

We want to know two things.

  1. How has their music affected or influenced you?

  2. An example of a piece of their music you like, and a short description of why.

Anyone who completes the above will be entered into the competition on an equal basis (you are welcome to list more than one person, but this will not improve your chances of winning), with the winner chosen at random and announced on Saturday 4th of November via the Music Hackspace newsletter.

To get the ball rolling, I will provide two examples.

Kaija Saariaho / Vers le blanc

I arrived somewhat late to Kaija Saariaho’s music, attending my first live performance of it only two years before her death this year, yet despite this it has greatly influenced me in the short time I have known it.

Although I have never heard it in full (it has never been released), Saariaho’s simple 1982 electronic composition Vers le blanc captured my imagination.

The composition is a 15-minute glissando from one tone cluster (A–B–C) to another (D–E–F), produced with electronic voices. It raises questions about what is perceptible. For instance, can the change in pitch be heard from moment to moment? Can it be sensed over longer time periods?
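A back-of-envelope calculation suggests why the moment-to-moment question is so hard to answer. Assuming each voice rises a perfect fourth (A up to D is five semitones) spread evenly over the quarter-hour:

```python
# Rate of pitch change in Vers le blanc, assuming a five-semitone
# rise (A -> D) spread evenly over 15 minutes.
duration_s = 15 * 60          # 900 seconds
interval_cents = 5 * 100      # 5 semitones = 500 cents

rate = interval_cents / duration_s
print(f"~{rate:.2f} cents per second, ~{rate * 60:.0f} cents per minute")
```

At roughly half a cent per second, the change in any single moment sits far below the few-cent pitch difference most listeners can detect, yet the drift over a minute (about a third of a semitone) is plainly audible: exactly the perceptual tension the piece trades on.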

The piece made me question what can be considered music. Are they notes if they never fix on a pitch? Can such a simple process over 15 minutes be artistically enjoyable to listen to? What would be the ideal circumstance in which to listen to such music? I experienced this music partly as an artistic object of study and meditation and partly as a philosophical provocation.

Burial / Come Down to Us

Burial’s idiosyncratic approach to technology gives rise to a unique sound. He famously stated in a 2006 interview that he used Soundforge to create his music, without the use of any multitrack sequencing or quantisation. This stripped-down use of technology gives the music an emotional directness and a more human feel.

I find his track Come Down to Us particularly inspiring. At 13 minutes long, it uses a two-part structure. The composition uses audio samples of a transgender person, and it was only after a few years of listening that it occurred to me that the form might describe the subject. At 7 minutes the entire mood and sound of the track changes from apprehensive to triumphant, potentially describing a person undergoing — or having undergone — a psychological or physical transition. Released in 2013, this was long before the divisive culture wars and undoubtedly intended simply as an artistic exploration.

Leave your comment below to enter the competition. Please refer to the guidelines above. The winner will be announced on Saturday 4th of November via the Music Hackspace newsletter. 

Can AI help us make humane and imaginative music?

Dom Aversano

There is a spectrum upon which AI music software exists. On one end are programs which create entire compositions, and on the other are programs that help people create music. In this post I will focus on the latter part of the spectrum and ask: can AI help us compose and produce music in humane and imaginative ways? I will explore this question through a few different AI music tools.

Tone Transfer / Google

For decades the dominance of keyboard interaction has constrained computer music. Keyboards elegantly arrange a large number of notes but limit the control of musical parameters beyond volume and duration. Furthermore, with the idiosyncratic arrangement of a keyboard’s notes, it is hard to work — or even think — outside of the 12-note chromatic scale. Even with the welcome addition of pitch modulation wheels and microtonal pressure-sensitive keyboards such as Roli’s fascinating Seaboard, keyboards still struggle to express the nuanced pitch and amplitude modulations quintessential to many musical cultures.

For this reason, Magenta’s Tone Transfer may represent a revolutionary change in computer music interaction. It allows you to take a sound or melody from one instrument and transform it into a completely different-sounding instrument while preserving the subtleties and nuances of the original performance. A cello melody can be transformed into a trumpet melody, the sound of birdsong into fluttering flute sounds, or a sung melody converted into any of a number of traditional concert instruments. It feels like the antidote to Auto-Tune: a tool that captures the nuance, subtlety, and humanity of the voice, while offering the potential to transform it into something quite different.

In practice, the technology falls short of its ambitions. I sang a melody and transformed it into a flute sound, and while my singing ability is unlikely to threaten the reputation of Ella Fitzgerald, the flute melody that emerged sounded like the flautist was drunk. However, given the pace at which machine learning is progressing, one can expect it to be much more sophisticated in the coming years, and I essentially regard this technology as an early prototype.

Google has admirably made the code open source and the musicians who helped train the machine learning algorithms are prominently credited for their work. You can listen to audio snippets of the machine learning process, and hear the instrument evolve in complexity after 1 hour, 3 hours, and 10 hours of learning.

It is not just Google developing this type of technology — groups like Harmonai and Neutone are doing similar things, and any one of them stands to transform computer music interaction by anchoring it back in the most universal instrument: the human voice.

Mastering / LANDR

Although understanding how mastering works is relatively straightforward, understanding how a mastering engineer perceives music and uses their technology is far from simple since there is as much art as there is science to their craft. Therefore, is this a process that can be devolved to AI?

That is the assumption behind LANDR’s online mastering service, which allows you to upload a finished track for mastering. Once it is processed, you are given the option to choose from three style settings (Warm, Balanced, Open) and three levels of loudness (Low, Medium, High), with a master/original toggle to compare the changes made.

I uploaded a recent composition to test it. The result was an improvement on the unmastered track, but the limited options for modifying it gave the feeling of a one-size-fits-all approach, inadequate for those who intend to carefully shape their musical creations at every stage of production. However, this might not be an issue for lower-budget projects, or for those who simply want to improve their tracks quickly for release.

Wanting to understand the AI technology, I searched for more precise details, and while the company says that ‘AI isn’t just a buzzword for us’, I could only find a quote that does little to describe how the technology actually works.

Our legendary, patented mastering algorithm thoroughly analyzes tracks and customizes the processing to create results that sound incredible on any speaker.

While LANDR’s tool is useful for quick and cheap mastering, it feels constrained and artistically unrewarding if you want something more specific. The interface also feels like it limits the potential of the technology. Why not allow text prompts such as “cut the low-end rumble, brighten the high end, and apply some subtle vintage reverb and limiting”?

Fastverb / Focusrite

Unlike mastering, reverb is an effect rather than a general skill or profession, making it potentially simpler to devolve aspects of it to AI. Focusrite’s Fastverb uses AI to analyse your audio and prescribe settings for it, which you can then go on to tweak. The company is vague about how its AI technology works, simply stating:

FAST Verb’s AI is trained on over half a million real samples, so you’ll never need to use presets again.

I used the plugin on a recent composition. The results were subtle but an improvement. I adjusted some of the settings and it sounded better. Overall, I had the impression of a tasteful reverb that would work with many styles of music.

Did the AI help significantly in arriving at the desired effect? It is hard to say. For someone with very limited experience of such tools, I would assume yes, but for someone already confident with an effect, I doubt it saves much time at all.

I am aware, however, that there is the potential for snobbery here. After all, if a podcaster can easily add a decent reverb to their show, or a guitarist some presence to their recording, that is no bad thing. They can, if they want, go on to learn more about these effects and fine-tune them themselves. For this purpose, it represents a useful tool.

Overview

LANDR’s mastering service and Focusrite’s Fastverb are professional tools that I hope readers of this article will be tempted to try. However, while there is clearly automation at work, how the AI technology works is unclear. If the term AI is used to market tools, there should be clarification of what exactly it is — otherwise one might as well just write ‘digital magic’. By contrast, Google’s Tone Transfer team has made its code open source, described in detail how it uses machine learning, and credited the people involved in training the models.

I expect that the tools that attempt to speed up or improve existing processes, such as mastering and applying reverb, will have the effect of lowering the barrier to entry into audio engineering, but I have yet to see evidence it will improve it. In fact, it could degrade and homogenise audio engineering by encouraging people to work faster but with less skill and care.

By contrast, the machine learning algorithms that Google, Harmonai, Neutone, and others are working on could create meaningful change. They are not mature technologies, but there is the seed of something profound in them. The ability to completely transform the sounds of music while preserving the performance, and the potential to bring the voice to the forefront of computer music, could prove to be genuinely revolutionary.

What follows from the collapse of NFTs?

Dom Aversano

Almost a quarter of a century after Napster fired a torpedo into the record industry, one might have expected stability to have returned, but the turmoil continues well into the new century without any signs of resolution.

The story is familiar. MP3 collections never felt like record collections, making them ripe to be superseded by full-catalogue music streaming. Streaming is unprofitable for the companies selling it and unsustainable for the musicians on it, so in a bid to save themselves, not music, the platforms are now transforming into rivers of algorithmically recommended muzak. Ironically, the oldest medium, vinyl, is in the healthiest state, and while it is inspiring to know people still go out and buy records, it does not help solve the problem of digital music.

Given this context, it was always tempting to see NFTs — or non-fungible tokens — as the saviour of digital music. But with Sam Bankman-Fried now standing trial and an estimated 95% of NFTs worthless, we should be asking: what went wrong?

It is beyond the scope of this article to explain what NFTs are — that has been done well elsewhere — but it can be said that the heavy nomenclature surrounding them makes the subject feel impenetrable and confusing: blockchain, minting, wallets, cryptocurrency, drops, Bitcoin, the Metaverse, Web 3, smart contracts, and so on. The time required to make sense of this — much like an NFT — is a luxury few can afford, providing a wall of obscurantism that imbues the culture with an aura of mystique and intellectualism.

My experience took me down a winding path. Initially, I found NFTs interesting, as they seemed like an innovative method for digital ownership that could help fund the creation of new music and provide fans with a strong connection to their favourite artists, but as my research accumulated their appeal steadily diminished. A combination of too-good-to-be-true promises and scammy behaviour made it seem murky, if not at times actively sinister.

While I am not closed off to the possibility of something valuable emerging from this world (for instance, smart contracts seem genuinely interesting) based on the evidence, NFTs were always doomed to fail.

Here is why.

  1. The torrent of terminology in this culture makes it easy to be blinded by the science and lose sight of the obvious — for instance, cryptocurrencies, despite the name, are not currencies. There is barely a thing on Earth you can buy with crypto. It is actually an asset untethered to economic activity, or simpler yet, an elaborate gambling token. Just as nobody wants to appear a philistine for not appreciating a certain art form, nobody wants to feel like a Luddite for not understanding a particular technology, but spend your evenings and weekends dispassionately breaking down the terminology and you’ll find little of substance remains.

  2. Most people try to understand cryptocurrency in a purely technical sense and ignore the sociological context of its emergence. Bitcoin arose shortly after the 2008 financial crisis, when mistrust of banking was at an all-time high. At this time, having a so-called currency circumventing banks was music to people’s ears, and the Hollywood-superhero manner in which Bitcoin entered the world through a mysterious unknown figure called Satoshi Nakamoto only added to its anarcho-utopian appeal.

  3. Blockchain sounds cooler than it is. Some blockchains cause huge environmental damage, have very long transaction times, and are vulnerable to privacy breaches and theft. If you lose the password to your digital wallet, or if it falls into someone else’s hands, you may lose everything, without any recourse to institutional support or insurance. Most concerning of all, far from being a tool for honesty and transparency, cryptocurrency is regularly used by organised criminals for money laundering. For these reasons, blockchain has been referred to at various points as ‘a solution in search of a problem’.

  4. Experts have much less faith in cryptocurrency than the public. Nouriel Roubini, an economist who famously predicted the 2007–08 subprime mortgage crisis, has called crypto ‘a scam’ and a ‘Ponzi scheme’ that preys on young people, people on lower incomes, and minorities; he advises people to ‘stay away’, referring to those who run the industry as ‘crooks’ who ‘literally belong in jail’.

Even if none of the above really dents your belief in the validity of cryptocurrencies/NFTs/blockchains, there is a gaping flaw that is impossible to ignore.

NFTs have no intrinsic value.

I can put a photo of the Taj Mahal on a blockchain and link it to you, but that doesn’t mean you own a brick of it.

Writer and programmer Stephen Diehl, who is a vociferous critic of cryptocurrencies, offered the following analogy about NFTs in a Twitter/X thread.

There is one comparable market to NFTs: The Star Naming Market (…) Back in the 90s some entrepreneurs found you could convince the public to buy “rights” to name yet-unnamed stars after their loved ones by selling entries in an unofficial register (…) You’d buy the “rights” to a name [sic] the star and they’d send you a piece of paper claiming that you were now the owner of said star. Nothing was actually done in this transaction, you simply paid someone to update a register about a ball of plasma millions of light years away. (…) NFTs are the evolution of this grift in a more convoluted form. Instead of allegedly buying a star, you’re allegedly buying a JPEG from an artist. Except you’re not buying the image, you’re buying a digitally signed URL to the image. 

With NFTs now largely worthless, it’s hard to argue with Diehl’s analysis. So where does this leave us?

Few genuinely innovative ideas remain, but a company called JKBX has proposed letting people buy royalty shares in their favourite musicians’ songs. The problem is, even if it worked, would it be healthy for fans to treat their favourite artists’ songs as investments? Would listening to All You Need is Love feel the same if you were waiting for your share of a royalty payment to come through? Is turning music into a weird stock market for royalties really the best thing we can dream up?

After nearly a quarter of a century of unsuccessfully trying to resurrect the 20th-century music recording industry for the 21st century, perhaps it is time to ask: was this ever the right goal? MP3s, streaming, and NFTs did not steady the boat, which still rocks about aimlessly on stormy seas.

Perhaps the original goal was never ambitious or imaginative enough, after all, why resurrect an old method of distributing music when you could create a new one? NFTs were attractive to people for many reasons, but a major one was they promised a new internet culture — Web 3, metaverse etc. — that could offer ordinary people economic dignity. That people found this appealing is grounds for hope, as it demonstrates there is an appetite for a radical departure from the stagnant and centralised world of the social media empires.

The question that remains is: can we imagine it and build it? And if not now, when? If music wishes to remain a relevant art form, it can’t afford another quarter-century of floundering.

Do you have thoughts on what you have read? If so, please leave your comments below.

Further information on cryptocurrency/NFTs/blockchain

The Missing Cryptoqueen — Podcast by investigative journalist Jamie Bartlett

The Case Against Crypto — Essay by programmer Stephen Diehl

Crypto is dead — Debate between Yanis Varoufakis & Viktor Tábori

How to design a music installation – an interview with Tim Murray-Browne (part 2)

Dom Aversano

In the first part of this interview, artist Tim Murray-Browne discussed his approach to creating interactive installations, and the importance of allowing space for the agency of the audience with a philosophy that blurs the traditional artist/audience dichotomy in favour of a larger-scale collaboration.

In the second part of this interview, we discuss how artificial intelligence and generative processes could influence music in the near future and the potential social and political implications of this, before returning to the practical matters of advice on how to build an interactive music installation and get it seen and heard.

I recently interviewed the composer and programmer Robert Thomas, who envisions a future in which music behaves in a more responsive and indeterminate manner, more akin to software than to the wax cylinder recordings that helped define 20th-century music. In this scenario, fixed recording could become obsolete. Is this how you see the future?

I think the concept of the recorded song is here to stay. In the same way, I think the idea of the gig and concert is here to stay. There are other things being added on top and it may become less and less relevant as time goes on. Just in the way that buying singles has become less relevant even though we still listen to songs. 

I think the most important thing is having a sense of personal connection and ownership. This comes back to agency, where I feel I’m expressing myself through the relationship with this music or belonging to a particular group or community. What I think a lot of musicians and people who make interactive music can get wrong is that, since they take such joy and pleasure in being creatively expressive, they think they can somehow give that joy to someone else without figuring out how to give them some kind of personal ownership of what they’re doing.

As musicians it’s tempting to think we can make a track and then create an interactive version, and that someone’s going to listen to that interactive version of the track and remix it live or change aspects of it, and have this personalised experience that is going to be even better because they had creative agency over it.

I think there’s a problem with that because you’re asking people to do some of the creative work but without the sense of authorship or ownership. I may be wrong about this because in video games you definitely come as an audience and explore the game and develop skill and a personal style that gives you a really personal connection to it. But games and music are very different things. Games have measurable goals to progress through, and often with metrics. Music isn’t like that. Music is like an expanse of openness. There isn’t an aim to make the perfect music. You can’t say this music is 85% good.

How do you see the future?

I agree with Robert in some sense, but where I think we’re going to see the song decline in relevance has less to do with artists creating interactive versions of their work and more to do with people using AI to completely appropriate and remix existing musical works. When those tools become very quick and easy to use I think we will see the song transform into a meme space instead. I don’t see any way to avoid that. I think there will be resistance, but it is inevitable.

In the AI space, there are some artists who are seeing this coming and trying to make the most of it. So instead of trying to stop people from using AI to rip off their work, they’re trying to get a cut of it. Like say, okay, you can use my voice but you’ll give me royalties. I’ve done all of this work to make this voice, it’s become a kind of recognizable cultural asset and I know I’m going to lose control of it, but I want some royalties and to own the quality of this vocal timbre.

Is there a risk in deskilling, or even populism, in a future where anyone can make profound changes to another person’s creative work? The original intention of copyright law was to protect artists’ work from falling out of their hands financially and aesthetically. The supposed democratisation of journalism has largely defunded and deskilled an important profession and created an economy for much less skilled influencers and provocateurs. Might not the same happen to music?

The question of democratisation is problematic. For instance, democracy is good, but there are consequences when you democratise the means of production, particularly in the arts where a big part of what we’re doing is essentially showing off. Once the means of production are democratised, then those who have invested in the skills previously needed lose that capacity to define themselves through them. Instead, everyone can do everything and for this short while, because we’re used to these things being scarce, it suddenly seems like we’ve all become richer. Then pretty soon, we find we’re all in a very crowded room trying to shout louder and louder. It’s like we were in a gig and we took away the stage and now we’re all expecting to have the same status that the musician on the stage had.

I can see your concerns with that, but when it comes to music transforming from being a produced thing to being very quickly made with AI tools by people who aren’t professionals, there will still be winners and losers among professional musicians, and those winners and losers will in part be those who are good at using the tools. There will be those with some kind of artistic vision. And there’ll be those who are good at social media and networking, and good at understanding how to make things go viral.

It’s not that different from how music is now. It takes more than musical talent to become a successful artist as a musician: you’ve got to build relationships with your fans, you have to do all of these other things which maybe you could get away with not doing so much in the past.

Let’s return to the original theme of what makes for a good installation. What advice would you give to someone in the same position now that you were in just over a decade ago when starting Cave of Sounds?

In 2012, when we started building Cave of Sounds, Music Hackspace was a place for people to build things. This was fundamental for me. People there were making software and hardware and there was this sort of default attitude of ‘we built it, now we’re going to show somebody’. We’re going to get up in the front of the room and I’m going to talk to you about this thing, and maybe I’ll play some music on it.

I find the term installation problematic because it comes from this world of the art gallery and of having a space and doing something inside the space where it can’t necessarily just be reduced to a sculpture or something. Whereas, for me, it was just a useful word to describe a musical device where the audience is going to be actively interacting with it, rather than sitting down and watching a professional interact with it. So that shift from a musician on a stage to an audience participating in the work.

I don’t think it necessarily has to begin with a space. It needs a curiosity of interaction. Maybe I’m just projecting what I feel, but what I observed at Music Hackspace is people taking so much enjoyment in building things, and less time spent performing them. Some people really want to get up and perform as musicians. Some people really want to build stuff for the pleasure of building. 

How do you get an installation out into the world?

How to get exhibited is still an ongoing mystery to me, but I will say that having past work that has succeeded means people are more likely to accept new work based on a diagram and description. Generally, having a video of a piece makes it much more likely for people to want to show it. The main place things are shown is in festivals, more than galleries or museums. Getting work into a festival is a question of practical logistics: How many people are going to experience it and how much space and resources does it demand? And then festivals tend to conform to bigger trends – sometimes a bit too much I think as then they end up all showing quite similar works. When we made Cave of Sounds, DIY hacker culture and its connection to grassroots activism was in the air. Today, the focus is the environment, decolonisation, and social justice. Tomorrow there will be other things.

Then, there’s a lot of graft, and a lot of that graft is much easier when you’re younger than when you’re older. I don’t think I could go through the Cave of Sounds process today like I did back then. I’m very happy I did it back then.

What specifically about the Cave of Sounds do you think made it work?

The first shocking success of Cave of Sounds is that when we built it we had a team of eight, and I had a very small fee because I was doing this artist residency, but everyone else was an unpaid volunteer or collaborating artist. And we worked together for eight months to bring it together.

A lot of people came to the first meeting, but from the second meeting onwards, the people who turned up were the eight people making the work who stuck through to the end. I think there’s something remarkable about that. Something about the core idea of the work really resonated with those people, and I think we got really lucky with them. And there was a community that they were embedded in as well. But the fact that everyone made it to the end just shows that there was something kind of magical in the nature of the work and the context of that combination of people.

So a work like Cave of Sounds was possible because we had a lot of people who were very passionate, and we had a diversity of skills, but we also had a bit of an institutional name behind us. We had a budget as well, but it was very small, and most of it did not pay for the work. The budget covered some of the materials, really; a significant amount of labour went into that piece, and it came from people working for passion.

Do you have a dream project or a desire for something you would like to do in the future?

For the past few years I’ve been exploring how to use AI to interpret the moving body so that I can create physical interaction without introducing any assumptions about what kind of movement the body can make. So if I’m making an instrument by mapping movement sensors to sound, I’m not thinking ‘OK this kind of hand movement should make that kind of sound’ but instead training an AI on many hours of sensor data where I’m just moving in my own natural way and asking it ‘What are the most significant movements here?’
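As a loose analogue of this idea (not Murray-Browne’s actual pipeline, which he does not detail here), one classical way to let the data itself define the ‘most significant movements’ is principal component analysis over raw sensor frames:

```python
# Hypothetical sketch: find the dominant axes of variation in raw
# movement-sensor data, with no designer-imposed gesture vocabulary.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(size=(5000, 9))  # stand-in for hours of 9-channel sensor data

centered = frames - frames.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]       # largest variance first

top_axes = eigvecs[:, order[:3]]        # three most significant movement axes
print(top_axes.round(2))
```

The top components could then be mapped to sound parameters, so the instrument responds to whatever variation the mover’s own body actually produces.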

I’m slightly obsessed with this process. It’s giving me a completely different feeling when I interact with the machine, like my actions are no longer mediated by the hand of an interaction designer. Of course, I’m still there as a designer, but it’s like I’m designing an open space for someone rather than boxes of tools. I think there’s something profoundly political about this shift, and I’m drawn to that because it reveals a way of applying AI to liberate people to be individually themselves, rather than using it to make existing systems even more efficient at being controlling and manipulative which seems to be the main AI risk I think we’re facing right now. I could go on more as well – moving from the symbolic to the embodied, from the rational to the intuitive. Computers before AI were like humans with only the left side of the brain. I think they make humans lose touch with their embodied nature. AI adds in the right side, and some of the most exciting shifts I think will be in how we interact with computers as much as what those computers can do autonomously.

So far, I’ve been exploring this with dancers, having them control sounds in real-time but still being able to dance as they dance rather than dancing like they’re trapped inside a land of invisible switches and trigger zones. And in my latest interactive installation Self Absorbed I’ve been using it to explore the latent space of other AI models, so people can morph through different images by moving their bodies. But the dream project is to expand this into a larger multi-person space, a combined virtual and physical realm that lets people influence their surroundings in all kinds of inexplicable ways by using the body. I want to make this and see how far people can feel a sense of connection with each other through full-body interfaces that are too complicated to understand rationally but are so rich and sensitive to the body that you can still find ways to express yourself.

Cave of Sounds was created by Tim Murray-Browne, Dom Aversano, Sus Garcia, Wallace Hobbes, Daniel Lopez, Tadeo Sendon, Panagiotis Tigas, and Kacper Ziemianin with support from Music Hackspace, Sound and Music, Esmée Fairbairne Foundation, Arts Council England and British Council.

To find out more about Tim Murray-Browne you can visit his website or follow him on Substack, Instagram, Mastodon, or X.

A guide to seven powerful programs for music and visuals

Dom Aversano

The British saxophonist Shabaka Hutchings described an approach to learning music that reduces it down to two tasks: the first is to know what to practise, and the second is to practise it. The same approach works for coding, and though it is a simple philosophy, that does not necessarily make it easy. Knowing what to practise can feel daunting amid such a huge array of tools and approaches, making it all the more important to be clear about what you wish to learn so you can then devote yourself without doubt or distraction to the task of studying.

As ever the most important thing is not the tool but the skills, knowledge, and imagination of the person using it. However, nobody wants to attempt to hammer a nail into the wall with a screwdriver. Some programs are more suited to certain tasks than others, so it is important to have a sense of their strengths and weaknesses before taking serious steps into learning them.

What follows is a summary and description of some popular programs to help you navigate your way to what inspires you most, so you can learn with passion and energy.

Pure Data

Pure Data is an open-source programming language for audio and visual (GEM) coding that was developed by Miller Puckette in the mid-1990s. It is a dataflow language where objects are patched together using cords, in a manner appealing to those who like to conceptualise programs as a network of physical objects. 

Getting started in Pure Data is not especially difficult even without any programming experience, since it has good documentation and plenty of tutorials. You can build interesting and simple programs within days or weeks, and with experience, it is possible to build complex and professional programs.

The tactile and playful process of patching things together also represents a weakness of Pure Data, since once your programs become more advanced you need increasing numbers of patch cables, and dragging hundreds – or even thousands – of them from one place to another becomes monotonous work.

Cost: free

Introductory Tutorial 

Official Website

Max/MSP/Jitter and Max for Live

Max/MSP is Pure Data’s sibling, which makes it quite easy to migrate from one program to the other, but there are significant and important differences too. The graphical user interface (GUI) for Max is more refined and allows for organising patch cords in elegant ways that help mental clarity. With Max for Live you have Max built into Ableton – bringing together two powerful programs.

Max has a big community surrounding it in which you can find plenty of tutorials, Discord channels, and a vast library of instruments to pull apart. Just as Pure Data has GEM for visualisation Max has Jitter, in which you can create highly sophisticated visuals. All in all, this represents an incredibly powerful setup for music and visuals.

The potential downsides are that Max is paid software, so if you’re on a small budget Pure Data might be better suited. It also suffers from the same patch cord fatigue as Pure Data, where you can end up attaching cords from one place to another in a repetitive manner.

Cost: $9.99 per month / $399 permanent licence or $250 for students and teachers

Introductory Tutorial

Official Website

SuperCollider

SuperCollider is an open-source language developed by James McCartney and released in 1996, and it is a more traditional programming language than either Pure Data or Max. If you enjoy coding it is an immensely powerful tool where your imagination is the limit when it comes to sound design, since with as little as a single line of code you can create stunning musical output.

However, SuperCollider is difficult, so if you have no programming experience expect to put in many hours before you feel comfortable. Its documentation is inconsistent and written in a way that sometimes assumes a high level of technical understanding. Thankfully, there is a generous and helpful online forum that is very welcoming to newcomers, so if you are determined to learn, do not be put off by the challenge.

One area where SuperCollider lags behind Max and Pure Data is a sophisticated built-in environment for visuals, and although you can use it to create GUIs, they do not have the same elegance as those in Max.

Cost: free

Introductory Tutorial 

Official website

TidalCycles

Though built on SuperCollider, TidalCycles is much easier to learn. Designed for the creation of algorithmic music, it is popular in live coding and algorave music. The language is intuitive and uses music terminology in its syntax, giving people with an existing understanding of music an easy way into coding. There is a community built around it, complete with Discord channels and an active community blog.

The downsides to TidalCycles are that installation is difficult, and that it is a somewhat specialist tool without capabilities as broad as the aforementioned programs.

Cost: free

Introductory Tutorial 

Official Website

P5JS

P5JS is an open-source JavaScript library that is a tool of choice for generative visual artists. The combination of a gentle learning curve and the ease of running it straight from your browser makes it easy to incorporate into one’s life, either as a simple tool for sketching out visual ideas or as something much more powerful, capable of generating world-class works of art.

It is hard to mention P5JS without also mentioning Daniel Shiffman, who has rightly earned a reputation as one of the most charismatic, humorous, and engaging programming teachers. He is the author of a fascinating book called The Nature of Code, which takes inspiration from natural systems and, like P5JS, is open source and freely available.

Cost: free

Introductory Tutorial

Official Website

Tone.js

Like P5JS, Tone.js is a JavaScript library, and one that opens the door to a whole world of musical possibilities in the web browser. In the words of its creators it ‘offers common DAW (digital audio workstation) features like a global transport for synchronizing and scheduling events as well as prebuilt synths and effects’ while allowing for ‘high-performance building blocks to create your own synthesizers, effects, and complex control signals.’

Since it is web-based, one can get a feel for it by delving into some of the examples on offer.

Cost: free

Introductory Tutorial

Official website

TouchDesigner

In TouchDesigner you can create magnificent live 3D visuals without the need for coding. Its visual modular environment allows you to patch together modules in intuitive and creative ways, and it is easy to input MIDI or OSC if you want to incorporate a new visual dimension into your music. To help you learn there is an active forum, live meetups, and many tutorial videos. While the initial stages of using TouchDesigner are not difficult, one can become virtuosic, with the option of even writing your own code in the programming language Python.

There is a showcase of work made using TouchDesigner on their website which gives you a sense of what it is capable of.

Cost: Pro $2,200 / Commercial $600 / free for personal and non-commercial use.

Introductory Tutorial

Official Website

Bishi: a journey in music & technology

Bishi’s talk explores her journey in music & technology, stemming from her cultural roots and charting the steps from musician, composer & performer to founder and technologist. The talk will feature some live sitar MIDI-mapping performance.

Singer, electronic rock sitarist, composer, producer and performer BISHI was born in London of Bengali heritage. A multi-instrumentalist, BISHI received musical training in both Hindustani and Western classical styles, including the study of the sitar under Gaurav Mazumdar, a senior disciple of Ravi Shankar.

She has written & recorded two albums, produced by Matthew Hardern: Nights at The Circus and Albion Voice. Bishi co-produced her third album, ‘Let My Country Awake,’ with Jeff Cook.

Bishi is the founder of WITCiH: The Women in Technology Creative Industries Hub, a platform elevating women and non-binary people in tech through commissions, performances & podcasts. She fronted a documentary for BBC Radio 4 exploring the future of technology in music.

Bishi’s collaborations & commissions for the stage have included The London Symphony Orchestra, The Kronos Quartet, Yoko Ono’s Meltdown, The Science Gallery, Nick Knight’s Showstudio & session work with Sean Ono Lennon, Luke Vibert, Richard Norris, Daphne Guinness & Tony Visconti. Bishi was recently a Tanpura soloist for the City of London Sinfonia, performing Jonny Greenwood’s ‘Water.’

Bishi was the lead commissioned artist for Delia Derbyshire Day, which commissioned her to compose a piece of music celebrating 50 years of White Noise’s ‘An Electric Storm’. This resulted in ‘The Telescope Eye’, an EP she co-produced with Richard Norris. Bishi fronted a documentary for Radio 4 centred on the groundbreaking tech company ROLI. Her most recent EP, ‘Of Rituals & Rites’, with composer Neil Kaczor, is out on March 20th 2020, for the Spring Equinox.

 
