How an image led to an album
Dom Aversano

Curiosity takes you to new places. I arrived in such a place after contemplating what a highly complex polyrhythm might look like. As an instrumentalist, I am accustomed to placing limits on my thinking based on what is physically possible, but since the digital realm essentially removes most physical constraints, we enter a new world of disembodied possibilities. The following image — created in P5JS — is one such example.

This image depicts 750 polyrhythms juxtaposed one on top of another. The x-axis represents time, and the y-axis is the increasing division of that time. At the very top left of the image, there is a single point. The line beneath has two equidistant points — one at the top left and one at the top centre. The line beneath this has three equidistant points, then four, five, six, and so on, all the way to 750 divisions. To create this by hand would be painstaking, if not impossible, but coding it is simple.
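Although the original image was made in P5JS, the underlying construction is easy to sketch in any language. Here is a minimal Python version that computes only the point positions (the function name and arguments are my own illustrative choices, not taken from the original sketch):

```python
def polyrhythm_points(rows, width):
    """Return (x, y) positions: row n holds n equidistant points."""
    points = []
    for n in range(1, rows + 1):   # 1, 2, 3 ... divisions per row
        y = n - 1                  # each division count gets its own row
        for k in range(n):         # n equidistant onsets across the row
            x = k * width / n      # first onset at x = 0 (the left edge)
            points.append((x, y))
    return points

# Row one has a single point, row two has two, and so on to 750.
pts = polyrhythm_points(750, 750)
```

Plotting each position as a dot reproduces the stacked-polyrhythm image described above.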
When I saw the image I was astonished — it contained star-like shapes, mirror symmetry, curved lines at either edge, and a series of interesting patterns across the top. Despite finding the image fascinating, I could not find a use for it, so I shelved it and moved on.
A little while later I decided to share these images on Substack, hoping they might be of interest to someone. To bring the images to life I decided to sonify them by building a simple custom program in SuperCollider. The program quickly morphed into a rabbit hole: as I tinkered with it, I heard new sound worlds awakening. It wasn’t long before I realised I was halfway into creating an album that I had never intended to make.
What captured me about the music was the same as the images: they were humanly impossible. Performing 750 rhythms is completely beyond the capabilities of the human mind, but effortless for a computer. The result was music that was temporally organised, but with no meter or ‘one’ to resolve on. There was a logical flow of patterns, but nothing to tap one’s foot to. Using the harmonic series as a scale allowed vast clusters of tones that the chromatic scale could not accommodate. With this vast number of tones, the distinction between timbre, chords, and notes started to break down.
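The harmonic series scale mentioned above is trivially simple to generate, which is part of what makes it suit this kind of music: every pitch is just an integer multiple of a fundamental. A small Python sketch (the 55 Hz fundamental is my own illustrative choice, not from the album):

```python
def harmonic_series(fundamental_hz, partials):
    """Frequencies of the first `partials` harmonics of a fundamental."""
    return [n * fundamental_hz for n in range(1, partials + 1)]

# Unlike the twelve pitches per octave of the chromatic scale, the
# harmonic series packs ever more pitches into each successive octave:
# the octave from the 8th to the 16th harmonic alone holds eight tones.
scale = harmonic_series(55.0, 750)
```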
The idea that computers can unlock forms of artistic expression which lie beyond the capabilities of the human body was described eloquently by the late computer artist Vera Molnar.
Without the aid of a computer, it would not be possible to materialize quite so faithfully an image that previously existed only in the artist’s mind. This may sound paradoxical, but the machine, which is thought to be cold and inhuman, can help to realize what is most subjective, unattainable, and profound in a human being.
Molnar’s proposal that machines can provide access to unattainable realms of artistic expression seemed a strong counter-argument to the romantic notion that machines degrade the subtleties of human expression. Rather than having machines imitate human expression, in Molnar’s interpretation they could express facets of the human experience that the limits of physicality prevent. The machine, far from deromanticising human expression, could be a tool for expressing subtle aspects of ourselves.
With this idea in mind, I delved deeper into the visual dimension. One day, it occurred to me that the original polyrhythm image could be visualised circularly. In this case, the rhythms would be represented as rotating divisions in space that could be layered one on top of another. The result was an image distinct from the previous one.

The process for generating this image: Draw one radial line at 0°. Then add two lines equidistant in rotation. Then add three lines equidistant in rotation. Then add four lines, and so on.
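That process maps directly onto code. A sketch in Python that computes the angle of every radial line rather than drawing it (the function name is mine, not from the original program):

```python
def radial_line_angles(layers):
    """Angles (degrees) of every radial line, layer by layer.

    Layer n contributes n lines spaced 360/n degrees apart, starting
    at 0 degrees, mirroring the rows of the stacked-polyrhythm image.
    """
    angles = []
    for n in range(1, layers + 1):
        angles.extend(k * 360.0 / n for k in range(n))
    return angles

# Every layer contributes a line at 0 degrees, which is why the
# convergence at the top of the image is the darkest of all.
angles = radial_line_angles(750)
```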
The new image looks organic, almost botanical. The largest division sits at the top, matched by one half its size at 180° and two more, half the size again, at 90° and 270°. The dark lines represent points of convergence; the lighter areas are spaces of less density.
I chose the image as the cover for the album since having artwork and music derived from the same algorithm felt satisfying and aesthetically appropriate. Had I not made the initial image I would not have made the music, or at least I would have arrived at it in another way at another time. That this process occurred at all remains a surprise to me, which I treat as a testament to the capacity for curiosity to take us to unknown places.
An interview with Interaction Designer Arthur Carabott Part II
Dom Aversano

This is Part II of an interview with interaction designer Arthur Carabott. In Part I Arthur discussed how, after studying music technology at Sussex University, he found a job working on the Coca-Cola Beatbox Pavilion at the 2012 Olympic Games. What follows is his description of how the work evolved.
Did you conceive that project in isolation or collaboration?
The idea had already been sold and the architects had won the competition. What was known was there would be something musical because Mark Ronson was going to be making a song. So the idea was to build a giant instrument from a building, which everyone could play by waving their hands over giant pads. They wanted to use sports sounds and turn them into music while having a heartbeat play throughout the building, tying everything together.
Then it came down to me playing with ideas, trying things out, and them liking things or not liking things. We knew that we had five or six athletes and a certain number of interactive points on the building.
So it was like, okay, let’s break it down into sections. We can start with running or with archery or table tennis. That was the broad structure, which helped a lot because we could say we have 40 interactive points, and therefore roughly eight interactions per sport.
Did you feel you were capable of doing this? How would you advise someone in a similar position?
Yeah, I was 25 when this started. While it’s difficult to give career advice, one thing I hold onto is saying yes to things that you’ve never done before but you kind of feel that you could probably do. If someone said we want you to work on a spaceship I’d say that’s probably a bad idea, but this felt like a much bigger version of things that I’d already done.
There were new things I had to learn, especially working at that scale. For instance, making the system run fast enough and building a backup system. I’d never done a backup system. I had just used my laptop in front of my class or for an installation. So I was definitely learning things.
If I have any natural talent it’s for being pretty stubborn about solving problems and sticking at it like a dog with a bone. Knowing that I can, if I work hard at this thing, pull it off. That was the feeling.

How did you get in contact with Apple?
I was a resident in the Music Hackspace then and rented a desk in Somerset House. Apple approached Music Hackspace about doing a talk for their Today at Apple series.
I already had a concept for a guerrilla art piece, where the idea was to make a piece of software where I could play music in sync across lots of physical devices. The idea was to go around the Apple store and get a bunch of people to load up this page on as many devices as we could, and then play a big choir piece by treating each device as a voice.

Kind of like a flash mob?
Yeah, sort of. It was inspired by an artist who used to be based in New York called Kyle McDonald, who made a piece called People Staring at Computers. His program would detect faces and then take a photo of them and email it to him. He installed this in the New York Apple stores and got them to send him photos. He ended up being investigated by the Secret Service, who came to his house and took away his computers.
However, for my thing, I wanted to bring a musician into it. Chagall was a very natural choice for the Hackspace. For the music I made an app where people could play with the timbre parameters of a synth, but with a quite playful interface which had faces on it.
How did you end up working with the composer Anna Meredith? You built an app with her, right?
Yes, an augmented reality app. It came about through a conversation with my friend, Marek Bereza, who founded Elf Audio and makes the Koala sampler app. We met up for a coffee and talked about the new AR stuff for iPhones. The SDK had just come to the iPhones and it had this spatial audio component. We were just knocking around ideas of what could be done with it.
I got excited about the fact that it could give people a cheap surround sound system by placing virtual objects in their space. Then you have — for free, or for the cost of an app — a surround sound system.
There was this weekly tea and biscuits event at Somerset House where I saw Anna Meredith and said, ‘Hey, you know, I like your music and I’ve got this idea. Could I show it to you and see what you think?’ So I came to her studio and showed her the prototype and we talked it through. It was good timing because she had her album FIBS in the works. She sent me a few songs and we talked back and forth about what might work for this medium. We settled on the piece Moon Moons, which was going to be one of the singles.
It all came together quite quickly. The objects in it are actual ceramic sculptures that her sister Eleanor made for the album. So I had to teach myself how to do photogrammetry and 3D scan them, before that technology was good on phones.

You moved to LA. What has that been like?
It was the first time I moved to another country without a leaving date. London’s a great city. I could have stayed, and that would have been the default setting, but I felt like I took myself off the default setting.
So, I took a trip to LA to find work and I was trying to pull every connection I could. Finding people I could present work to, knocking on doors, trying to find people to meet. Then I found this company Output and I was like, ‘Oh, they seem like a really good match’. They’re in LA and they have two job openings. They had one software developer job and one product designer job.
I wrote an email and an application to both of these and a cover letter which said: Look, I’m not this job and I’m not that job. I’m somewhere in the middle. Do you want me to be doing your pixel-perfect UI? That’s not me. Do you want me to be writing optimized audio code? That’s not me either. However, here’s a bunch of my work and you can hear all these things that I can do.
I got nothing. Then I asked Jean-Baptiste from Music Hackspace if he knew any companies. He wrote an email to Output introducing me and I got a meeting.
I showed my work. The interviewer wrote my name in a notebook and underlined it. When I finished the presentation I looked at his notebook and he hadn’t written anything else. I was like, ‘Okay, that’s either a very good sign or a very bad sign’. But I got the job.
How do you define what you do?
One of the themes of my career, which has been a double-edged sword, is not being specifically one thing. In the recruitment process what companies do is say: we have a hole in our ship, and we need someone who can plug it. Very rarely are companies in a place where they think, we could take someone on who’s interesting, and although we don’t have an explicit problem for them to solve right now, we think they could benefit what we’re doing.
The good thing is I find myself doing interesting work without fitting neatly into a box that people can understand. My parents have no idea what I do really.
However, I do have a term I like, but it’s very out of fashion, which is interaction designer. What that means is to play around with interaction, almost like behaviour design.
You can’t do it well without having something to play with and test behaviours with. You can try and simulate it in your head, but generally, you’re limited to what you already know. For instance, you can imagine how a button works in your head, but if you imagine what would happen if I were to control this MIDI parameter using magnets, you can’t know what that’s like until you do it.
What are your thoughts on machine learning and AI? How that will affect music technology?
It’s getting good at doing things. I feel like people will still do music and will keep doing music. I go to a chess club, and chess had a boom in popularity, especially during the pandemic. In terms of beating the best human player, that has been solved for decades now, but people still play because people want to play chess, and they still play professionally. So it hasn’t killed humans wanting to play chess, but it’s definitely changed the game.
There is now a generation who have grown up playing against AIs and it’s changed how they play, and that’s an interesting dynamic. The interesting thing with music is that it has already been devalued. People barely pay anything for recorded music, but they still go to concerts, and though concert tickets are more expensive than ever, people are willing to pay.
I think the thing that people are mostly interested in with music is the connection, the people, the personal aspect of it. Seeing someone play music, seeing someone very good at an instrument or singing is just amazing. It boosts your spirits. You see this in the world of guitar. A new guitarist comes along and does something and everyone goes, ‘Holy shit, why has no one done that before’?
Then you have artists like Squarepusher and Aphex Twin who wrote their own patches to cut up their drum breaks. But they’re still exercising their own aesthetic choice in what they use. I’m not in the camp that if it’s not 100% played by a human on an instrument, then it’s not real music.
The problem with the word creativity is it has the word create in it. So I think a lot of the focus goes on the creation of materials, whereas a lot of creativity is about listening and the framing of what’s good. It’s not just about creating artefacts. The editorial part is an important part of creativity. Part of what someone like Miles Davis did is to hear the future.
An interview with Blockhead creator Chris Penrose
Dom Aversano

Blockhead is an unusual sequencer with an unlikely beginning. In early 2020, as the pandemic struck, Chris Penrose was let go from his job in the graphics industry. After receiving a small settlement package, he combined it with his life savings and used the money to develop a music sequencer that operated in a distinctly different manner from anything else available. By October 2023, three years after starting the project, he was working full-time on Blockhead, supporting it through a Patreon page even though the software was still in alpha.
The sequencer has gained a cult following made up of fans as much as users, enthusiastic to approach music-making from a different angle. It is not hard to see why, as in Blockhead everything is easily malleable, interactive, and modulatable. The software works in a cascade-like manner, with automation, instruments, and effects at the top of the sequencer affecting those beneath them. These can be shifted, expanded, and contracted easily.
When I speak to Chris, I encounter someone honest and self-deprecating, qualities which I imagine contribute to people’s trust in the project. After all, you don’t find many promotional videos that contain the line ‘Obviously, this is all bullshit’. There is something refreshingly DIY and brave about what he is doing, and I was curious to know more about what motivated him, so I arranged to talk with Chris via Zoom to discuss what set him off on this path.
What led you to approach music sequencing from this angle? There must be some quite specific thinking behind it.
I always had this feeling that if you have a canvas and you’re painting, there’s an almost direct cognitive connection between whatever you intend in your mind for this piece of art and the actual actions that you’re performing. You can imagine a line going from the top right to the bottom left of the canvas and there is a connection between this action that you’re taking with a paintbrush pressing against the canvas, moving from top right down to left.
Do you think that your time in the graphics industry helped shape your thinking on music?
When it comes to taking the idea of painting on a canvas and bringing it into the digital world, I think programs like Photoshop have fared very well in maintaining that cognitive mapping between what’s going on in your mind and what’s happening in front of you in the user interface. It’s a pretty close mapping between what’s going on physically with painting on a canvas and what’s going on with the computer screen, keyboard and mouse.
How do you see this compared to audio software?
It doesn’t feel like anything similar is possible in the world of audio. With painting, you can represent the canvas with this two-dimensional grid of pixels that you’re manipulating. With audio, it’s more abstract, as it’s essentially a timeline from one point to another, and how that is represented on the screen never really maps with the mind. Blockhead is an attempt to get a little closer to the kind of cognitive mapping between computer and mind, which I don’t think has ever really existed in audio programs.
Do you think other people feel similarly to you? There’s a lot of enthusiasm for what you’re doing, which suggests you’ve tapped into something that might have been felt by others.
I have a suspicion that people think about audio and sound in quite different ways. For many the way that digital audio software currently works is very close to the way that they think about sound, and that’s why it works so well for them. They would look at Blockhead and think, well, what’s the point? But I have a suspicion that there’s a whole other group of people who think about audio in a slightly different way and maybe don’t even realise as there has never been a piece of software that represents things this way.
What would you like to achieve with Blockhead? When would you consider it complete?
Part of the reason for Blockhead is completely selfish. I want to make music again but I don’t want to make electronic music because it pains me to use the existing software as I’ve lost patience with it. So I decided to make a piece of audio software that worked the way I wanted it. I don’t want to use Blockhead to make music right now because it’s not done and whenever I try to make music with Blockhead, I’m just like, no, this is not done. My brain fills with reasons why I need to be working on Blockhead rather than working with Blockhead. So the point of Blockhead is just for me to make music again.
Can you describe your approach to music?
The kind of music that I make tends to vary from the start. I rarely make music that is just layers of things. I like adding little moments in the middle of these pieces that are one-off moments. For instance, a half-second filter sweep in one part of the track. To do that in a traditional DAW, you need to add a filter plugin to the track. Then that filter plugin exists for the entire duration of the track, even if you’re just using it for one moment. It’s silly that it has to exist in bypass mode or 0% wet for the entire track, except in this little part where I want it. The same is true of synthesizers. Sometimes I want to write just one note from a synthesizer at one point in time in the track.
Is it possible for you to complete the software yourself?
At the current rate, it’s literally never going to be finished. The original goal with Patreon was to make enough money to pay rent and food. Now I’m in an awkward position where I’m no longer worrying about paying rent, but it’s nowhere near the point of hiring a second developer. So I guess my second goal with funding would be to make enough money to hire a second person. I think one extra developer on the project would make a huge difference.
It is hard not to admire what Chris is doing. It is a giant project, and to have reached the stage it has with only one person working on it is impressive. Whether the project continues to grow, and whether he can hire other people, remains to be seen, but it is a testament to the importance of imagination in software design. What is perhaps most attractive of all is that it is one person’s clear and undiluted vision of what this software should be, and that this vision has resonated with so many people across the world.
If you would like to find out more about Blockhead or support the project, you can visit its Patreon page.
Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work at Liner Notes.
Steve Reich’s exploration of technology through music
Dom Aversano

New York composer Steve Reich did not just participate in the creation of a new style of classical music, he helped establish a new kind of composer. Previously, the word composer evoked an archetype of a quill-wielding child prodigy who had composed several symphonies before adulthood — finding perhaps its purest embodiment in the example of Wolfgang Amadeus Mozart — whereas Reich represented a composer who gradually and determinedly developed their talent in a more relatable manner. At the same age that Mozart was on his deathbed composing his Requiem, Reich was struggling to establish himself in New York, driving taxis to make ends meet.
A key source of Reich’s inspiration was atypical of the classical music tradition, in which composers tended to draw inspiration from nature, religion, romantic love, classical literature, and other art forms; by contrast, Reich’s career was ignited by ideas he derived from electronic machines.
In what is now musical folklore, the young composer set up two tape recorders in his home studio with identical recordings of the Pentecostal preacher Brother Walter proclaiming ‘It’s gonna rain’. Reich pressed play on both machines and to his astonishment found the loops were perfectly synchronised. That initial synchronisation then began to drift as one machine played slightly faster than the other, causing the loops to gradually move out of time, thereby giving rise to a panoply of fascinating acoustic and melodic effects that would be impossible to anticipate or imagine without the use of a machine. The experiment formed the basis for Reich’s famous composition It’s Gonna Rain and established the technique of phasing (I have written a short guide to Reich’s three forms of phasing beneath this article).
While most composers would have considered this a curious home experiment and moved on, Reich, ever the visionary, sensed something deeper that formed the basis for an intense period of musical experimentation lasting almost a decade. In a video explaining the creation of the composition, It’s Gonna Rain, he describes the statistical improbability of the two tape loops having been aligned.
And miraculously, you could say by chance, you could say by divine gift, I would say the latter, but you know I’m not going to argue about that, the sound was exactly in the centre of my head. They were exactly lined up.
To the best of my knowledge, this is the first time in classical music that someone attributed intense or divine musical inspiration to an interaction with an electronic machine. How one interprets the claim of divinity is irrelevant; the significant point is that it demonstrates the influence of machines on modern music not simply as tools, but as a fountain of ideas and profound inspiration.
In a 1970 interview with fellow composer Michael Nyman, Reich described his attitude and approach to the influence of machines on music.
People imitating machines was always considered a sickly trip; I don’t feel that way at all, emotionally (…) the kind of attention that kind of mechanical playing asks for is something we could do with more of, and the “human expressive quality” that is assumed to be innately human is what we could do with less of now.
While phasing became Reich’s signature technique, his philosophy was summed up in a short and fragmentary essay called Music as a Gradual Process. It contained insights into how he perceived his music as a deterministic process, revealed slowly and wholly to the listener.
I don’t know any secrets of structure that you can’t hear. We all listen to the process together since it’s quite audible, and one of the reasons it’s quite audible is because it’s happening extremely gradually.
Despite the clear influence of technology on Reich’s work, there also exists an intense criticism of technology that clearly distinguishes his thinking from any kind of technological utopianism. For instance, Reich has consistently been dismissive of electronic sounds and made the following prediction in 1970.
Electronic music as such will gradually die and be absorbed into the ongoing music of people singing and playing instruments.
His disinterest in electronic sounds remains to this day, and with the exception of the early work Pulse Music (1969), he has never used electronically synthesised sounds. However, this should not be confused with a sweeping rejection of modern technology or a purist attitude towards traditional instruments. Far from it.
Reich was an early adopter of audio samplers, using them to insert short snippets of speech and sound into his music from the 1980s onwards. A clear demonstration of this can be found in his celebrated work Different Trains (1988). The composition documents the long train journeys Reich took between New York and Los Angeles from 1938 to 1941 when travelling between his divorced parents. He then harrowingly juxtaposed this with the train journeys happening at the same time in Europe, where Jews were being transported to death camps.
For the composition, Reich recorded samples of the governess who accompanied him on these journeys, a retired Pullman porter who worked on the same train line, and three Holocaust survivors. He transcribed their natural voice melodies and used them to derive melodic material for the string quartet that accompanies the sampled voices. This technique employs technology to draw attention to minute details of the human voice that are easily missed without this fragmentary and repetitive treatment. As with Reich’s early composition It’s Gonna Rain, it is a use of technology that emphasises and magnifies the humanity in music, rather than seeking to replace it.
Having trains act as a source of musical and thematic inspiration demonstrates, once again, Reich’s willingness to be inspired by machines, though he was by no means alone in this specific regard. There is a rich 20th-century musical tradition of compositions inspired by trains, including works such as jazz composer Duke Ellington’s Take the A Train, Brazilian composer Heitor Villa-Lobos’s The Little Train of the Caipira, and the Finnish composer Kaija Saariaho’s Stilleben.
Reich’s interrogation of technology finally reaches its zenith in his large-scale work Three Tales — an audio-film collaboration with visual artist Beryl Korot. It examines three technologically significant moments of the 20th century: The Hindenburg disaster, the atom bomb testing at Bikini, and the cloning of Dolly the sheep. In Reich’s words, they concern ‘the physical, ethical, and religious nature of the expanding technological environment.’ As with Different Trains, Reich recorded audio samples of speech to help compose the music, this time using the voices of scientists and technologists such as Richard Dawkins, Jaron Lanier, and Marvin Minsky.
These later works have an ominous, somewhat apocalyptic feel, hinting at the possibility of a dehumanised and violent future while maintaining a sense of the beauty and affection humanity contains. Throughout his career, Reich has used technology as both a source of inspiration and a tool for creation, in a complicated relationship that is irreducible to sweeping terms like optimistic or pessimistic. Instead, Reich uses music to reflect upon some of the fundamental questions of our age, challenging us to ask ourselves what it means to be human in a hi-tech world.
A short guide to three phasing techniques Reich uses
There are three phasing techniques that I detect in Steve Reich’s early music which I will briefly outline.
First is a continuous form of phasing. A clear demonstration of this is the composition It’s Gonna Rain (1965). With this technique, the phase relationship between the two voices is not measurable in normal musical terms (e.g. ‘16th notes apart’) but exists in a state of continuous change, making it difficult to pin down at any given moment. An additional example of this technique can be heard in the composition Pendulum Music.
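This continuous drift can be modelled as two copies of the same loop, one played fractionally faster, with the offset between them growing until the faster copy laps the slower one. A minimal sketch in Python (the loop length and speed ratio are my own illustrative values, not taken from the piece):

```python
def phase_offset(t, loop_seconds, speed_ratio):
    """Offset in seconds between two copies of the same loop after
    `t` seconds, where one copy plays `speed_ratio` times faster."""
    return (t * (speed_ratio - 1.0)) % loop_seconds

# With a 1% speed difference on a 1.8-second loop, the faster copy
# takes 180 seconds to lap the slower one and return to unison.
offset = phase_offset(90.0, 1.8, 1.01)
```

Because the offset is a real number rather than a count of beats, it never settles on a tidy subdivision — which is exactly the quality described above.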
The second is a discrete form of phasing. A clear demonstration of this is the composition Clapping Music (1972). With this technique, musicians jump from one exact phase position to another without any intermediary steps, making the move discrete rather than gradual. Since the piece is in a time cycle of 12, there are twelve possible phase positions, each of which is explored in the composition, thereby completing the full phase cycle.
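Discrete phasing, by contrast, can be modelled as rotating a pattern one step at a time while the other part holds still. A sketch in Python (the helper names are mine; the pattern is the commonly cited Clapping Music rhythm written as 1s and 0s):

```python
def rotate(pattern, steps):
    """Rotate a rhythmic pattern left by `steps` positions."""
    steps %= len(pattern)
    return pattern[steps:] + pattern[:steps]

# 1 = clap, 0 = rest, over a 12-beat cycle (the Clapping Music rhythm).
PATTERN = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0]

def discrete_phase_cycle(pattern):
    """Every phase position: player one fixed, player two shifted."""
    return [(pattern, rotate(pattern, s)) for s in range(len(pattern) + 1)]

# Thirteen sections in all: twelve shifts, the last returning
# the two players to unison.
cycle = discrete_phase_cycle(PATTERN)
```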
The third is a combination of continuous and discrete phasing. A clear demonstration of this is Piano Phase (1967). With this phasing technique, musicians shift gradually from one position to another, settling in the new position for some time. In Piano Phase one musician plays slightly faster than the other until they reach their new phase position which they settle into for some time before making another gradual shift to another phase position. An additional example of this technique can be heard in the composition Drumming.
Music Hackspace is running an online workshop Making Generative Phase Music with Max/MSP Wednesday January 17th 17:00 GMT
Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work in his Substack publication, Liner Notes.
Music Hackspace Christmas Quiz
Dom Aversano

History
- Which 19th-century mathematician predicted computer-generated music?
- What early electronic instrument did Olivier Messiaen use in his composition Trois petites liturgies de la Présence Divine?
- Who invented FM synthesis?
- What was the first name of the French mathematician and physicist who invented Fourier analysis?
- Oramics was a form of synthesis invented by which British composer?
Synthesis
- What is the name given to an acoustically pure tone?
- What music programming language was named after a particle accelerator?
- What synthesiser did The Beatles use on their 1969 album Abbey Road?
- What microtonal keyboard is played in a band scene in La La Land?
- What are the two operators called in FM synthesis?
Music
- What was the name of the Breakbeat that helped define jungle/drum and bass?
- IRCAM is based in which city?
- Hip hop came from which New York neighbourhood?
- Which genre-defining electronic music record label originated in Sheffield?
- Sónar Festival happens in which city?
General
- Who wrote the book Microsound?
- Who wrote the composition Kontakte?
- How many movements is John Cage’s 4’33”?
- Who wrote the book On the Sensation of Tone?
- Which composer wrote the radiophonic work Stilleben?
Scroll down for the answers!
Answers
History
- Ada Lovelace
- Ondes Martenot
- John Chowning
- Joseph
- Daphne Oram
Synthesis
- Sine wave
- SuperCollider
- The Moog
- Seaboard
- Carrier and Modulator
Music
- Amen Brother
- Paris
- The Bronx
- Warp Records
- Barcelona
General
- Curtis Roads
- Karlheinz Stockhausen
- Three
- Hermann von Helmholtz
- Kaija Saariaho
Is music writing still relevant?
Dom Aversano

I recently listened to a podcast series by Sean Adams from Drowned in Sound which discusses the decline of music journalism as a profession (not to be conflated with music writing as a whole). It caused me to reflect on why I consider music writing valuable and important, even in an age where anyone can easily publish their thoughts. Why do the stories of music matter, and what would happen if they dissolved into digital chatter?
There’s a quote that is often wheeled out to demonstrate the apparent futility of writing about music — one I found objectionable long before I ever considered music writing.
Writing about music is like dancing about architecture
This is attributed to all sorts of people: Frank Zappa, Laurie Anderson, and Elvis Costello. Probably none of them said it, and in the end, it doesn’t matter. Get a group of musicians together and they will talk about music for hours — so if talking is permitted, why is writing not? Both articulate thought, and as an experienced writer once told me, writing is just thinking clearly.
History is full of composers who wrote. Aaron Copland was a prolific writer, as was Arnold Schoenberg. Before them you had 19th-century composers writing essays on music in a similar way to how 21st-century musicians use social media. Some infamously, such as the master of self-promotion Richard Wagner, who filled an entire book with anti-Semitic bile.
There is no lack of writing in contemporary music culture either. Composers such as John Adams, Philip Glass, Errollyn Wallen and Gavin Bryars have all written autobiographies. Steve Reich recently published Conversations, a book that transcribes his conversations with various collaborators. In South India, the virtuoso singer and political activist T M Krishna is a prolific writer of books and articles on musicology and politics.
Given that music writing has a long and important history, the question that remains is: does it have contemporary relevance, or could the same insights be crowdsourced from the vast amount of information online? In short, do professional opinions on music still matter?
Unsurprisingly, I believe yes.
I do not believe that professional opinion should be reserved for science, politics, and economics; it should apply to music and the arts too. If we are truly no longer willing to fund artistic writing, what does this say about ourselves and our culture? Is music not a serious part of human existence?
Even if musicians at times feel antagonised by professional critics, they ultimately benefit from having experts document and analyse their art. This is not to suggest professionals cannot get it wrong; they most certainly can, as exemplified by this famous example where jazz criticism went seriously awry.
In the Nov. 23, 1961, DownBeat, Tynan wrote, “At Hollywood’s Renaissance Club recently, I listened to a horrifying demonstration of what appears to be a growing anti-jazz trend exemplified by these foremost proponents [Coltrane and Dolphy] of what is termed avant-garde music.
“I heard a good rhythm section… go to waste behind the nihilistic exercises of the two horns.… Coltrane and Dolphy seem intent on deliberately destroying [swing].… They seem bent on pursuing an anarchistic course in their music that can but be termed anti-jazz.”
Despite this commentary being way off the mark, it also acts as a historical record of how far ahead of the critics John Coltrane and Eric Dolphy were. Had the critics not documented their opinion, we would not know that this music — which sounds relatively tame by today’s standards — was initially received by some as ‘nihilistic’ and ‘anarchistic’.
Conversely, an example where music writing resonated with the Zeitgeist was Alex Ross’s book The Rest is Noise. This concise, entertaining history of 20th-century classical music was so influential that it shaped the curation of a year-long festival of music at London’s Southbank Centre. The event changed the artistic landscape of the city by making contemporary classical music accessible and intelligible while demonstrating it could sell out big concert halls. In essence, Ross did what composers had largely failed to do in the 20th century — he brought the public up to date and provided a coherent narrative for a century that felt confusing to many.
The peril of leaving this to social media was demonstrated by this year’s highest-grossing film, Barbie. For the London press preview, social media influencers were given preference over film critics and told, ‘Feel free to share your positive feelings about the film on Twitter after the screening.’ I expected to find the film challenging and provocative but encountered something that felt bland, obvious, and devoid of nuance. I had potentially got caught up in a wave of hype that used unskilled influencers and sidelined professional critics.
The world is undoubtedly changing at a rapid pace, and music writing must keep up with it. Some of what has disappeared, I do not miss, such as the ‘build them up to tear them down’ attitude of certain music journalism during the print era. Neither do I miss journalists being the gatekeepers of culture. For all the Internet’s faults, the fact that anyone can publish their work online and develop an audience without the need for an intermediary remains a marvel of the modern era.
However, as with all revolutions, there is a danger of being overzealous about the new at the expense of the old. Music is often referred to metaphorically as an ecosystem, yet given that we are a part of nature, it is surely an accurate description rather than a metaphor. Rip out large chunks of that ecosystem and everything within it may suffer.
For this reason, far from believing that writing about music is like dancing about architecture, I consider it a valuable way to make sense of and celebrate a beautiful art form. If that writing disappears, we will all be poorer for it.
So, in the spirit of supporting contemporary music writers, here is a non-exhaustive list of some whose work I have benefitted from reading.
An authority on contemporary classical music and author of The Rest is Noise.
Philip Sherburne / Pitchfork & Substack
Experienced journalist specialising in experimental electronic music.
A classical music expert who analyses music through a feminist perspective. The author of Quartet.
Outspoken takes on popular culture and music from an ex-jazz pianist. Author of multiple books.
Scottish classical music critic who writes about subjects such as the Ethiopian nun/pianist/composer Emahoy Tsegué-Maryam Guèbrou.
One of the finest Carnatic music singers of his generation, a mountain climber, and a polemical left-wing voice in Indian culture.
Can music help foster a more peaceful world?
Dom Aversano

Like many, in recent weeks I have looked on in horror at the war in the Middle East and wondered how such hatred and division is possible. Not simply from people directly involved in the war, but also from the entrenched and polarised discourse on social media from across the world. Don’t worry, I’m not about to give you another underinformed political opinion, but rather, I would like to explore the idea of whether music can help foster peace in the world, and help break down the polarisation and division fracturing our societies.
In 2017, when it was clear that polarisation and authoritarianism were on the rise, I bought myself a copy of the Yale historian Timothy Snyder’s book On Tyranny: Twenty Lessons from the Twentieth Century. Written as a listicle, it is full of practical advice on living through strange political times and on how to influence them for the better, with chapter titles such as ‘Defend institutions’, ‘Be kind to our language’, and ‘Make eye contact and small talk’.
What I found missing in the book was a robust call to defend the arts, despite this being one of the first things any would-be authoritarian might attack. It made me wonder: what is it about the arts that makes authoritarians feel instinctively threatened?
What follows are five reflections on why I think music is powerful in the face of inhumanity, and how we can use it to foster peace.
Music ignites the imagination
Whether by creating or listening to it, music ignites and awakens the imagination. Art allows us to envision other worlds. The composer Franz Schubert expressed this idea when praising Mozart in a diary entry he made on June 13th, 1816.
O Mozart, immortal Mozart, what countless images of a brighter and better world thou hast stamped upon our souls!
Conversely, without artists our collective imagination shrinks, priming people for conformity and for fixations on a lost romantic past or grand nationalist future. This is not to say that art completely disappears, but that it becomes an empty vessel for state propaganda, whereas liberating music allows us to imagine new realities.
Music offers society multiple leaders
It is a cliché to write about Taylor Swift, but there is no denying she is influential. Arenas filled with people listening to Taylor Swift deflect attention from mesmeric demagogues like Donald Trump. It is an influence that cannot summon an army or change tax laws, but it is powerful nevertheless. The singer has said she will campaign against America’s aspiring dictator in the coming US election. Perhaps a billionaire singer telling people how to vote will do more harm than good, but what is certain is that she will be influential at a pivotal moment in history.
One does not need such dazzling fame to be significant. I count myself lucky to have been friends with the late electronic composer Mira Calix, who was also a passionate campaigner against Brexit and nationalism. At the last concert of her classical music, she used her moment on stage to give a short but heartfelt defence of free movement. It was powerful, even if it went unreported.
While this type of power might seem intangible or questionable, it is more obvious when observed through the lens of history. The musicians’ protests against the Vietnam War and the Cold War in the 1960s and 70s can plausibly be said to have helped hasten the end of those conflicts, as they drew attention to their destructiveness and absurdity while offering alternative visions for the future. In the immortal lyrics from Sun Ra’s Nuclear War, ‘If they push that button, your ass gotta go’. It’s hard to argue with that.
Music is uniting
There are exceptions to this, but music generally unites more than it divides. Audiences comprise people who might otherwise be divided by politics, class, or religious/non-religious affiliations. Music can bypass belief and connect us to something deeper that is common to all of us.
Unity applies to musicians too. Artists like Miles Davis, Duke Ellington, and Frank Zappa were not necessarily the best instrumentalists of their generation, but they formed the world’s best groups by picking the finest talent of their age. Without sophisticated collaboration, they would not have been capable of achieving everything they did. Their styles of bandleading may have ranged from the conventional to the eccentric, and they were by no means saints or role models, but they held groups together that demonstrated the creative power of collaboration.
Finally, unity can stretch across borders. Music allows one to appreciate the skill and expressivity of someone from a completely different culture and background, while gaining some insight into the way they experience the world. Having our emotions stirred by someone seemingly quite different to us acts as a reminder of their humanity, especially in cases where they have been dehumanised or degraded. Under Narendra Modi’s rule of India, a strong anti-Muslim sentiment has spread, yet India’s finest tabla player is Zakir Hussain, a Muslim. Every time he plays he reminds people that beauty and dignity exist within all people.
Music makes you less rigid
Music rejects rigid ideologies. Simplistic and reductive models of music create sound worlds that are dull and predictable. To listen to or create music effectively one needs to be relaxed, flexible, and open to allowing in new forms of music, whether it is from a different region, style, or period of history. By doing so one’s internal world is enriched.
Purists stand in contrast to this. Whether in classical music, jazz, or minimal techno, purism represents a strict and exclusive mentality. To all but themselves — or a certain in-group — their position seems absurd, representing not a love of music but a love of one type of music; and if that music did not exist, what remains?
Music connects us with our emotions
While there may be many complex reasons why we listen to and create music, a simple one is to awaken and express our feelings. Healthy emotions like compassion, hope, or love need to be felt to be genuine. If our emotional world shuts down, no level of societal status, wealth, or physical health will make us content.
A healthy music culture helps prevent cultural atmospheres dominated by fear and anger, in which it becomes easier to divide people and whip up mobs. A lot is made of the importance of intellectual freedom, but it is equally important to be emotionally free. The hate, anger, and recriminations that have spread from the war in the Middle East could be tempered if people took some time to listen to or create music, which connects us to deeper emotions and creates a calm that helps prevent us from fanning the flames of war.
For these reasons, I believe music can help foster a more peaceful world.
Should fair use allow AI to be trained on copyrighted music?
Dom Aversano

This week the composer Ed Newton-Rex brought the ethics of AI into focus when he resigned from his role in the Audio team at Stability AI, citing a disagreement with the fair use argument used by his ex-employer to justify training its generative AI models on copyrighted works.
In a statement posted on Twitter/X he explained the reasons for his resignation.
For those unfamiliar with ‘fair use’, this claims that training an AI model on copyrighted works doesn’t infringe the copyright in those works, so it can be done without permission, and without payment. This is a position that is fairly standard across many of the large generative AI companies, and other big tech companies building these models — it’s far from a view that is unique to Stability. But it’s a position I disagree with.
I disagree because one of the factors affecting whether the act of copying is fair use, according to Congress, is “the effect of the use upon the potential market for or value of the copyrighted work”. Today’s generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on. So I don’t see how using copyrighted works to train generative AI models of this nature can be considered fair use.
As Newton-Rex states, this is quite a standard argument made by companies using copyrighted material to train their AI. In fact, Stability AI recently submitted a 23-page document to the US Copyright Office arguing their case. Within it, they state that they have trained their Stable Audio model on ‘800,000 recordings and corresponding songs’, going on to state:
These models analyze vast datasets to understand the relationships between words, concepts, and visual, textual or musical features — much like a student visiting a library or an art gallery. Models can then apply this knowledge to help a user produce new content. This learning process is known as training.
This highly anthropomorphised argument is questionable at best. AI models are not like students, for obvious reasons: they have no bodies, no emotions, and no life experience. Furthermore, as Stability AI’s own document testifies, they do not learn the way humans learn: for a student to study 800,000 pieces of music over a ten-year period would require analysing 219 different songs a day.
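The arithmetic behind that daily figure is easy to verify. A quick sketch (only the 800,000 number comes from Stability AI’s filing; the rest is simple division):

```python
# Rough check: how many songs per day would a student need to analyse
# to get through 800,000 recordings in ten years?
songs = 800_000
days = 10 * 365  # ten years, ignoring leap days

songs_per_day = songs / days
print(round(songs_per_day))  # prints 219
```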
The contrast in how humans learn and think was highlighted by the American linguist and cognitive scientist Noam Chomsky in his critique of Large Language Models (LLMs).
The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.
A lot of this issue is further complicated by the language emerging from the AI community, which varies from anthropomorphic (‘co-pilot’) to deistic (‘godlike’) to apocalyptic (‘breakout scenarios’). Specifically with Stability AI, the company awkwardly evokes Abraham Lincoln’s Gettysburg Address when writing on their website that they are creating ‘AI by the people for the people’ with the ambition of ‘building the foundation to activate humanity’s potential’.
While the circumstances are of course materially different, there is nevertheless a certain echo here of the civilising mission used to morally rationalise the economic rapaciousness of empire. To justify the permissionless use of copyrighted artwork on the basis of a mission to ‘activate humanity’s potential’ in a project ‘for the people’ is excessively moralistic and unconvincing. If Stability AI want their project to be ‘by the people’, they should have artists explicitly opt in before their work is used; the problem with this is that many will not, rendering the models perhaps not useless, but far less effective.
This point was underscored by the venture capital firm Andreessen Horowitz, which recently released a rather candid statement to this effect:
The bottom line is this: imposing the cost of actual or potential copyright liability on the creators of AI models will either kill or significantly hamper their development.
Although in principle supportive of generative AI, Newton-Rex does not ignore the economic realities behind its development. In a statement that I will finish with, he succinctly and eloquently brings into focus the power imbalance at play and its potential destructiveness:
Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works. I don’t see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright.
If you have an opinion you would like to share on this topic please feel free to comment below.
Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work in his Substack publication, Liner Notes.
In defence of the Iklectik Art Lab
Dom Aversano

There are moments when you know you’ve really travelled down an avant-garde rabbit hole. I experienced this when watching the essay documentary on Soviet-era synthesisers, Elektro Moskva, at the Iklectik Art Lab in Lambeth, South London. At one point in the film, a synth is played that divides the octave into something like 70 tones. By most definitions, it was not a pleasant sound, and it wasn’t just me who thought so. Tony, the resident cat, had had enough, and let out a prolonged howl that drowned out the synth and turned every head in the audience towards this alpha feline. It felt like a clear admonishment from the animal kingdom: having conquered the world, did we not have something better to do than listen to the sound of a (frankly crap) synth droning away in the crumbling remains of a communist dystopia?
Well, Tony, sorry to disappoint you, but no.
The value of Iklectik to London’s music scene is hard to quantify, as it has made space for many artistic activities that might otherwise be filtered out, and not least of all, the music hacking scene. The acoustic music hacking group Hackoustic has put on regular events in the appropriately named Old Paradise Yard for about 8 years. In no small part, this is because Eduard Solaz and Isa Barzizza have always been gracious hosts, willing to sit down with artists and treat them with respect and fairness. Unfortunately, it appears that this has not been reciprocated by the owners of the land, who are now warning of imminent eviction and wish to transform the land into the kind of homogenous office space that turns metropolises into overpriced, hollowed-out, dull places.
I spoke to the founder of Iklectik, Eduard Solaz, who had the following to say.
Why are you being evicted from Old Paradise Yard and when are you expected to leave?
This decision came quickly after the Save Waterloo Paradise campaign mobilised nearly 50,000 supporters and persuaded Michael Gove to halt the development project, something we have been campaigning for over this last year. Our public stance against the controversial plans has resulted in this punitive action against IKLECTIK and the other 20 small businesses here at Old Paradise Yard. Currently, despite not yet having permission for the full redevelopment, Guy’s and St Thomas’ Foundation are refusing to extend Eat Work Art’s (the site leaseholder) lease.
What impact will this development have on the arts and the environment?
For more than nine years, we, along with musicians, artists, and audiences, have collaboratively cultivated a unique space where individuals can freely explore and showcase groundbreaking music and art while experiencing the forefront of experimental creativity. London needs, now more than ever, to safeguard grassroots culture.
From an environmental perspective, this development is substantial and is expected to lead to a significant CO2 emissions footprint. Consequently, it poses a potential threat to Archbishop’s Park, a Site of Importance for Nature Conservation that serves as a vital green space for Lambeth residents and is home to a diverse range of wildlife. It also puts Westminster’s status as a UNESCO World Heritage site at risk.
Do you see hope in avoiding the eviction, and if so, what can people do to prevent it?
There is hope. In my opinion, the GSTT Foundation, operating as a charitable organisation, should reconsider its decision and put an end to this unjust and distressing situation. We encourage all of our supporters to reach out to the foundation and advocate for an end to this unfair eviction.
Here you can find more information to help us: https://www.iklectik.org/saveiklectik

To get a sense of what this means for London’s music hacking community I also spoke to Tom Fox, a lead organiser for Hackoustic, who put on regular nights at Iklectik.
Can you describe why Iklectik is significant to you and the London arts scene?
Iklectik is one of London’s hidden gems, and as arts venues all over the UK are dying out, it has been a really important space for people like us to be able to showcase our work. We’ve had the privilege of hosting well over 100 artists in this space through the Hackoustic Presents nights and it helped us, and others, find their tribe. We’ve made so many friends, met their families, met their kids, found like-minded people and collaborated on projects together. We’ve had people sit in the audience, and get inspired by artists who then went on to make their own projects and then present with us. Some of our artists have met their life partners at our events! The venue isn’t just a place to watch things and go home, they’re meeting places, networking places, social gatherings and a place to get inspired. I doubt all of these things would have happened if Iklectik weren’t such a special place, run by such special people.
Do you think there is a possibility that Michael Gove might listen?
It’s the hope that gets you! I’m a big believer in hope. It’s a very powerful thing. I don’t have much hope in Michael Gove, however. Or the current government in general. But, you know, there’s always hope.
To take action
Undoubtedly, Iklectik is up against a bigger opponent, but it is not a foregone conclusion, especially since Michael Gove has halted the development. There is a genuine opportunity for Old Paradise Yard to stay put.
Here is what you can do to help…
On Iklectik’s website, there are four actions that can be taken to help try to prevent the eviction. In particular, write to Michael Gove and to the GSTT Foundation.
For those in the UK, you can attend Hackoustic’s event this Saturday 11th November.
Having collaborated with the Iklectik Art Lab, we at Music Hackspace would like to wish Eduard Solaz, Isa Barzizza, and all the other artists and people who work at Old Paradise Yard the best in their struggle to remain situated there.
Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work in his Substack publication, Liner Notes.
Live coding – an interview with Lizzie Wilson (Digital Selves)
Dom Aversano

The live coding scene has been growing for years. For many, the idea of watching someone create music in code might not hold immediate appeal, yet live coders now play at top nightclubs, experimental music venues, and festivals. As the world becomes more code literate, the form is likely to become more popular still.
Curious to know more about these digital improvisers, I sat down for a chat with Lizzie Wilson (Digital Selves), a leading musician in the field, who learned the art of music coding in the lively scene in Leeds but now lives in London.
Did you have a background in either music or programming before you got into live coding?
I grew up with traditional instruments playing piano and guitar, though I always found I had a bit of trouble with coordination. I also found it limiting to be tied down to expressing ideas musically through physicality. I always did that on the side and really enjoyed it. Then I studied mathematics at university. So not really coding, but obviously it underpins a lot of the ideas in coding.
I didn’t start coding until I found out about Algorave and live coding. It was through one of my good friends, Joanne Armitage. My friends would run little house parties and she would rock up with a laptop and start doing live coding, and I remember seeing it and thinking, Oh, that’s really cool, I’d love to do a bit of this.
Which city was this?
At the time I was based in Leeds, Yorkshire, because that’s where many people were based, and there was a lot happening in the city. This was around 2015/2016.
I didn’t know much about coding or how to code. So I started to learn a bit and pick stuff up, and it felt really intuitive and fast to learn. So it was a really exciting experience for me.
It’s quite rare to find coding intuitive or easy to learn.
Yeah, I had tried a few more traditional ways. I bought a MIDI keyboard and Ableton. While I really enjoyed that, there was something about live coding that made me spend a whole weekend not talking to anyone and just getting really into it. I think that’s, as you say, quite rare, but it’s exciting when it happens.
That’s great. Were you using Tidal Cycles?
Yeah, it was Tidal Cycles. So Joanne was using SuperCollider, which is, you know, a really big program. When I first started I wanted to use SuperCollider because that was all I knew about. So I tried to learn SuperCollider, but there were a lot of audio concepts that I didn’t know about at that time and it was very coding intensive. It was quite a lot for someone who didn’t know much about either at the time, so I never really got into SuperCollider.
Then I went to an algorave in Leeds and I saw Alex McLean performing using Tidal Cycles. I remember that performance really well. The weekend after I thought, you know, I’m going to download this and try it out. At that time Alex — who wrote the software — was running a lot of workshops and informal meetups in the area. So there was a chance to meet up with other people who were interested in it as well.

Was this a big thing in Leeds at that time?
Yes, definitely around Yorkshire. I’m sure there were people in London in the late 2000s that were starting off. In the early 2010s, there were a lot of people working, because people were employed by universities in Yorkshire, and it’s got this kind of academic adjacent vibe, with people organising conferences around live coding.
There was a lot happening in Yorkshire around that time, and there still is. Sheffield now tends to be the big place where things are based, but we’re starting to create communities down in London as well and across the UK. So yes, I think Yorkshire is definitely the informal home of it.
I’m curious about what you said earlier about the limitations of physicality. To invert that — what do you consider the liberating ideas that drew you to code and made it feel natural for you?
I think it being so tightly expressed in language. I like to write a lot anyway, so that makes it very intuitive for someone like me. I like to think through words. So I can type out exactly what I want a kick drum sample to do: play two times as fast, or four times as fast. Using words to make connections between what you want the sounds to do is what drew me to it, and I think working this way allows you to achieve a level of complexity quite quickly.
With a traditional digital audio workstation, you have to copy and paste a lot and decide, for instance, on the fourth time round I want to change this bar, and then you have to zoom into where the fourth set of notes is. There’s a lot of copying, pasting, and manual editing. Being able to express an idea in code in a really concise and satisfying way was what I found exciting, and it produces interesting music.
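The workflow Wilson describes can be illustrated with a loose sketch. The Python below is a hypothetical analogy to Tidal Cycles’ `fast` transformation, not actual Tidal code: a pattern is a list of timed events, and ‘play it twice as fast’ is a single named transformation rather than a round of copying and pasting.

```python
# A pattern is a list of (time, sample) events within one cycle, 0 <= time < 1.
kick = [(0.0, "bd"), (0.5, "bd")]

def fast(n, pattern):
    """Squeeze n copies of the pattern into one cycle (analogous to Tidal's `fast n`)."""
    return [((cycle + t) / n, sample)
            for cycle in range(n)
            for (t, sample) in pattern]

# "Play the kick drum two times as fast" is one expression, not a copy-paste job:
doubled = fast(2, kick)
print(doubled)  # [(0.0, 'bd'), (0.25, 'bd'), (0.5, 'bd'), (0.75, 'bd')]
```

In Tidal itself the equivalent idea is expressed directly in the language, e.g. `d1 $ fast 2 $ sound "bd bd"`, which is what makes the ‘type what you want the sound to do’ workflow so immediate.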
The live aspect of performance places you within a deeper tradition of improvisation, however, code is more frequently associated with meticulous engineering and planning. How does improvisation work with code?
I think that’s something really interesting. If you think about coding in general, it tends to be, say, you want to make a product, so you go away and you write some code and it does something. This way of coding is very different because you do something, you try it out, and then you ask, does this work? Yes or no. That’s kind of cool, and you see the process happening in real-time, rather than it just being a piece of code that is run and then produces a thing.

Part of the thrill of improvisation comes from the risk of making mistakes publicly, which makes it exciting for the audience and for the artists. How do you feel about improvising?
At first, I always found it quite scary, whereas now I find it enjoyable. That is not to say I am completely fine now, but you get through this process of learning to accept the error or learning to go where it takes you. So yeah, I find the level of unpredictability and never knowing what’s going to happen a really interesting part of it.
How much of an idea do you have of what you’re going to do before performing?
There are people who are a bit more purist and start completely from scratch. They do this thing called blank slate coding, where they have a completely blank screen and then over the performance they build it up. The more time you spend learning the language, the more you feel confident at accessing different ideas or concepts quicker, but I like to have a few ideas and then improvise around them. When I start performing I have some things written on the screen and then I can work with them.
It’s not like one way is more righteous than the other, and people are quite accepting of that. You don’t have to start completely from scratch to be considered coding, but there are different levels of blindness and improvisation that people focus on.
It seems like there are more women involved in live coding than in traditional electronic scenes. Is that your experience?
Yes, and there has been a conscious effort to do that. It’s been the work of a lot of other women before me who’ve tried hard to make sure that if we’re putting on a gig there are women involved in the lineup. This also raises questions like, how do we educate other women? How do we get them to feel comfortable? With women specifically, the idea of failure and of making mistakes can be difficult. There is some documentation on this, for instance, a paper by Dr Joanne Armitage, Spaces to Fail In, that I think is really interesting and can help with how to explore this domain.
It’s not just women though. I think there are other areas that we could improve on. Live coding is not a utopia, but I think people are trying to make it as open a space as possible. I think this reflects some of the other ideas of open-source software, like freedom and sharing.
What other live coders inspire you?
If I’m playing around the UK, I would always watch out for sets from +777000 (with Nunez on visuals), Michael-Jon Mizra, yaxu, heavy lifting, dundas, Alo Alik, eye measure, and tyger blue, plus visualists hellocatfood, Joana Chicau, Ulysses Popple, and mahalia h-r.
More internationally, I really like the work of Renick Bell, spednar, {arsonist}, lil data, nesso, and hogobogobogo & gibby-dj.
If someone goes to an algorave what can they expect? Is the audience mostly participants, or is there an audience for people who don’t code?
I think you always get a mixture of both. There are some people who are more interested in reading and understanding the code. Often they forget to dance because they’re just standing there and thinking, but there is dancing. There should be dancing! I feel like, if you’re making dance music, it’s nice when people actually dance to it!
It depends on the person as well. There are people who are a lot more experimental and make harsh noise that pushes the limits of what is danceable. Then there are people who like to make music that is very danceable, beat-driven, and arranged. If you go to an algorave you wouldn’t expect to get one end of the spectrum or the other; you will probably get a bit of both.
Over the past few years, we’ve done quite a few shows in London at Corsica Studios, which is a very traditional nightclub space, with a large dark room and a big sound system, as well as more experimental art venues like Iklectik, which is also in London. Then there’s the other end of the spectrum where people do things in a more academic setting. So it’s spread out through quite a lot of places.
My personal favourite is playing in clubs where people actually dance, because I think that’s more fun and exciting than, say, art galleries, where it’s always a bit sterile. It’s not as fun as being in a place where the space really invites you to let go a little bit and dance. That’s the nice thing about playing in clubs.
Bandcamp recently got bought by Songtradr, who then proceeded to lay off 50% of the staff. Traditionally, Bandcamp has been seen as an oasis for independent recording musicians amid what are generally considered a series of bad options. Do you have any thoughts on this, especially given that you have released music on Bandcamp?
When I’ve done releases before, we haven’t released on Spotify. I’ve only done releases through Bandcamp because, as you say, it felt like this safe space for artists, or an oasis. It was the one platform where artists weren’t held to ransom for releasing their own music. It’s been a slow decline since Bandcamp was acquired by Epic Games last year. When that happened I winced a little bit, because it was like, well, what’s going to happen now? It felt quite hard to trust that they were going to do anything good with it.
Obviously, it’s hard. I think the solution is for more people to run independent projects, co-ops, and small ventures. Then to find new niches and new ways for musicians to exist and coexist in music, get their releases out, and think of new solutions to support artists and labels. At times like this, it’s always a bit dampened by this constant flow of, oh, we’ve got this platform that’s made for artists and now it’s gone, but people always find ways. Bandcamp came out of a need for a new kind of platform. So without it, maybe there’ll be something else that will come out of the new need.
I’m hopeful. I like to be hopeful.