Book Review: Supercollider for the Creative Musician

Dom Aversano

SuperCollider for the Creative Musician.

Several years ago a professor of electronic music at a London university advised me not to learn SuperCollider, as it was ‘too much of a headache’, and said it would be better just to learn Max. I nevertheless took a weekend course, but not long after, my enthusiasm for the language petered out. I did not have the time to devote to learning it, and was put off by SuperCollider’s patchy documentation. It felt like a programming language for experienced programmers rather than an approachable tool for musicians and composers. So instead I learned Pure Data, working with it until I reached a point where my ideas diverged from anything that resembled patch cords, at which point I knew I needed to give SuperCollider a second chance.

A lot had changed in the ensuing years, and not least of all with the emergence of Eli Fieldsteel’s excellent YouTube tutorials. Eli did for SuperCollider what Daniel Shiffman did for Processing/P5JS by making the language accessible and approachable to someone with no previous programming experience. Just read the comments for Eli’s videos and you’ll find glowing praise for their clarity and organisation. This might not come as a complete surprise as he is an associate professor of composition at the University of Illinois. In addition to his teaching abilities, Eli’s sound design and composition skills are right up there. His tutorial example code involves usable sounds, rather than simply abstract archetypes of various synthesis and sampling techniques. When I heard Eli was publishing a book I was excited to experience his teaching practice through a new medium, and curious to know how he would approach this.

The title of the book ‘SuperCollider for the Creative Musician: A Practical Guide’ does not give a great deal away, and is somewhat tautological. The book is divided into three sections: Fundamentals, Creative Techniques, and Large-Scale Projects.

The Fundamentals section is the best-written introduction to the language yet. The language is broken down into its elements and explained with clarity and precision, making it perfectly suited for a beginner, or as a refresher for people who might not have used the language in a while. In a sense, this section is the concise manual SuperCollider has always lacked. For more experienced programmers it might clarify the basics, but it will not present any real challenge or introduce new ideas.

The second section, Creative Techniques, is more advanced. Familiar topics such as synthesis, sampling, and sequencing are covered, as well as more neglected ones such as GUI design. There are plenty of diagrams, code examples, and helpful tips that will help anyone improve their sound design and programming skills. The code is clear, readable, and well-crafted, in a manner that encourages a structured and interactive form of learning and makes this a good reference book. At this point the book could have dissolved into vagueness and structural incoherence, but it holds together sharply.

The final section, Large-Scale Projects, is the most esoteric and advanced. Its focus is project designs that are event-based, state-based, or live-coded. Here Eli steps into a more philosophical and compositional terrain, showcasing the possibilities that coding environments offer, such as non-linear and generative composition. This short and dense section covers the topics well, providing insights into Eli’s idiosyncratic approach to coding and composition.

Overall, it is an excellent book that every SuperCollider user should own. It is clearer and more focused than The SuperCollider Book, which, with its multiple authors, is fascinating but less suitable for a beginner. Eli’s book makes the language feel friendlier and more approachable. The ideal would be to own both, but given a choice, I would recommend Eli’s as the best standard introduction.

My one criticism, if it is a criticism at all, is that I was hoping for something more personal to the author’s style and composing practice, whereas this is perhaps closer to a learning guide or highly sophisticated manual. Given the aforementioned gap in the SuperCollider community, Eli has done the right thing in opting to fill it. However, I hope this is the first book in a series in which he delves deeper into SuperCollider and his unique approach to composition and sound design.



Eli Fieldsteel, author of SuperCollider for the Creative Musician

Click here to order a copy of SuperCollider for the Creative Musician: A Practical Guide

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work at Liner Notes.

An interview with Blockhead creator Chris Penrose

Dom Aversano

A screenshot from Blockhead

Blockhead is an unusual sequencer with an unlikely beginning. In early 2020, as the pandemic struck, Chris Penrose was let go from his job in the graphics industry. He combined a small settlement package with his life savings and used it to develop a music sequencer that operated in a distinctively different manner from anything else available. By October 2023, three years after starting the project, he was working full-time on Blockhead, supporting it through a Patreon page even though the software was still in alpha.

The sequencer has gained a cult following made up of fans as much as users, enthusiastic to approach music-making from a different angle. It is not hard to see why, as in Blockhead everything is easily malleable, interactive, and modulatable. The software works in a cascade-like manner, with automation, instruments, and effects at the top of the sequencer affecting those beneath them. These can be shifted, expanded, and contracted easily.

When I speak to Chris, I encounter someone honest and self-deprecating, which I imagine contributes to people’s trust in the project. After all, you don’t find many promotional videos that contain the line ‘Obviously, this is all bullshit’. There is something refreshingly DIY and brave about what he is doing, and I was curious to know more about what motivated him, so I arranged to talk with him via Zoom to discuss what set him off on this path.

What led you to approach music sequencing from this angle? There must be some quite specific thinking behind it.

I always had this feeling that if you have a canvas and you’re painting, there’s an almost direct cognitive connection between whatever you intend in your mind for this piece of art and the actual actions that you’re performing. You can imagine a line going from the top right to the bottom left of the canvas and there is a connection between this action that you’re taking with a paintbrush pressing against the canvas, moving from top right down to left.

Do you think that your time in the graphics industry helped shape your thinking on music?

When it comes to taking the idea of painting on a canvas and bringing it into the digital world, I think programs like Photoshop have fared very well in maintaining that cognitive mapping between what’s going on in your mind and what’s happening in front of you in the user interface. It’s a pretty close mapping between what’s going on physically with painting on a canvas and what’s going on with the computer screen, keyboard and mouse.

How do you see this compared to audio software?

It doesn’t feel like anything similar is possible in the world of audio. With painting, you can represent the canvas with this two-dimensional grid of pixels that you’re manipulating. With audio, it’s more abstract, as it’s essentially a timeline from one point to another, and how that is represented on the screen never really maps with the mind. Blockhead is an attempt to get a little closer to the kind of cognitive mapping between computer and mind, which I don’t think has ever really existed in audio programs.

Do you think other people feel similarly to you? There’s a lot of enthusiasm for what you’re doing, which suggests you’ve tapped into something that might be felt by others.

I have a suspicion that people think about audio and sound in quite different ways. For many the way that digital audio software currently works is very close to the way that they think about sound, and that’s why it works so well for them. They would look at Blockhead and think, well, what’s the point? But I have a suspicion that there’s a whole other group of people who think about audio in a slightly different way and maybe don’t even realise as there has never been a piece of software that represents things this way.

What would you like to achieve with Blockhead? When would you consider it complete?

Part of the reason for Blockhead is completely selfish. I want to make music again but I don’t want to make electronic music because it pains me to use the existing software as I’ve lost patience with it. So I decided to make a piece of audio software that worked the way I wanted it. I don’t want to use Blockhead to make music right now because it’s not done and whenever I try to make music with Blockhead, I’m just like, no, this is not done. My brain fills with reasons why I need to be working on Blockhead rather than working with Blockhead. So the point of Blockhead is just for me to make music again.

Can you describe your approach to music?

The kind of music that I make tends to vary from the start. I rarely make music that is just layers of things. I like adding little moments in the middle of these pieces that are one-off moments. For instance, a half-second filter sweep in one part of the track. To do that in a traditional DAW, you need to add a filter plugin to the track. Then that filter plugin exists for the entire duration of the track, even if you’re just using it for one moment. It’s silly that it has to exist in bypass mode or 0% wet for the entire track, except in this little part where I want it. The same is true of synthesizers. Sometimes I want to write just one note from a synthesizer at one point in time in the track.

Is it possible for you to complete the software yourself?

At the current rate, it’s literally never going to be finished. The original goal with Patreon was to make enough money to pay rent and food. Now I’m in an awkward position where I’m no longer worrying about paying rent, but it’s nowhere near the point of hiring a second developer. So I guess my second goal with funding would be to make enough money to hire a second person. I think one extra developer on the project would make a huge difference.

It is hard not to admire what Chris is doing. It is a giant project, and to have reached this stage with only one person working on it is impressive. Whether the project continues to grow, and whether he can hire other people, remains to be seen, but it is a testament to the importance of imagination in software design. What is perhaps most attractive of all is that it is one person’s clear and undiluted vision of what this software should be, which has resonated with so many people across the world.

If you would like to find out more about Blockhead or support the project, you can visit its Patreon page.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work at Liner Notes.

Ask Me Anything about Sound Design for Games with Eduardo Pesole


The persistence of misogyny in music technology

Dom Aversano

DJ Isis photographed by Vera de Kok

Last week the British House of Commons Women and Equalities Committee released their report into misogyny in music. It reached a stark and clear conclusion.

In this Report we have focused on improving protections and reporting mechanisms, and on necessary structural and legislative reforms. The main problem at the heart of the music industry is none of these; it is the behaviour of men—and it is almost always men.

Although the report is specific to the United Kingdom, many of its findings could apply to other countries. One universal problem is the tendency of some men to view misogyny as a women’s problem, even though men have greater power to eliminate it. For those of us working in music technology, this needs to be taken to heart, as the field comes out very badly in the report, especially concerning the gender imbalance among music producers, engineers, and songwriters.

In 2022, just 187 women and non-binary people were credited as either producer or engineer on the top 50 streamed tracks in 14 genres, compared to 3,781 men. Of all songwriters and composers who received a royalty in 2020 from their music being streamed, downloaded, broadcast, or performed, only one in six (16.7%) were women.

Music technology education does not fare better.

Participation rates show that music technology courses still show a stark gender imbalance, reflecting the lack of female representation in the production workforce, despite the technology’s increasing importance to modern musicians.

After reading this I was curious to know how Music Hackspace shaped up in this regard. Though far from a comprehensive analysis, I counted the number of female and male teachers on the Music Hackspace website and found 32 female teachers (35%) and 58 male teachers (65%). This is far from equal, but at least better than the ‘stark gender imbalance’ mentioned in the report. However, until it is equal, it is not good enough.

On a personal note, when writing this blog I try to keep bias and discrimination at the front of my mind, but I am aware I interview more men than women. This is more complicated than a simple matter of intention. When invited for an interview, men have generally been more forthcoming than women, and they tend to be easier to locate and contact, especially as they often have more prominence within the musical world. It is not hard to imagine why women might be more reluctant to subject themselves to public attention, as they are criticised more than men and taken less seriously. The government report notes that many female artists and managers were regularly mistaken for girlfriends.

The misogyny women experience in the public eye was grotesquely demonstrated recently when X/Twitter was flooded with deepfake porn images of the singer Taylor Swift just a few days before this year’s Grammy Awards. One does not have to be a music superstar to be subjected to such abuse. Last year in the Spanish town of Almendralejo, more than 28 girls aged 11 to 17 had AI-generated naked images created of them, with 11 local boys involved in creating and circulating the images, demonstrating that such threats now exist across all levels of society.

This is to say nothing of the wider patriarchal socio-political forces at work. This year the world will again be subjected to a presidential run by the convicted sex offender Donald Trump, who has bragged about sexually assaulting women and described his daughter as “voluptuous”. He is not alone, with social media-savvy men like Jordan Peterson and Andrew Tate promoting their misogynistic ideas to mass audiences of boys and men. These ideas have been shown to be algorithmically amplified by platforms such as TikTok, to the point that Gen Z boys are more likely than Baby Boomers to believe that feminism is harmful.

Music should set a better example and act as a counter-cultural force against these movements. Historically, music has been a driver of social change, as one can create and share ideas with millions of people across the world rapidly. Women’s participation in this artistic activity should be equal to that of men, and for as long as it is not, it is men’s responsibility to help redress the power imbalance. In this respect, I will finish with the same quote from the House of Commons report, which lays out the root of the problem starkly.

The main problem at the heart of the music industry (…) is the behaviour of men—and it is almost always men.

Click here for the full report Misogyny in Music by the British House of Commons Women and Equalities Committee

Move slow and create things

Dom Aversano

Over Christmas I took a week off, and no sooner had I begun to relax than an inspiring idea came to mind for a generative art piece for an album cover. The algorithm needed to make it was clear in my mind, but I did not want to take precious time away from family and friends to work on it. Then a thought occurred — could I build it quickly using ChatGPT?

I had previously resisted using Large Language Models (LLMs) in my projects for a variety of reasons. Would outsourcing coding gradually deskill me? Whose data was the system trained on and was I participating in their exploitation? Is the environmental effect of using such computationally intense technology justifiable?

Despite my reservations I decided to try it, treating it as an experiment I could stop at any point. Shortly before this, I had read a thought-provoking online comment questioning whether manual coding might one day seem as peculiar and antiquated as programming in binary does now. Could LLMs help make computers less rigid and fixed, opening up the world of programming to anyone?

While I had previously used ChatGPT to create some simple code for SuperCollider, I had been unimpressed by the results. For this project, however, the quality of the code was different. Every prompt returned P5JS code that did exactly what I intended, without the need for clarification. I made precisely what I envisioned in less than 30 minutes. I was astonished. It was not the most advanced program, but neither was it basic.

Despite the success, I felt slightly uneasy. The computer scientist Grady Booch wrote that ‘every line of code represents an ethical and moral decision.’ It is tempting to lose sight of this amid a technological culture steeped in a philosophy of ‘move fast and break things’ and ‘it’s better to ask for forgiveness than permission’. So what specifically felt odd?

I arrived at what I wanted without much of a journey, learning little more than how to clarify my ideas to a machine. This is a stark contrast to the slow and meticulous manner of creation that gradually develops our skills and thinking, which is generally considered quintessential to artistic activity. Furthermore, although the arrival is quicker the destination is not exactly the same, since handcrafted code can offer a representation of a person’s worldview, whereas LLM code is standardised.

However, I am aware that many people throughout history, not least in the Arts and Crafts movement, expressed similar concerns, and one can argue that if machines dramatically reduce laborious work they could free up time for creativity. Removing the technical barrier to entry could allow many more people’s creative ideas to be realised. Yet efficiency is not synonymous with improvement, as anyone who has scanned a QR-code menu at a restaurant can attest.

The idea that LLMs could degrade code is plausible given that they frequently produce poor or unusable code. While they will surely improve, to what degree is unknown. A complicated project built from layers of machine-generated code may create layers of problems, both short-term and long-term. Like pollution, the effects might not be obvious until they accumulate and compound over time. And if LLMs are trained on LLM-generated code, it could have a degrading effect, leading to model collapse.

The ethics of this technology are equally complicated. The current lack of legislation around consent on training LLMs means many people are discovering that their books, music, or code has been used to train a model without their knowledge or permission. Beyond legislating, a promising idea has been proposed by programmer and composer Ed Newton-Rex, who has founded a company called Fairly Trained, which offers to monitor and certify different LLMs, providing transparency on how they were trained.

Finally, while it is hard to find accurate assessments of how much electricity these systems use, some experts predict they could soon consume as much electricity as entire countries, which should not be difficult to imagine given that the Bitcoin blockchain is estimated to consume more electricity than the whole of Argentina.

To return to Grady Booch’s idea that ‘every line of code represents an ethical and moral decision’, one could extend this: every interaction with a computer represents an ethical and moral decision. As the power of computers increases, so should our responsibility, but given the rapid increases in computing power, it may be unrealistic to expect our responsibility to keep pace. Taking a step back to reflect does not make one a Luddite, and might be the most technically insightful thing to do. Only from a thoughtful perspective can we hope to understand the deep transformations occurring, and how to harness them to improve the world.

Steve Reich’s exploration of technology through music

Dom Aversano

Photo by Peter Aidu

New York composer Steve Reich did not just participate in the creation of a new style of classical music, he helped establish a new kind of composer. Previously, the word composer evoked the archetype of a quill-wielding child prodigy who had composed several symphonies before adulthood (finding perhaps its purest embodiment in Wolfgang Amadeus Mozart), whereas Reich represented a composer who gradually and determinedly developed his talent in a more relatable manner. At the same age that Mozart was on his deathbed composing his Requiem, Reich was struggling to establish himself in New York, driving taxis to make ends meet.

A key source of Reich’s inspiration was atypical of the classical music tradition, in which composers tended to draw inspiration from nature, religion, romantic love, classical literature, and other art forms; by contrast, Reich’s career was ignited by ideas he derived from electronic machines.

In what is now musical folklore, the young composer set up two tape recorders in his home studio with identical recordings of the Pentecostal preacher Brother Walter proclaiming ‘It’s gonna rain’. Reich pressed play on both machines and to his astonishment found the loops were perfectly synchronised. That initial synchronisation then began to drift as one machine played slightly faster than the other, causing the loops to gradually move out of time, thereby giving rise to a panoply of fascinating acoustic and melodic effects that would be impossible to anticipate or imagine without the use of a machine. The experiment formed the basis for Reich’s famous composition It’s Gonna Rain and established the technique of phasing (I have written a short guide to Reich’s three forms of phasing beneath this article).

While most composers would have considered this a curious home experiment and moved on, Reich, ever the visionary, sensed something deeper that formed the basis for an intense period of musical experimentation lasting almost a decade. In a video explaining the creation of the composition, It’s Gonna Rain, he describes the statistical improbability of the two tape loops having been aligned.

And miraculously, you could say by chance, you could say by divine gift, I would say the latter, but you know I’m not going to argue about that, the sound was exactly in the centre of my head. They were exactly lined up.

To the best of my knowledge, this is the first time in classical music that someone attributed intense, even divine, musical inspiration to an interaction with an electronic machine. How one interprets the claim of divinity is irrelevant; the significant point is that it demonstrates the influence of machines on modern music not simply as tools, but as a fountain of ideas and profound inspiration.

In a 1970 interview with fellow composer Michael Nyman, Reich described his attitude and approach to the influence of machines on music.

People imitating machines was always considered a sickly trip; I don’t feel that way at all, emotionally (…) the kind of attention that kind of mechanical playing asks for is something we could do with more of, and the “human expressive quality” that is assumed to be innately human is what we could do with less of now.

While phasing became Reich’s signature technique, his philosophy was summed up in a short and fragmentary essay called Music as a Gradual Process. It contained insights into how he perceived his music as a deterministic process, revealed slowly and wholly to the listener.

I don’t know any secrets of structure that you can’t hear. We all listen to the process together since it’s quite audible, and one of the reasons it’s quite audible is because it’s happening extremely gradually.

Despite the clear influence of technology on Reich’s work, there also exists an intense criticism of technology that clearly distinguishes his thinking from any kind of technological utopianism. For instance, Reich has consistently been dismissive of electronic sounds and made the following prediction in 1970.

Electronic music as such will gradually die and be absorbed into the ongoing music of people singing and playing instruments.

His disinterest in electronic sounds remains to this day, and with the exception of the early work Pulse Music (1969), he has never used electronically synthesised sounds. However, this should not be confused with a sweeping rejection of modern technology or a purist attitude towards traditional instruments. Far from it.

Reich was an early adopter of audio samplers, using them to insert short snippets of speech and sound into his music from the 1980s onwards. A clear demonstration of this can be found in his celebrated work Different Trains (1988). The composition documents the long train journeys Reich took between New York and Los Angeles from 1938 to 1941 when travelling between his divorced parents. He then harrowingly juxtaposed these with the train journeys happening at the same time in Europe, where Jews were being transported to death camps.

For the composition, Reich recorded samples of the governess who accompanied him on these journeys, a retired Pullman porter who worked on the same line, and three Holocaust survivors. He transcribed their natural voice melodies and used them to derive melodic material for the string quartet that accompanies the sampled voices. The technique employs technology to draw attention to minute details of the human voice that are easily missed without this fragmentary and repetitive treatment. As with Reich’s early composition It’s Gonna Rain, it is a use of technology that emphasises and magnifies the humanity in music, rather than seeking to replace it.

Having trains act as a source of musical and thematic inspiration demonstrates, once again, Reich’s willingness to be inspired by machines, though he was by no means alone in this regard. There is a rich 20th-century tradition of compositions inspired by trains, including Billy Strayhorn’s Take the A Train (made famous by Duke Ellington’s orchestra), the Brazilian composer Heitor Villa-Lobos’s The Little Train of the Caipira, and the Finnish composer Kaija Saariaho’s Stilleben.

Reich’s interrogation of technology finally reaches its zenith in his large-scale work Three Tales — an audio-film collaboration with visual artist Beryl Korot. It examines three technologically significant moments of the 20th century: The Hindenburg disaster, the atom bomb testing at Bikini, and the cloning of Dolly the sheep. In Reich’s words, they concern ‘the physical, ethical, and religious nature of the expanding technological environment.’ As with Different Trains, Reich recorded audio samples of speech to help compose the music, this time using the voices of scientists and technologists such as Richard Dawkins, Jaron Lanier, and Marvin Minsky.

These later works have an ominous, somewhat apocalyptic feel, hinting at the possibility of a dehumanised and violent future, while still maintaining a sense of the beauty and affection humanity contains. Throughout his career, Reich has used technology as both a source of inspiration and a tool for creation, in a complicated relationship that is irreducible to sweeping terms like optimistic or pessimistic. Instead, Reich uses music to reflect upon some of the fundamental questions of our age, challenging us to ask ourselves what it means to be human in a hi-tech world.


A short guide to three phasing techniques Reich uses

There are three phasing techniques that I detect in Steve Reich’s early music, which I will briefly outline.

First is a continuous form of phasing. A clear demonstration of this is the composition It’s Gonna Rain (1965). With this technique, the phase relationship between the two voices is not measurable in normal musical terms (e.g. ‘16th notes apart’) but exists in a state of continuous change, making it difficult to measure at any given moment. A further example of this technique can be heard in the composition Pendulum Music.
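To make the drift concrete, here is a minimal sketch in Python of how a small speed difference between two tape machines accumulates into an ever-changing phase offset. The 2-second loop and the 0.5% speed difference are hypothetical numbers chosen for illustration, not taken from Reich’s tapes:

```python
# Two copies of the same loop, one machine running 0.5% slower.
loop_a = 2.0          # loop duration in seconds on machine A (hypothetical)
loop_b = 2.0 * 1.005  # machine B runs 0.5% slower, so its loop is longer

# Phase offset between the two loops after each repetition of machine A:
# the drift is continuous, never settling at a fixed musical interval.
for n in range(0, 9, 2):
    offset = (n * (loop_b - loop_a)) % loop_a
    print(f"after {n} repeats the loops are {offset:.3f} s apart")
```

Because the offset grows by a fraction of a second on every pass, the two voices are never a tidy ‘16th note apart’; the relationship is always somewhere in between positions, which is exactly what makes this form of phasing continuous.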

The second is a discrete form of phasing. A clear demonstration of this is the composition Clapping Music (1972). With this technique, musicians jump from one exact phase position to another without any intermediary steps, making the move discrete rather than gradual. Since the piece is built on a 12-beat cycle, there are 12 possible phase positions, each of which is explored in the composition, thereby completing the full phase cycle.
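The discrete cycle can be sketched in a few lines of Python, using Reich’s published 12-beat clapping pattern (written here with ‘x’ for a clap and ‘.’ for a rest):

```python
pattern = "xxx.xx.x.xx."  # Clapping Music's 12-beat pattern: x = clap, . = rest

def rotate(p, n):
    """Shift pattern p forward by n beats: a discrete phase jump."""
    n %= len(p)
    return p[n:] + p[:n]

# One performer holds the pattern fixed; the other jumps one beat at a
# time through all 12 phase positions, returning to unison at shift 12.
for shift in range(len(pattern) + 1):
    print(f"shift {shift:2d}: {pattern} | {rotate(pattern, shift)}")
```

Each line of output is one section of the piece: there is no in-between state, only a jump from one rotation of the pattern to the next, which is what makes the phasing discrete.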

The third is a combination of continuous and discrete phasing. A clear demonstration of this is Piano Phase (1967). With this phasing technique, musicians shift gradually from one position to another, settling in the new position for some time. In Piano Phase one musician plays slightly faster than the other until they reach their new phase position which they settle into for some time before making another gradual shift to another phase position. An additional example of this technique can be heard in the composition Drumming.
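The combined technique can be sketched as a simple simulation: one player drifts ahead a little on each cycle (the continuous part), then locks into place exactly one beat ahead (the discrete part). The cycle length, beat duration, and speed difference below are hypothetical values chosen only to illustrate the mechanism:

```python
beats = 12        # beats per cycle (hypothetical 12-beat pattern)
tempo = 1.0       # player 1's beat duration in seconds
faster = 0.99     # player 2's beat duration relative to player 1 (1% faster)

lead = 0.0        # player 2's lead over player 1, measured in beats
cycles = 0
while lead < 1.0:                  # continuous drift...
    lead += beats * (1 - faster)   # player 2 gains 0.12 beats per cycle
    cycles += 1
lead = 1.0                         # ...then lock exactly one beat ahead
print(f"settled one beat ahead after {cycles} drifting cycles")
```

In Piano Phase this drift-then-lock gesture repeats again and again, so the piece alternates between blurred transitional passages and stable interlocking patterns, one phase position further on each time.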

Music Hackspace is running an online workshop, Making Generative Phase Music with Max/MSP, on Wednesday 17 January at 17:00 GMT.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work in his Substack publication, Liner Notes.

TouchDesigner Meetup / Immersive Theatre


Max Meetup: Performing Live with Max


Ask Me Anything about Max with Umut Eldem

