How to design a music installation – an interview with Tim Murray-Browne (part 1)

Dom Aversano


I met artist and coder Tim Murray-Browne just over a decade ago, shortly after he was made artist in residence for Music Hackspace. Tall, thin, with a deep yet softly-spoken voice, he stood up and gave a presentation to an audience of programmers, academics, musicians, and builders, in a room buzzing with anticipation. The setting was a dingy studio in Hoxton, East London, prior to the full-on gentrification of the artistic neighbourhood.

Tim’s idea for a project was bold: He had no idea. Or to be more precise, his idea was to have no idea. Instead, the idea would emerge from a group. There were quizzical looks in the audience and questions to confirm that the idea was indeed to have no idea. For an artistically audacious idea, this was a good audience, comprised as it was of open-minded, radical, and burningly curious people. By the meeting’s end an unspoken consensus of ‘let’s give this a go’ seemed to have quietly been reached.

Tim’s faith in his concept was ultimately vindicated since the installation that emerged from this process, Cave of Sounds, still tours to this day. Created by a core group of eight people — myself one of them — it has managed to stay relevant amid a slew of socio-political and technological changes. As an artist, Tim has continued to make installations, many focusing on dance, movement, and the human body, as well as more recently, AI.

I wanted to reflect back on this last decade, to see what had been learned, what had changed, what the future might hold, and above all else, how one goes about creating an installation.

What do you think are the most important things to consider when building an interactive installation?

First, you need some kind of development over time. I used to say narrative, though I’m not sure that is the right word anymore; something needs to emerge within that musical experience: a pattern or structure that grows. Let’s say someone arrives by themselves, maybe alone in a room, and is confronted with something physical, material, or technological, and the journey to discover what patterns emerge has begun. Even though an installation is not considered a narrative form, any interaction is always temporal.

The second has to do with agency. It’s very tempting as an artist to create a work and have figured out exactly what experience you want your audience to have and to think that that’s going to be an interactive experience even though you’ve already decided it. Then you spend all your time locking down everything that could happen in the space to make sure the experience you envisaged happens. I think if you do this you may as well have made a non-interactive artwork, as I believe the power of interactivity in art lies in the receiver having agency over what unfolds.

Therefore, I think the question of agency in music is fundamental. When we are in the audience watching music a lot of what we get out of it is witnessing someone express themselves skillfully. Take virtuosity, that comes down to witnessing someone have agency in a space and really do something with it.

How exactly do you think about agency in relation to installations?

In an interactive installation, it’s important to consider the agency of the person coming in. You want to ask, how much freedom are we going to give this person? How broad is the span of possible outcomes? If we’re doing something with rhythm and step sequencing are we going to quantise those rhythms so everything sounds like a techno track? Or are we going to rely on the person’s own sense of rhythm and allow them to decide whether to make it sound like a techno track or not?

It all comes down to the question of what is the point of it being interactive. While it is important to have some things be controllable, a lot of the pleasure and fun of interactive stuff is allowing for the unexpected, and therefore I find the best approach when building an installation is to get it in front of unknown people as soon as possible. Being open to the unexpected does not mean you cannot fail. An important reason for getting a work in front of fresh people is to understand how far they are getting into the work. If they don’t understand how to affect and influence the work then they don’t have any agency, and there won’t be any sense of emergence.
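As an aside, the quantisation trade-off described above can be sketched in a few lines of Python. This is purely illustrative (not code from any of Tim’s installations); the `strength` parameter stands in for how much agency the system leaves to the visitor’s own timing:

```python
def quantise(onsets, grid=0.25, strength=1.0):
    """Pull note onset times (in seconds) toward the nearest grid
    point. strength=1.0 snaps fully, so everything lands on the grid;
    strength=0.0 leaves the performer's timing untouched."""
    quantised = []
    for t in onsets:
        snapped = round(t / grid) * grid
        quantised.append(t + strength * (snapped - t))
    return quantised
```

At full strength every input ‘sounds like a techno track’; at zero, the installation trusts the visitor’s own sense of rhythm.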

Can you describe music in your childhood? You say you sang in choirs from the age of six to twelve. What was your experience of that?

At the time it burnt me out a little but I’m very thankful for it today. It was very much tied to an institution. It was very institutional music and it was obligatory. I was singing in two to three masses a week and learning piano and percussion. I stopped when I was about 13. I had a few changes in life, we moved country for a little bit and I went to a totally different kind of school and environment. It wasn’t until a few years later that I picked up the piano again, and only really in the last couple of years have I reconnected with my voice.

Your PhD seemed to be a turning point for you and a point of re-entry into music. Can you describe your PhD, and how that influenced your life?

I began doing a PhD looking at generative music, and as I was trying to figure out what the PhD would be I had an opportunity to do a sound installation in these underground vaults in London Bridge Station with a random bunch of people in my research group. They were doing an installation there and someone had some proximity sensors I could use. There was an artist who had some projections which were going up and I made a generative soundscape for it. Being in the space and seeing the impact of that work in a spatial context really shifted my focus. I felt quite strongly that I wanted to make installations rather than just music, and I reoriented my PhD to figure out how to make it about that. I was also confronted with the gulf of expectation and reality in interactive art. I thought the interactivity was too obvious if anything, but then as I sat and watched people enter the space, most did not even realise the piece was interactive.

How do these questions sit with you today?

From an academic perspective, it was a really terrible idea because a PhD is supposed to be quite focused, and I was questioning how you can make interactive music more captivating. I had this sense in my head of what an interactive music experience could be: as immersive, durational, and gripping as a musical experience. Nearly every interactive sound work I was finding ended up being quite a brief experience – you kind of just work out all the things you can do and then you’re done.

I saw this pattern in my own work too. My experience in making interactive sound works was much more limited back then, but I saw a common pattern of taking processes from recorded music and making them interactive. My approach was to ask ‘Well, what is music really? Why do we like it?’ and all kinds of answers come up about emerging structures, belonging, and self-expression, so then the question was how we can create interactive works that embody those qualities within the interactivity itself.

What it left me with was not such a clear pathway into academia, because I hadn’t arrived at some clear and completed research finding, but I had immersed myself so fundamentally in trying to answer this question: how can I make captivating interactive music experiences?

What did you find?

On the question of interaction with technology, I think the most fundamental quality of technology is interaction, human-computer interaction. How is it affecting us? How are we affecting it? How does that ongoing relationship develop?

There is so much within those questions, and yet interactivity is often just tacked on to an existing artwork or introduced in a conventional way because that is how things are done. In fact, the way you do interactivity says a lot about who you are and how you see the world. How you design interaction is similar to how you make music: there are many ways, and each has a political interpretation that can be valuable in different contexts.

Who has influenced you in this respect?

The biggest influence on me at the point where I’d finished my PhD and commenced Cave of Sounds was the book Musicking by Christopher Small.

The shift in mindset goes from thinking that music is something done by musicians on a stage and received by everyone else around them, to seeing it as a collective act that everybody is participating in together – and if there weren’t an audience there to receive it, the musician couldn’t be participating in the same music.

What I found informative is to take a relativist view on different musical cultures. Whether it is a rock concert, classical concert, folk session, or jazz jam, you can think of them as being different forms of this same thing, just with different parameters of where the agency is.

For instance, if you’re jamming with friends in a circle around a table there is space for improvisation and for everybody to create sound. This has an egalitarian nature to it. Whereas with an orchestra there is little scope for the musicians to choose what notes they play, but a huge scope for them to demonstrate technical virtuosity and skill, and I don’t think there’s anything wrong with that. I love orchestral music. I think there is beauty to the coordination and power. I can see how it could be abused politically, but it’s still a thing that I feel in my body when I experience it, and I want to be able to access that feeling.

What I’m most suspicious about are stadium-level concerts. The idolisation of one individual on a stage with everyone in the crowd going emotionally out of control. It is kind of this demagogue/mob relationship. People talk about these Trump rallies as if they’re like rock concerts, and it’s that kind of relationship that is abused politically.

You can read more of this interview in Part 2 which will follow shortly, where we discuss the future of music as well as practical advice for building installations. To find out more about Tim Murray-Browne you can visit his website or follow him on Instagram or X.

Exploring the 2023 MIDI Innovation Awards

Dom Aversano

In Jaron Lanier’s cult classic technology manifesto, You Are Not a Gadget, the writer outlines a concept he calls lock-in: the point at which, through mass adoption, a technology becomes so deeply embedded in a culture that it is difficult to improve or remove without massive effort, even if its design is fundamentally flawed. The British road system exemplifies this, with its twisting, narrow lanes designed for horse-drawn carts, which, while somewhat charming relics of a bygone era, can make it impossible to create separated bike lanes without bulldozing entire sections of cities. Lanier provides another example: MIDI, which he perceives as a reductive and delimiting music language that shrinks our conception of music to the functioning of keyboards, yet which he predicts will persist well into the future, due to the huge work it would take to extract it from our musical infrastructure.

More than a decade after Lanier’s book was published, his prediction that MIDI would persist has been vindicated. However, Lanier may have underestimated the extent to which, unlike the British road system, MIDI has the capacity to transform itself without a major uprooting – which is the intention of MIDI 2.0. Anyone who has followed the non-starter of Web 3.0 will know that technological advancement requires more than adding a number and a decimal place to an existing technology. However, this new version of MIDI offers genuinely new capabilities, such as bidirectionality, backwards compatibility, a finer resolution of detail, and the capacity for instruments to communicate with greater sophistication.
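To make ‘finer resolution’ concrete: MIDI 1.0 encodes note velocity in 7 bits (128 levels), whereas MIDI 2.0 allows 16-bit velocity (65,536 levels). One common way to widen a 7-bit value to 16 bits is bit replication, sketched below in Python; note this is an illustration of the resolution gap, not the official MIDI 2.0 translation algorithm:

```python
def upscale_7_to_16(v7: int) -> int:
    """Widen a 7-bit MIDI 1.0 value (0-127) to 16 bits by repeating
    its bits, so 0 maps to 0 and 127 maps to 65535 (0xFFFF)."""
    if not 0 <= v7 <= 127:
        raise ValueError("expected a 7-bit value")
    return (v7 << 9) | (v7 << 2) | (v7 >> 5)

# A MIDI 1.0 velocity occupies one of 128 steps; the same gesture
# in MIDI 2.0 can land on any of 65,536 steps.
```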

Browsing through the entrants and finalists on the MIDI Association website reminded me of the show-and-tell-type events Music Hackspace put on in its early days. There is a nice balance between slick and sophisticated products built by established companies and eccentric innovations made in a shed by a devoted individual. As is the nature of these things, most of the innovations will not make their way to the mass market (presuming they were designed for it at all) but this does not detract from the creative value of the work. It is inspiring to see people make the brave effort of taking ideas from their imagination and putting them into the real world, so providing an audience for their efforts helps motivate and stimulate this innovation, by demonstrating that it has value and importance in our culture.

For the last few days, I have had the pleasure of indulging in a kind of digital sauntering, where I have explored and browsed through the wonderful collection of innovations on display. One original-looking instrument that immediately caught my eye is the Abacusynth, which as its name suggests, is built in the style of an abacus. The synth is intended to emphasise musical timbre with its creator stating:

“Timbral modulation is arguably just as ‘musical’ as melody or rhythm, but it’s not often emphasized for someone learning music, usually due to the complexity of synthesizer interfaces”.

One ingenious aspect of its interface design is that spinning the blocks creates a modulation effect, aligning the visual and kinetic aspects of the instrument in a playful and intuitive way.

Another visually appealing instrument is the Beat Scholar, which uses a novel pizza-slice-type interface to subdivide rhythms, provoking the visual imagination and making the likes of quintuplet and septuplet subdivisions much less intimidating. It is a much more visually appealing representation of rhythm than your average piano roll sequencer, where the interface for advanced rhythms often feels like an afterthought.

When it comes to slickness, Roland’s AE-30 Aerophone Pro jumps out, with the company claiming it is ‘the most fully-integrated and advanced MIDI wind controller ever created.’ It uses a saxophone key layout and mouthpiece, with a Bluetooth connection freeing players to move around. It looks and sounds like a promising alternative to the keyboard and drum machine hegemony of electronic music, but its adoption will ultimately depend on the verdict of seasoned wind players.

Finally, a music installation that stood out for its elegantly simple design is Sound Sculpture, in which a crowd collaborates to move glowing blocks around; the blocks communicate their positions to form a sequencer that creates a musical pattern. Watching people collaborate with strangers in this audio/visual artwork is particularly inspiring.

“This project utilizes 25 cubes in a space typically about the size of a half-basketball court. This spatial realization of composing, with blocks, allows multiple people to collaborate, co-compose as a community, and together create structures, rhythms, melodies, and harmonies.”

Whether you are in the depths of Argentina’s Patagonia or the buzzing metropolis of Lagos, you can join online to find out who the winners of this year’s MIDI Innovation Awards are, in a live-streamed 90-minute show on Saturday, September 16th (10 am PDT / 1 pm EDT / 6 pm BST / 7 pm CET) that will be hosted by music YouTubers Tantacrul and Look Mum No Computer.

Creating soundtracks to transform the taste of wine

Dom Aversano

When I was asked to interview Soundpear I questioned if I was the right person for the job. The company specialises in composing music to enhance the flavour of wine at their tasting events in Greece, stating that they ‘meticulously design bespoke music to match the sensory profile of a paired product.’ I, on the other hand, am almost proudly philistine about wine, only drinking it at events and parties when it is put into my hand. I find the rituals and mystification of this ancient grape juice generally more off-putting than alluring, especially given that studies show something as small as swapping a cheap-looking label on a bottle for an expensive one, or putting red dye into white wine, is sufficient to change the opinions of even seasoned wine drinkers and sommeliers.

Yet, perhaps who better to do the interview than someone whose preferred notes are not the subtle hints of caramel, oak, or cherry, but the opening riff of John Coltrane’s Giant Steps.

Despite my scepticism, I was interested in talking to the company as the connection between music and taste is one that is rarely explored.

The three of us met on Zoom, each calling from a different European country. Asteris Zacharakis lives in Greece and is a researcher at the School of Music Studies at the Aristotle University of Thessaloniki, as well as an amateur winemaker, whereas Vasilis Paras is a music producer and multi-instrumentalist living outside of London. While the pair originally met playing in a band twenty years ago, their collaboration now involves Asteris hosting wine-tasting events in Greece, while Vasilis composes bespoke music for each variety of wine sampled.

Our conversation turns quickly to the science supporting the idea that the taste of wine can be enhanced by sound. Asteris has a passion for multimodal perception — a science that studies how our senses process in combination rather than in isolation. A famous example is the McGurk Effect, which shows that when a person sees a video of someone uttering a syllable (e.g., ga ga) but hears an overdub of a different-sounding syllable (e.g., ba ba) this sensory incongruence results in the perception of a third non-existing syllable (da da).

‘There is evidence that if you sit around a round table with no corners, it’s easier to come into agreement with your colleagues than if there are angles.’

Regarding how this could allow us to experience things differently, Asteris describes: ‘It’s been shown through research that by manipulating inputs from various senses we can obtain more complex and interesting experiences. This does not just work in the laboratory, it’s how our brains work.’

Soundpear treats the drinking of wine and listening to music as a unified experience, similar to how films unify moving images and music. I am curious how the science translates directly into Soundpear’s work since musicians and winemakers must have worked in this way for centuries — if only guided by intuition. Surely a person drinking wine on a beautiful hilltop village in the South of France while listening to a musician playing the violin is having a multimodal experience? Asteris is quick to clarify that far from being exclusive, multimodal perception occurs all the time, and is not dependent on some specialist scientific understanding.

‘Musicians become famous because they do something cognitively meaningful and potentially novel, but I doubt that in all but a few cases they’re informed by the science, and they don’t need to be. Take a painter and their art. If a neuroscientist goes and analyses what the painter is doing, they could come up with some rules of visual perception they believe the artist is taking advantage of. However, successful artists have an inherent understanding of the rules without having the scientific insight of a neuroscientist.’

Multimodal perception offers insights into how sound affects taste. For example, high notes can enhance the taste of sourness, while low notes enhance our sense of bitterness. Vasilis recounts how initially the duo had experimented with more complex recorded music but decided to strip things down and use simple electronic sounds.

‘We thought, why don’t we take this to the absolute basic level, like subtractive synthesis?

‘Let’s start with sine waves, and tweak them to see how people respond. What do they associate with sweetness? What do they associate with sourness, and how do these translate in the raw tone? Then people can generally agree certain sounds are sour. From that, we try to combine these techniques to create more complicated timbres that represent more complicated aromas, until we work our way up to a bottle of wine.’

Asteris joins in on this theme ‘For example, the literature suggests that we tend to associate a sweet taste or aroma with consonant timbres, whereas saltiness corresponds to more staccato music, and bitterness is associated with lower frequencies and rough textures. Based on this, we knew if we wanted to make the sonic representation of a cherry aroma it needed to be both sweet and sour. So we decided we should combine a dissonant component to add some sourness and at the same time a concordant component to account for the sweetness’.
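Soundpear has not published its synthesis code, but the recipe Asteris describes can be sketched with a handful of sine oscillators: a consonant interval (a perfect fifth) standing in for sweetness, blended with a closely detuned, beating pair for sourness. All frequencies and mix ratios here are hypothetical:

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def tone(freq, dur, sr=SAMPLE_RATE):
    """A plain sine wave at the given frequency and duration."""
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * freq * t)

def cherry_timbre(base=440.0, dur=2.0):
    """Mix a consonant interval ('sweet') with a rough, detuned
    pair ('sour'), then normalise the result to [-1, 1]."""
    sweet = tone(base, dur) + tone(base * 1.5, dur)         # perfect fifth
    sour = tone(base * 1.06, dur) + tone(base * 1.10, dur)  # beating pair
    mix = 0.7 * sweet + 0.3 * sour
    return mix / np.max(np.abs(mix))
```

Writing the result to a WAV file (for instance with the standard-library `wave` module) lets you hear how shifting the 0.7/0.3 balance tilts the timbre between ‘sweet’ and ‘sour’.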

They tested these sounds on each other but also experimented with participants. Asteris describes their process: ‘From our library of sounds we pick some and perform experiments in an academic lab environment, to either confirm or disprove our hypotheses. Our sound-aroma correspondence assumptions are proven right in some cases, but in other cases where participants don’t agree with our assumed association, we discard it and say

“Okay, we thought that sound would be a good representative for this scent but apparently it’s not.”’

I ask if anyone can try out pairing their music with wine. Vasilis is hesitant about this, pointing out that while they have a publicly available playlist on YouTube, using it as intended would require listeners to seek out specific bottles of wine. When I ask if these could be interchangeable with other bottles he draws a comparison with film music, stating that while you could theoretically change one film score for another, it likely would clash.

At this point, I feel my initial resistance giving way. Suddenly the thought of basking in the Greek sun listening to music and drinking wine feels much more appealing — maybe being a wine philistine is overrated. What I find refreshing about the duo is they are not overplaying the science, but appear to actually be having fun combining their talents to explore a new field between taste and music. It is not the cynical banalisation of music that Spotify often promotes, using playlists with names like ‘Music for your morning coffee’. Rather than treating the experience as an afterthought Soundpear is designing its music specifically for it.

However, one question still lingers: I ask how much they believe their work can carry across cultures. Asteris accepts that neither the effect of the music nor the taste of the wine can be considered universal, and that their appeal is largely to audiences drawn from cultures considered Western. It is an honest answer, and not a surprising one, given that music and drink rarely appeal to genuinely global audiences anyway, especially as alcohol is illegal or taboo throughout much of the world.

So, what of the music?

Vasilis composes with a certain mellifluous euphoria reminiscent at times of Boards of Canada and the film composer Michael Giacchino’s soundtrack for Inside Out, though with a more minimalist timbral palette than either. The tone and mood seem appropriate for accompanying a feeling of tipsiness and disinhibition. I even detect a subtle narrative structure that I assume accompanies the opening of the bottle, the initial taste, and the aftertaste. It is not hard to imagine the music working in the context of a tasting session, and people enjoying themselves.

Soundpear appears to be attempting to broaden how we combine our senses with the goal of opening people up to new experiences, which regardless of whether you are interested in wine or not is undoubtedly interesting. It is an invitation to multidisciplinary collaboration since the principles applied to wine could just as easily be applied to coffee, architecture, or natural landscapes. The attention they bring to multimodal perception makes one question whether music could be used in new ways, and that can only be a good thing.

Music Hackspace will host a workshop with Soundpear on Friday 22nd September 6pm UK

The sound of wine: transform your wine-tasting experiences through music-wine pairing

Soundpear are planning a music-wine pairing event at the Winemakers of Northern Greece Association headquarters in Thessaloniki this October – so stay tuned for more details!

Strategies experts use to learn programming languages

Dom Aversano

"U.S. Army Photo" from the archives of the ARL Technical Library. Left: Betty Jennings (Mrs. Bartik), right: Frances Bilas (Mrs. Spence).

Learning a programming language – not least of all one’s first language – can feel intimidating, especially when observing others doing complex tasks with apparent ease. Furthermore, the circumstances in which one learns can vary greatly. One person might be 19 years old and entering a degree program with plenty of free time, while another is moonlighting on an old computer between childcare and other responsibilities. Regardless of our circumstances, we can adopt an attitude and approach to learning that allows us to make the best use of the time we have. What follows is advice, with tips from leading music programmers and artists.

Enjoy learning

It might sound trite, but it is essential. It is easy to motivate ourselves to do something we love. If learning is enjoyable you will do more with greater focus and energy. Create a beautiful environment to work in, inspiring projects to develop, and desirable long-term goals that are ambitious enough to keep you practising regularly. Create the conditions in which action comes naturally, since to borrow the words of Pablo Picasso, ‘Action is the foundational key to all success.’

Some people like learning by exploring and modifying existing code written by others. I envy them because I think they move faster. However I find more pleasure in learning from the ground up so I understand every line of code in my project. My preference is to follow a tutorial (e.g. Dan Shiffman’s) and do small exercises. – Tim Murray-Browne

Learn through projects

We learn by doing. Tutorials are essential, but if they are not complemented with the development of projects you might experience ‘tutorial fatigue’, losing motivation and inspiration amid a constant reel of videos. Start with simple programs you can build quickly before working up to more complex ones. Small and simple is beautiful. 

I have a folder where I document and store all my ideas for projects. I write everything down in plain language describing what the program will do without any consideration for how it will work. Only after this do I give some consideration to how the program might work architecturally, before deciding if I should create it now, wait, or simply store it as an idea. Even if I never create the project, documenting my ideas demonstrates they have a value I would not entrust to just memory.

Love the one you’re with

It is better to learn one language expertly than five shallowly. Take time to decide what you want to learn rather than impulsively jumping in; after all, you might spend thousands of hours with the program, so you want it to align with your character and needs. Give yourself a realistic amount of time to learn it before embarking on another language, unless you genuinely have the time to learn languages simultaneously.

I learned Pure Data partly because I was attracted to the way it looked. That might seem superficial but I know visual aesthetics affect me, and if I was going to look at a program for hundreds or thousands of hours I wanted to like its appearance. I now prefer traditional code, but my love for Pure Data and its black-and-white simplicity taught me to think as a coder. 

Do not worry about being mocked for asking questions – asking others for help builds relationships, strengthens the community, and can even lead to employment. If people want to put you down for asking basic questions, it says more about them than about you, so always reach out! – Elise Plans

Build a physical library

A friend who worked as a programmer for a big technology company advised me not to read books about programming, arguing that learning to program is non-linear and therefore unsuited to books. This did not work for me. We all have the same access to digital information, but physical libraries reflect our interests, priorities, and values, and act as private mental spaces. 

Although Daniel Shiffman’s books and the SuperCollider book are available for free online, I bought physical copies as I find reading from paper conducive to a quieter, less distracted, and more reflective state of mind. As it happens I often read the books in a non-linear manner, reading the chapter that seems most appealing or relevant to me at that time. My library extends out in different directions, containing musicology and biography, as well as physics and philosophy, yet all feel somehow connected. 

Read other people’s code

A revelation for most people learning to code is that there is rarely a single correct way to do something. Coding is a form of self-expression that reflects our theories and models of the world, and as with all creative activities, we eventually develop a style. Reading other people’s code exposes us to other approaches, allowing us to understand and even empathise with their creative world. Just as when we learn a foreign language we read books to help us, reading code allows us to internalise the grammar and style of good code.

Music technology and programming may seem limitless in possibility – but you quickly find limitations if you step outside of conventional concepts of what music has been defined as before. So if you aren’t running up against limitations, it’s likely you aren’t thinking in a way which is original or ambitious enough. – Robert Thomas

Be wary of the promises of AI

Machine learning is impressive, but as Joseph Weizenbaum’s famous program ELIZA, created at MIT in 1964–66, demonstrated, we have a potentially dangerous tendency to project onto machines mental capabilities they do not possess.

While learning SuperCollider I used ChatGPT to help with some problems. After the initial amazement at receiving coherent responses from a machine using natural language came the more sober realisation that the code often contained basic errors, invented syntax, and impractical solutions that a beginner might not recognise as such. It was obvious to me that ChatGPT did not understand SuperCollider in the meaningful sense that expert programmers do.

Machine learning is undoubtedly going to influence the world hugely, and coding not least of all, but the current models have a slick manner of offering poor code with absolute confidence. 

Photo by Robin Parmar
For mistakes that I may have made – lots of them! All the time. It’s probably cliché to say, but understanding your mistakes can be the best way to learn something. Although you come to think of them less as mistakes and more as happy accidents. Sometimes typing the “wrong” value can actually give you an interesting sound or pattern that you weren’t intending but pushes you in a new creative direction. – Lizzie Wilson, Digital Selves

Hopefully, some of the ideas and advice in this article have been helpful. There are of course as many ways to learn a programming language as there are people, but regardless of the path, there is always a social element to learning and collaboration. And in that spirit, if you have any advice or ideas that you would like to share, please feel free to do so in the comments below.

A guide to seven powerful programs for music and visuals

Dom Aversano


The British saxophonist Shabaka Hutchings described an approach to learning music that reduces it to two tasks: the first is to know what to practise, and the second is to practise it. The same approach works for coding, though being a simple philosophy does not make it an easy one. Knowing what to practise can feel daunting amid such a huge array of tools and approaches, making it all the more important to be clear about what you wish to learn, so you can then devote yourself without doubt or distraction to the task of studying.

As ever the most important thing is not the tool but the skills, knowledge, and imagination of the person using it. However, nobody wants to attempt to hammer a nail into the wall with a screwdriver. Some programs are more suited to certain tasks than others, so it is important to have a sense of their strengths and weaknesses before taking serious steps into learning them.

What follows is a summary and description of some popular programs to help you navigate your way to what inspires you most, so you can learn with passion and energy.

Pure Data

Pure Data is an open-source programming language for audio and visual (GEM) coding that was developed by Miller Puckette in the mid-1990s. It is a dataflow language where objects are patched together using cords, in a manner appealing to those who like to conceptualise programs as a network of physical objects. 

Getting started in Pure Data is not especially difficult even without any programming experience, since it has good documentation and plenty of tutorials. You can build interesting and simple programs within days or weeks, and with experience, it is possible to build complex and professional programs.

The tactile and playful process of patching things together also represents a weakness of Pure Data, since once your programs become more advanced you need increasing numbers of patch cables, and dragging hundreds – or even thousands – of them from one place to another becomes monotonous work.

Cost: free

Introductory Tutorial 

Official Website

Max/MSP/Jitter and Max for Live

Max/MSP is Pure Data’s sibling, which makes it quite easy to migrate from one program to the other, but there are significant and important differences too. The graphical user interface (GUI) for Max is more refined and allows for organising patch cords in elegant ways that help mental clarity. With Max for Live you have Max built into Ableton – bringing together two powerful programs.

Max has a big community surrounding it in which you can find plenty of tutorials, Discord channels, and a vast library of instruments to pull apart. Just as Pure Data has GEM for visualisation, Max has Jitter, in which you can create highly sophisticated visuals. All in all, this represents an incredibly powerful setup for music and visuals.

The potential downsides are that Max is a paid program, so if you’re on a small budget Pure Data might be better suited. It also suffers from the same patch-cord fatigue as Pure Data, where you can end up attaching cords from one place to another in a repetitive manner.

Cost: $9.99 per month / $399 permanent licence or $250 for students and teachers

Introductory Tutorial

Official Website


SuperCollider

SuperCollider is an open-source language developed by James McCartney and released in 1996, and a more traditional programming language than either Pure Data or Max. If you enjoy coding it is an immensely powerful tool where your imagination is the limit when it comes to sound design, since with as little as a single line of code you can create stunning musical output.

However, SuperCollider is difficult, so if you have no programming experience expect to put in many hours before you feel comfortable. Its documentation is inconsistent and written in a way that sometimes assumes a high level of technical understanding. Thankfully, there is a generous and helpful online forum that is very welcoming to newcomers, so if you are determined to learn, do not be put off by the challenge.

An area that SuperCollider is lacking in comparison to Max and Pure Data is a sophisticated built-in environment for visuals, and although you can use it to create GUIs, they do not have the same elegance as in Max.

Cost: free

Introductory Tutorial 

Official website


TidalCycles

Though built on top of SuperCollider, TidalCycles is much easier to learn. Designed for the creation of algorithmic music, it is popular in live coding and algorave scenes. The language is intuitive and uses musical terminology in its syntax, giving people with an existing understanding of music an easy way into coding. There is a community built around it, complete with Discord channels and an active community blog.

The downsides to TidalCycles are that installation is difficult, and that it is a somewhat specialist tool without the broad capabilities of the aforementioned programs.

Cost: free

Introductory Tutorial 

Official Website


P5JS

P5JS is an open-source JavaScript library that is a tool of choice for generative visual artists. The combination of a gentle learning curve and the ease of running it straight from your browser makes it easy to incorporate into one’s practice, either as a simple tool for sketching out visual ideas or as something much more powerful, capable of generating world-class works of art.

It is hard to mention P5JS without also mentioning Daniel Shiffman, one of the most charismatic, humorous, and engaging programming teachers, who has rightly earned himself a reputation as such. He is the author of a fascinating book called The Nature of Code, which takes inspiration from natural systems and, like P5JS, is open-source and freely available.

Cost: free

Introductory Tutorial

Official Website


Tone.js

Like P5JS, Tone.js is a JavaScript library, and one that opens the door to a whole world of musical possibilities in the web browser. In the words of its creators it ‘offers common DAW (digital audio workstation) features like a global transport for synchronizing and scheduling events as well as prebuilt synths and effects’ while allowing for ‘high-performance building blocks to create your own synthesizers, effects, and complex control signals.’

Since it is web-based, one can get a feel for it by delving into some of the examples on offer.

Cost: free

Introductory Tutorial

Official website


TouchDesigner

In TouchDesigner you can create magnificent live 3D visuals without the need for coding. Its visual modular environment allows you to patch together modules in intuitive and creative ways, and it is easy to input MIDI or OSC if you want to add a new visual dimension to your music. To help you learn there is an active forum, live meetups, and many tutorial videos. While the initial stages of using TouchDesigner are not difficult, one can become virtuosic with it, even writing your own code in the programming language Python.

There is a showcase of work made using TouchDesigner on their website which gives you a sense of what it is capable of.

Cost: Pro licence (all features) $2,200 / Commercial licence $600 / free for personal and non-commercial use.

Introductory Tutorial

Official Website

‘Why I started Music Hackspace’: Jean-Baptiste Thiebaut

Dom Aversano

How I stumbled across Music Hackspace is a hazy memory. I remember turning up in a nondescript industrial estate in the hip part of East London, Hoxton. When I finally found the right door I walked into a room mid-presentation, momentarily drawing the gaze of a dozen or so people who were casually sitting around listening in a state of deep concentration. They had the appearance of engineers, artists, academics, eccentrics, and hobbyists.

Afterwards, there was socialising, with beer and an English idea of what constitutes pizza. The studio had various bits of hardware scattered around and posters on the wall for the Anarchist Bookfair. I spotted one person who had set up a turntable that spun colourful bespoke mats, whose patterns fed into a camera that turned them into sound. Its maker was a softly spoken, eloquent man with a French accent, dressed a bit more smartly than everyone else. It turned out this was Jean-Baptiste Thiebaut, or JB, who had started the group and was knowledgeable about both music and technology.

Jump forward a little more than a decade and the Music Hackspace has grown and gone through many changes. During that period I remember one meeting in the basement of Troyganic Cafe where about three people turned up, and I thought to myself, ‘This is finished’. JB maintained a more philosophical approach, shrugging it off as a down phase, and he was vindicated a couple of years later when the Music Hackspace found a home for itself in the prestigious Somerset House Gallery in central London – a short walk from Big Ben.

I have known JB for more than a decade now, but I realise that in all this time I never knew what motivated him to start Music Hackspace, or the details of his background. An interview seemed a good opportunity to delve into this.

Can you describe your background and what drew you to London?

I was born in Normandy, France, the son of a farmer. My father farmed cereals. He had fields of wheat, peas, and barley, and later started his own brewery. I got into tech, engineering, and music, and became passionate about research. I went to French conferences on the topic but I felt that the world was bigger and started following international conferences. 

I needed to speak English and be within an Anglo-Saxon community. London was close, and I had funding for research. I started in 2005 at the Center for Digital Music at Queen Mary, and I stayed in London after that, working as a software developer at Focusrite. I never returned to France, I love London! In my research centre, my colleagues were from all over the world and I loved that diversity. 

There were a lot of people who also came from small villages in their countries and wanted to see the world, and wanted to be where it’s at. The thought of returning to a society more centred around its own culture and less towards a global culture did not appeal. I wanted diversity. 

What inspired you to create Music Hackspace? 

It was 2011 and I was fresh off my PhD at Queen Mary, but I didn’t really know what to do. I was full of ideas, and I had recently become the innovation manager for Focusrite. A lot of new things were happening at that time in the music manufacturing industry: Ableton had released Push, Native Instruments the Maschine, synthesizers no longer cost the price of a house as they had 30 years before, and music software was booming. It was an interesting time to think about new products.

I ventured into the basement of Focusrite one day, and noticed that a lot of prototyping equipment was going to scrap: PCBs of synthesisers or prototypes of Launchpad that would never be used. So I went to my manager and asked permission to repurpose it.

I was a member of the London Hackspace at the time, and I sent a message asking if someone would be interested in tinkering with those bits of equipment going to scrap. Two people responded, Martin Klang and Philip Klevberger. They came, we filled up the trunk and we said: “OK, let’s meet next Tuesday in Hoxton at the London Hackspace and invite anyone who wants to have a go, and we will just hack for fun that evening”. So we did. And the Music Hackspace was born there and then!

How was the evening? 

Twenty people showed up from all walks of life: musicians wanting to create things to support their career and their artistic vision, engineers working in finance or legal firms – but musicians as well – who wanted their skills as engineers to be used for the arts. So you had these two groups that wanted something and they could achieve their goals by collaborating. 

So I had this eureka moment thinking ‘this is great, I’m also myself on both sides because I’m trained as a composer and trained as an engineer’. You can work on your art or you can work on building tools for artists and bringing the two together was my goal. I felt I found a kind of home. So I decided to honour the fact that we came from the London Hackspace and called it Music Hackspace. 

How did things progress from there?

Focusrite liked what I was doing and was very supportive, giving me the afternoon off to travel to London and a small budget to buy pizzas for everybody. Our meetings grew in popularity and eventually, we had to stop meeting at the London Hackspace because we were making too much noise. Once a week we would take over the Hackspace with a presentation and Q&A with researchers, artists and engineers. When we had to move, Martin Klang – who was doing all kinds of interesting things like building open-source effect pedals – invited us to his studio which was next door, and was big enough to host about 60 people, and that lasted until he moved out.

The Music Hackspace was initially motivated by my curiosity about innovation in music, which I think stemmed from my education and the work I was doing at Focusrite. It was not meant as a company or anything. It was a chance encounter between passionate people, on the lookout for new ways of being expressive; new ways of merging tech and art. 

I’m curious that you were a member of the London Hackspace, what drew you to that? Did hacking culture appeal to you? 

Yes, the hacking and DIY culture was very appealing to me. I finished my PhD with a lot of theoretical knowledge but not much practical knowledge. I wanted to tinker, and the London Hackspace was a very welcoming place with a lot of equipment and all sorts of fascinating, exuberant folks with wacky ideas and tremendous knowledge of electronics. I had at this point no experience whatsoever in DIY electronics, but I got into it. Arduino was just starting, and had a lot of hype. I found it fascinating that you could build your own embedded computer and augment instruments with a portable microchip that could analyse signals and embed intelligence into instruments. 

I found the values appealing too, and it was important to keep them as the community developed. The Music Hackspace was this free space that Martin and I hosted every Thursday night, to exchange ideas, get inspired and collaborate very freely. We had Max meetups where people would come and help each other. Someone would show a project on their screen and say, ‘Hey, this is what I’m working on. This is my problem.’ Other people would say, ‘Oh, here’s an idea that might help.’

Members were naturally collaborating over the years, with a few Kickstarter projects coming from the members. The Hoxton Owl guitar pedal, Touch Keys and then Bela all involved members of the Music Hackspace. Their Kickstarter videos were all filmed by Susanna Garcia, who was a director of Music Hackspace from 2014 to 2019, and runs her own film company, Mind The Film. Slowly our network grew so that when artists were visiting London from abroad, we would ask them to come and talk about their work. Over the years, we ran over 800 events, and many of our members and speakers went on to build great careers in the music industry, as researchers, entrepreneurs, artists or developers. Tadeo Sendon was also a co-director and played a major role during this time, leading the curation of sound art events at Somerset House, securing our first grants, and building connections in London’s artistic community. 

In 2020, Music Hackspace started to teach online courses, how did that happen?

In 2019, I decided to commit to the Music Hackspace full-time, and turn what was a hobby into a business. I had 10 years of experience working for various music companies then, and I wanted to channel all that experience into developing the community. I had a business plan ready for us to have our own space and run events, but as COVID happened, those plans went out the window! The only way we could run any event was to host them online, and we started doing that. 

I had just finished working at Cycling ’74 then, and Darwin Grosse agreed to sponsor Max meetups and free sessions to teach Max to beginners. That was a huge boost for our online courses because we didn’t have much of an audience outside of London, let alone the UK. Later that year, TouchDesigner also offered to sponsor meetups and courses, and more partners followed during COVID. 

I interviewed the composer and programmer Robert Thomas here recently who sees music as moving away from traditional fixed recording, and towards what he describes as a more liquid existence facilitated by software, where music can do all the things software can do. I’m curious to what degree you share that vision. 

The history of the evolution of music was part of my research thesis, in particular retracing the convergence of technology with the complexity of music. There is a direct correlation between the complexity of the tools we use and the complexity of music. Notation was designed in the 9th century to record a single melody so that it could be fixed, transmitted and archived. It was simple, just one melody line with a rudimentary notion of rhythm. And then in the 12th century, it started to become more complex, with polyphony. Then the printing press arrives, and that changes everything. Suddenly scores are everywhere, and people sell the scores, and their ubiquity allows more people to play music, and for music to be shared across countries. The printing press was a massive boost for the dissemination of music. 

Fast forward to the 21st century and computers are now part of every aspect of the music creation process, from art gallery installations to live concerts and most music experiences. As to whether generative, experiential music is the future of music? Yes, but I think it is one of its many futures. Music that evolves based on your breathing, your surroundings, the time of day, and other factors definitely has a place in this world!

Kaija Saariaho’s lasting legacy on electronic music

Dom Aversano

© Kaija Saariaho Photo by Christophe Abramowitz courtesy of

The recent death of the Finnish composer Kaija Saariaho is a great shock and loss for music, as she was so greatly admired for her pioneering spirit and irreplaceably original voice. In 2019 BBC Music Magazine polled leading composers, and Saariaho emerged as the world’s greatest living composer. Her diverse repertoire touched upon many fields of music making, not least electronic music and computer-assisted composition.

Despite having admired the music of Saariaho for many years it was only in 2021, two years prior to her death, that I had the opportunity to hear her music performed live for the first time. Though I had previously enjoyed listening to recordings of her orchestral compositions, I had a sense this was music that demanded a live setting to bring it truly to life. 

A ripple of excitement travelled through Valencia’s orchestral members at the prospect of playing this music, given they do not often have the opportunity to perform pieces by living composers. Despite the pandemic, as well as the repeated refrain that contemporary music just doesn’t fill concert halls, two-thirds of the seats were filled with mask-wearing audience members. 

The music took on a new life when performed. Far from difficult, it felt enticing and mesmeric, constructed from a sonic language whose subtle logic could be learned in an autodidactic manner, through osmosis and exposure. That evening, I left the concert hall with a strong sense I had only dipped my toe in the music, and that it deserved and required an entire festival. 

If one agrees with the view that her instrumental music thrives in a concert hall, this is not necessarily the case for her electronic music, which can be fully enjoyed with a decent pair of headphones or speakers. While I do not pretend to be an expert on Saariaho’s music, I have revisited some of her compositions since her death, focusing on a formative period when she had relocated from her native Finland to France and was working at the influential Parisian research institute IRCAM (Institut de Recherche et Coordination Acoustique/Musique). I will share my thoughts and reflections on three of Saariaho’s compositions from this period in the 1980s.

1. Vers le blanc (1982)

As far as enigmatic music goes, this composition is right up there: no complete recording has ever been published, and it has been performed at only a select number of concerts. However, the minute score below gives us some conception of the music.

The score describes a transformation from one tone cluster to another (ABC -> DEF) over a gradual glissando (a glide from one pitch to another) that lasts an unusual fifteen minutes. The influence of French Spectralist music on a younger Saariaho is obvious, but to my mind, she taps into a broader zeitgeist. There is a similarity in the combination of musical simplicity and conceptual radicalism to John Cage’s 1952 composition 4′ 33″ and Steve Reich’s 1965 tape piece It’s Gonna Rain. These compositions are not simply sound worlds (or absences) to be enjoyed, but philosophical questions about the nature and direction of music. 

Despite the fact that no complete recording of the composition exists, in 2017 Landon Morrison, College Fellow in Music Theory at Harvard University, visited IRCAM in France and discovered some original audio of this composition, three excerpts of which can be heard here. Saariaho chose to use synthetic human voices, and is quoted as saying of the piece:

“(it) create(s) the illusion of an endless human voice, sustained and ‘non-breathing,’ which at times departs from its physical model”

I still wanted to hear some approximation of the composition in its entirety, so I could not resist programming something in SuperCollider. While the code below is in no sense an accurate representation of Saariaho’s work, not least because its timbre is made from sine waves rather than synthesised voices, it does give some sense of the gradual shift that occurs within her composition.

Due to not wishing to infringe on copyright or create an inaccurate recording I am sharing this as code which can be run in SuperCollider.

(
{
	var clusterStart = [48, 57, 59].midicps, clusterEnd = [52, 50, 53].midicps;
	var duration = 60 * 30;
	// three sine voices glide between the clusters; sum them and duplicate to stereo,, clusterEnd, duration), 0, 0.1).sum * 0.3 ! 2;

2. Lichtbogen (1985/86) 

While this composition does not use electronic sounds, it was composed with the help of computers. Saariaho manages the impressive feat of bringing the often seemingly disparate worlds of computers and nature into harmony. The name of the composition, she wrote,
‘stems from Northern Lights which I saw in the Arctic sky when starting to work on this piece’.
The sense of Finland’s deep nature combines with the exploratory intellectualism of Paris’s IRCAM, where computer music was researched and developed. She describes using two systems for harmony and rhythm, FORMES and CRIME, and explains how they assisted her composing: ‘These programmes allowed me to construct interpolations and transitions for different musical parameters… The calculated results have then been transcribed with approximations, which allows them to be playable to music notation.’
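Her account of constructing interpolations for musical parameters and then transcribing the results ‘with approximations’ can be sketched in a few lines of Python. This is only an illustration: the pitch values and step count are invented, and bear no relation to the actual FORMES or CRIME systems.

```python
# Sketch: interpolate a musical parameter between two values, then
# approximate the calculated results so they fit music notation.
# The endpoint pitches and step count are illustrative, not Saariaho's data.

def interpolate(start, end, steps):
    """Linearly interpolate a parameter between two values."""
    return [start + (end - start) * i / (steps - 1) for i in range(steps)]

def approximate(values):
    """Round calculated values to the nearest semitone so they are notatable."""
    return [round(v) for v in values]

exact = interpolate(60.0, 67.0, 8)   # a glide across a fifth, in MIDI pitch
notated = approximate(exact)         # playable approximation for notation
print(notated)                       # [60, 61, 62, 63, 64, 65, 66, 67]
```

The interesting step is the second one: the computed transition is continuous, but performers need discrete, notatable values, hence the approximation she describes.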

3. Stilleben (1987/88)  

I have listened to this composition many times and feel it is simultaneously direct and unknowable. The directness is in its autobiographical nature, describing a person living away from their native country, surrounded by three European languages: French, German, and Finnish, all of which had a personal significance to Saariaho. Similarly, the sounds of trains symbolise the cosmopolitan life of an internationally successful composer. It taps into a larger tradition of the use of trains in music, ranging from jazz composer Duke Ellington’s Take the A Train, to Brazilian composer Heitor Villa Lobos’s The Little Train of the Caipira, as well as New York composer Steve Reich’s Different Trains, which was written in the same year. 


The unknowable aspect of the music is the manner in which it has been arranged. It feels somewhat dreamlike, with its radiophonic nature lending it a cinematic element. Yet the recurrence of strings throughout appears to root the piece in the concert tradition. The recent rise of nationalism across Europe makes the piece feel almost controversial and political, as though it were a defence of internationalism, but I have seen no evidence of that being its original intention. Regardless of one’s interpretation, there is an expertise and maturity at work in the piece that is inspiring and beautiful, demonstrative of someone in possession of immense technical and artistic ability. 

On the future of music: an interview with composer Robert Thomas (part 2)

Dom Aversano


This is part 2 of an interview with composer Robert Thomas. You can read the first part here.

Q. I associate you with Pure Data. Is it still your primary tool? If so to what extent do you think tools shape one's work?

I use Pure Data a lot because it's a universally deployable tool, and you can make installations and all sorts of bespoke things with it. It can be used in apps, game engines, or the web. Also, as it's open source, it doesn't have any proprietary licences associated with it.

Everything I work with is either my own library, which I license to creators, or it's open source, so BSD licensed. It's easy to work with from a business perspective and it's well-supported and incredibly stable. From a creative perspective, I use it because it's real-time. I think to do creative things musically and sonically you have to be working in a real-time environment, not a compiled environment. It's always good to be open to happy accidents, which can happen when you're working in real-time, but are less likely when you're not. So I don't like compiling when I'm working.

And then, what was the second part of your question?

Q. The extent to which the tool may or may not be shaping your work.

I think in some ways it doesn't shape it at all because PD is just Digital Signal Processing and you can do anything you want with it. When you make a new patch it is really just a white page - it's very open, flexible, and at the same time, terrifyingly, overwhelmingly, even dangerously open-ended.

Therefore I've developed a good 'muscle of restraint' to control exactly what I want to do. You don't want to go down all kinds of undefined meanderings in PD. It’s not the place to do that. It can be interesting to try new things and be open to accidents but there's a balance, you need a strong idea before you start coding because the program is so open.

The constraints I was talking about earlier were not constraints about what can be done with the software, they are about what is possible with the wider technology. Some aspects of personalisation can be very difficult to know, such as with contextual and emotional detection, or biometrics. There are limits to what we can and can't reliably understand, which provide creative constraints and require you to work within a framework that is sometimes relatively simple.

A good example would be when you are working with an accelerometer to understand how someone is moving, or with a GPS to work out how fast someone is going. There is only a certain amount of fidelity you can get from that. There are practical considerations, like if you have the GPS whacked way up in accuracy on a device it's going to drain the battery, and the user is quickly going to get really annoyed with the experience.

So you need to say, 'Well, OK, we're going to make a judgement that it is OK after this amount of movement, or we're going to look at the GPS over this amount of time and decide when we think they are really moving, which will mean there's going to be a sudden change in the state of the user.' What could we do with an accelerometer? We can look at how it is changing over time and try to use step detection if they are walking. There is a lot of work in getting such algorithms accurate, which places a boundary around how you creatively respond. I think that is what shapes what you do creatively.
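The kind of judgement Robert describes, watching a noisy signal over a stretch of time before deciding the user's state has changed, can be sketched in Python. The window length and speed threshold below are invented for illustration and are not taken from any of his projects.

```python
# Sketch: decide whether a user is "really moving" from noisy GPS speed
# readings by averaging the last few samples before changing state.
# The window length and threshold are invented for illustration.

def is_moving(speeds, window=5, threshold=1.0):
    """True if the mean of the last `window` speed readings (m/s)
    exceeds `threshold`."""
    if len(speeds) < window:
        return False  # not enough data yet: stay in the resting state
    return sum(speeds[-window:]) / window > threshold

readings = [0.2, 0.1, 2.5, 1.8, 1.6, 1.9, 2.1]  # noisy speed samples
print(is_moving(readings))  # True: the recent average is well above 1 m/s
```

Averaging over a window is the simplest possible smoothing; the point is that any such scheme trades responsiveness for reliability, which is exactly the creative constraint he describes.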

A lot of the time what I am doing inside of DSP is relatively simple, and I try to make it as elegantly simple as I can, from the perspective of stability and reliability but also CPU and memory usage. The most desirable systems are actually the most simple and elegant. Those are the golden systems.

Another issue with tools, which happens in DAWs but especially in programming, is that there is satisfaction in solving complex tasks or challenges in clever ways. It's a very dangerous thing to get sucked into if it takes you away from creating a good musical experience. One major problem I see in the space with a lot of projects – and I've been sucked into this on some projects as well – is that you try to create a really clever system, but it sounds crap. Compare that to a studio production where someone is working with off-the-shelf tools and a DAW, using amazing plugins and rendering it down loads of times with intricate and polished production and writing. That's what we have to compete with. We have to be at the same level as that, and better, in real-time.

So if a system is super complex and really rewarding as a programming project but sounds crap, that's no good, because music lives or dies on emotional experience. If people don't enjoy it as a musical experience, it doesn't matter how clever it is. I see that as the biggest danger in this space.

Q. That's one risk I see with generative music. Algorithms are generating what is being heard, which is different from a live performer where it comes directly from them. With generative music there is the intermediary of the algorithm, and a risk of things sounding hollow and dehumanised. How does one get that deep emotional experience into the work?

Well, I think that's the art of creating algorithms from an artistic and humanist perspective, which has nothing to do with what is happening in machine-learning music at the moment. It is absolutely the opposite. I find it frustrating that the term generative AI has been co-opted by the machine-learning community, because the approach to generative music that Brian Eno, Autechre, and I take is to craft algorithms by hand. It is the polar opposite of throwing everything into a massive deep-learning network and never knowing what is happening inside it, which is what deep-learning language models do. This space is about carefully crafting algorithms to embody as much of yourself and of human expression as possible. That is what I am about.

I've heard Brian Eno talk about Steve Reich's influence on him and how Reich crafted music through systems. Reich was very specific about it, which opened up this very interesting possibility of a system's outputs being generative. So it is about the seeds and the rules.

When you're crafting things you need to listen to them for enormous amounts of time to hear all these different states, making sure it has an emotional and artistic impact. I think where things go wrong is when you try to either make generic algorithms that will make generic hip-hop, EDM, or ambient music, or even worse, a rule-based system that can make all kinds of different music. When you are that broad there is never going to be any specific quality to it. The worst is when you give up all control and completely entrust it to a network inside the system, such as deep-learning and large-language models where nobody understands what is inside the system. We are trying to make systems to understand what is inside them! How can that be an artistic endeavour? An artistic endeavour is a process of trying things and learning them. If systems are impenetrable I think it's very challenging to have an artistic interaction with them. I believe there are ways that machine learning will be helpful, but a fully automated, unsupervised, completely autonomous system is not particularly creative.

As for your question, I think that is the important thing. We need to incorporate into the system many aspects of what we do; things we all do. When I do workshops with musicians I ask: when you are playing, what are you doing? OK, you're doing these types of patterns rhythmically. Oh, you're doing these kinds of intervals. You're doing these types of phrasings. You're always swinging in this way. You're like, 'I'm not, not all of the time', but when you're doing that musical thing, that idea, what are you doing?

These are the things we need the algorithm to do: to distil down a process which is both the artist and an extension of them. It embodies many aspects of the artist, but it can do things that no artist can ever do: create live music for 1000 people all over the world all at once, which is different for each person. Those are the possibilities I'm interested in.

Here are some links if you are interested in knowing more about Robert Thomas’s work.


On the future of music: an interview with composer Robert Thomas (Part 1)

Dom Aversano

Five years ago I attended an event at South London’s experimental venue the Iklectik Art Lab. The night was organised by Hackoustic, a group of music hackers who use acoustic objects in their work and organise events for artists to make presentations and share ideas.

The headline speaker that night was the composer and audio programmer Robert Thomas. Despite him having worked with the likes of Hans Zimmer, Massive Attack, and the Los Angeles Philharmonic Orchestra, it was my first time encountering his work. I found the presentation refreshing and original, as he expounded a unique take on a potential future of non-linear, non-deterministic, and more responsive and dynamic music.

I took no notes during the presentation, and later when I tried to search for a good outline of Robert’s thinking I couldn’t find one. So I was delighted by the opportunity to interview Robert for Music Hackspace.

In this talk, we discuss the idea that digital music, rather than being represented as ‘frozen’ recordings, could potentially be expressed better through more ‘liquid’ and dynamic algorithms. What follows is a lightly edited transcript of the first part of our conversation.

Q. You have an interesting general philosophy about musical history, could you describe it?

We are often used to thinking about music in a particular way, as a fixed form, but it does not need to be the case. By a fixed form I mean having a definitive version, like an official recording of a song. Music has only been a fixed medium for a very short period.

Thousands of years ago when prehistoric humans sang to each other music was this completely ephemeral, fluid, liquid-like thing that flowed between people. One person would have an idea, they would sing it to another, it would change slightly, and as it flowed around society it evolved.

Of course, all improvised music still does that to an extent, but over the years we became more adept at capturing music. First, there were markings of some kind, which eventually turned into notation, and over time we formalised things and built lots of standards around our music. Only very recently, in a blip of the last 100 or 150 years, have we thought about capturing audio from the environment by recording it and treating those recordings as definitive. I think this way of looking at music history is interesting because recording is such a recent development.

What is interesting now is that we can go beyond recordings and are able to do loads of really exciting and different things with music. What is frustrating is that many of the ways we create, distribute, and experience music are not taking advantage of this. If you look at the ways we capture musical ideas, such as recordings, how we work with them has not changed much since the wax cylinder: something is moving through the air, you capture it in some way, and turn it into a physical or conceptual object. The physical object might be a wax cylinder, a vinyl record, or a CD, and the conceptual object a digital file: an MP3, a WAV, etc. All of those things are effectively the same: an unchangeable piece of audio that has a start, middle, and end.

Certain things have changed over the years, but even though we have gone into the digital realm, huge conceptual changes have not really come about. A lot of my work is about saying, well, once you go into the realm of software, actually this huge expansion of possibilities happens. You can think of the piece of music as software, which opens up a whole new world of opportunities – many of the projects I've been involved in try to take advantage of this.

It can be helpful to think from a perspective that says, ‘Well, as software things could change for each person’: it could be different at different times of the day, change based on your surroundings, the weather, the phase of the moon, how much you are moving, if it’s noisy where you are listening, change based on your driving, what country you are in, your heart rate or brain waves. I have explored all these ideas in my projects. In some ways it’s like how games use music, but in real life. By looking at how we use software we can think wider and consider, well, music could do those things too - could it be a virus for instance? It is quite an interesting thought exercise.
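As a thought experiment, the "music as software" idea above can be sketched as a mapping from listener context to musical parameters. Everything in this Python sketch is invented for illustration: the `Context` fields, the parameter names, and the mapping rules are hypothetical, not drawn from any of the projects mentioned.

```python
# A hypothetical sketch of "music as software": the same piece renders
# differently for each listener by mapping their context to parameters.
# All fields and rules here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Context:
    hour: int             # local time of day, 0-23
    movement: float       # 0.0 (still) to 1.0 (running)
    ambient_noise: float  # 0.0 (silent) to 1.0 (loud)

def music_parameters(ctx: Context) -> dict:
    """Map the listener's current context to musical parameters."""
    return {
        "tempo_bpm": 60 + int(80 * ctx.movement),      # faster when moving
        "brightness": 0.3 if ctx.hour >= 22 else 0.8,  # darker late at night
        "loudness": min(1.0, 0.4 + ctx.ambient_noise), # compete with noise
    }

# A listener running at night in a quiet place:
params = music_parameters(Context(hour=23, movement=0.9, ambient_noise=0.2))
```

A real system would feed such parameters into a synthesis or mixing engine, but even this toy version shows the shift in framing: the composition becomes a function of the listener's life rather than a fixed object.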

There are not many people exploring this; it’s a relatively small niche. Of course, some are looking at generative music more widely. Brian Eno also uses this fluid analogy, and there have been many different explorations of algorithmic music of various types. There has been a little bit of a recent surge around these ideas with Web3 and NFTs, although I think there are a lot of ethical issues with that technology.

Q. A few years ago people thought music was going to move towards becoming apps on phones. I know that you've worked on that with projects like RJDJ and the app you made for the film Inception, and people like Bjork have too. However, we are not at a point where there is mass adoption of these technologies, and therefore, from your perspective, could you say that Spotify is like the wax cylinder, but with a different distribution method?

Spotify, or digital streaming more generally, does things that are different, but above the level of the music itself. So they never go down into the song or the track. They stay at the level of the playlist, the recommendation, or the feed. That level of personalisation.

The wider media platforms which host film, TV, podcasts, and audiobooks have also changed, mainly through adopting newsfeed and personalisation algorithms. I think these create enormous problems, which are not entirely disassociated from the much bigger problems in social media and the internet in general, although that is a much bigger subject. Overall, I think that is where change has happened, but I don't think it is positive.

These changes killed the album. TikTok, for instance, is going further and saying, it doesn't even matter what is in the rest of the song, as long as there's this little fragment that will be catchy as a meme in a 15-second video. One of the most common barriers when trying to innovate in the music industry is the challenge of dealing with inertia, and a lack of willingness for genuine fundamental change.

Q. Let’s discuss fundamental change. Let’s say we looked at the composition itself: how it is created, not as something recorded from this point to this point, but as something generative. Could you envision it being distributed on a mass scale, where everyday people felt that it was relevant? Do you see that coming?

I wouldn't say I see it coming, but that doesn't mean that it is not possible. The reason is that people in the industry don't necessarily want it to happen, or understand how it could happen. Also, I think listeners generally don’t know about generative music, but when they do encounter it they engage with it a lot.

Technologically, there's no reason why fundamental change should not happen now, because an app can be anything. The Spotify app just connects to servers, pulls down chunks of an audio file, puts them back together again, and plays them to you. A more innovative type of app, like Fantom, also pulls down chunks of audio, but it puts them together with algorithms and makes them react and adapt to aspects of your life. It's just a different technology. There are many projects that are exploring these things with varying degrees of success.
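The contrast described here, fixed playback versus algorithmic assembly of the same audio chunks, can be roughly sketched as follows. The chunk names, the "energy" tags, and the matching rule are all hypothetical; real systems like the ones mentioned are far more sophisticated.

```python
# A toy contrast between the two playback models described above:
# a fixed stream plays chunks in order, while an adaptive player
# chooses each chunk to match the listener's state at that moment.
# (Chunk names and "energy" tags are invented for illustration.)

CHUNKS = [
    {"name": "intro_calm", "energy": 0.2},
    {"name": "verse_mid", "energy": 0.5},
    {"name": "drop_high", "energy": 0.9},
]

def fixed_stream(chunks):
    """Conventional streaming: the same order every time."""
    return [c["name"] for c in chunks]

def adaptive_stream(chunks, listener_energies):
    """Adaptive playback: for each moment, pick the chunk whose
    energy tag best matches the listener's measured energy."""
    return [min(chunks, key=lambda c: abs(c["energy"] - e))["name"]
            for e in listener_energies]

order = fixed_stream(CHUNKS)
adapted = adaptive_stream(CHUNKS, [0.1, 0.6, 0.95])
```

The point is that both players consume exactly the same downloaded chunks; only the assembly logic differs, which is why the shift to adaptive music is a software question rather than a distribution one.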

Q. Could you provide some examples, your work included, that you find innovative?

Yeah, so I would say the more innovative projects that have happened outside of conventional streaming are works like Bjork’s Biophilia app-album, and soundtrack-to-your-life projects like Inception The App, the RjDj apps, and the collaborations I've done with Massive Attack for the various Fantom apps. Radiohead did some interesting projects with Universal Everything. Lifescore also makes adaptive soundtracks for your life.

Then you have what is outside of strictly entertainment, like functional music and health applications. I've done projects there with Biobeats and Mindsong, which react to EEG signals from meditation. I'm also working with a company called Wavepaths, which makes adaptive and generative music for mental health therapy with psychedelics. Then you have the many different facets of wellness and functional music, including companies like Endel, who create functional, generative, personalised music.

Q. What are the differences in making installations versus apps?

The biggest difference is that when you make an installation you control the experience completely. For instance, during the Forest for Change project I did at Somerset House recently with Es Devlin and Brian Eno, I had a lot of precise control. As a creator you are there: you hear what the person will hear and what the speakers are like, you know the technology, and you do not have to build it for distribution. When you see people using it you know if they're getting confused or whether they understand the interaction. When you do an installation you have control, similar to a live show.

When you make a distributed experience, especially apps and games, you may not know exactly what the player or person is doing, if they are confused, what state they are in – all of these different things. That's the biggest difference. So it is much more ambitious to make distributed things, but I find it more exciting. When we were working on Inception The App, we got these amazing emails from people telling us about how it created the perfect soundtrack for their life. For instance, when they were skiing down mountains with the music dynamically changing.

For me, those are the really amazing projects. I remember when I used to listen to an old-school iPod shuffle, and it would just happen to play the perfect music as I started to go for a run, which seemed to be the soundtrack for that moment. Lots of the projects I have been involved with are about trying to make that happen, but by intent, and controlling it artistically.

When you hear from someone for whom that happened that's amazing, as they have not gone to an installation where everything is controlled and they have expectations, but instead, it happened in their everyday life. It’s a much more personal interaction in people's lives. Those are the most exciting things, but they are harder and way more ambitious.

Q. Yet, you create new ways for people to experience music.

It is working in such a way that you go off the rails of what’s a ‘normal’ musical experience. Instead of staying on previously laid rail tracks where you can only go where someone has gone before, I throw down the tracks in front of me as I go. It can get a bit intense!

David Bowie said that you need to be a bit out of your depth to be doing something good. It is then that you know you are probably doing something good, or at least interesting. I think the balance is to never be so ambitious that you can’t maintain musicality. Bowie was completely right in that you need to go beyond where you're comfortable. You have to be slightly uncomfortable in the creative process to do something good, and I think he did that at a number of points in his life in various ways, and not just with technology. He completely anticipated many issues around the devaluation of music.

I think it's a privilege to be working in this area because you're seeing the edge of where we are. There will always be challenges and constraints in what can and can't be done, but constraints are what make good creativity.

A lot of the problem with the music-making process at the moment is that we have too many technological choices. You can make a track in a normal DAW with loads of plugins that you could use in many different ways, and then freeze them and turn them into audio and use more plugins on that, and then mix them. The possibilities become overwhelming.

So with all these technological options people often say, ‘OK, well I'm going to limit my creative possibilities artificially’. Artificially bring them down. What I do – which I think is different – is I go to a place artistically and conceptually where it is already very hard to achieve my ideas, so I don't have the freedom to limit myself. I move my creative, conceptual aspirations into a space which is constrained creatively because it's innovative, which I think is a much healthier thing to do than imposing arbitrary, artificial constraints. Although the hard thing is it means you need to become technically aware in order to do it.

The second part of this interview can be found here.

Welcoming writer Dom Aversano: exploring the interaction between technology, music, and globalisation.

Dom Aversano


I would like to briefly introduce myself as I will be creating a series of blog posts for Music Hackspace. I am a composer, percussionist, and writer with a particular interest in how globalisation and technology influence music. As I am convinced of the power of music to change us, I am naturally curious to know what the forces are that change music.


Over the last decade, I have had an increasing number of conversations with people who sense we are living in a time of great change and upheaval technologically, socio-politically, and artistically. I want to delve into this by examining new technologies, interviewing experts, and asking questions about music’s past, present, and its possible futures.


An evolving Music Hackspace


Throughout this decade the Music Hackspace has been an anchoring presence in my life, offering learning, inspiration, and outlets for the technological side of my music. I remember its early days in Hoxton, London, when a handful of people ranging from hip live coders and DJs to synth builders and Theremin enthusiasts would meet in the basement of Troyganic Cafe. It was hard to imagine this morphing into a glamorous residency at the elegant Somerset House Gallery in Central London, but it did, and in style. Now in its current incarnation, it is wonderful to see it open up to a truly global audience, having moved much – though certainly not all – of its activity online during the pandemic.


My journey into coding


While some people come to Music Hackspace from a coding background moving towards music, my trajectory was the opposite. I studied music in a somewhat traditional manner before learning more about the technological possibilities of how to create it. I only truly learned Pure Data by working on The Cave of Sounds installation that the Music Hackspace helped fund and facilitate. It was a great opportunity in my life to learn not just from experts, but also from my peers, as the project involved solving hundreds – if not thousands – of small problems, to realise a bigger vision. 


As a core group of eight people led by Tim Murray-Browne, we created an installation that exceeded our own expectations: it ended up touring the world and is currently being exhibited in Milan, Italy. It was a lesson in the power of teamwork, and in what can happen when you combine skills to build something from a place where imagination takes precedence over experience. Some people in the group had virtually no musical experience, and others – like myself – had virtually no coding experience.


The relation between technology and music


The relationship between technological development and musical progress is as old as time. Scales and chords are essentially algorithms. Cathedrals and churches are reverb chambers. The piano is a revolutionary stringed percussion instrument with effect pedals. One can view church bell ringing and South Indian Carnatic music as early forms of generative music that combine algorithms and aesthetics to produce art. A question that might follow from this is: how is technology changing music now?


Needless to say, AI represents a huge shift, but even before ChatGPT and Midjourney burst onto the scene, things were moving fast. The volatile world of NFTs and Cryptocurrencies attempted to change how art was funded and distributed. The Metaverse offered an alternative reality for artists to share their work. Yet humans are hard to predict, and hype doesn’t necessarily translate into lasting change. Many people’s priorities and beliefs changed during the pandemic, and technology should align with our better natures if it is to help improve the world. 


I look forward to exploring these technologies and topics in much greater detail and interviewing some of the world’s leading experts to find out what they think. 


Until then, if you would like to read other articles I have written you can take a look at my Substack page by clicking here, and you can also listen to some of my music on my Bandcamp page by clicking here. You can also book a session with me through Music Hackspace by heading over here.