What should I learn? A guide to seven powerful programs for music and visuals

Dom Aversano

The British saxophonist Shabaka Hutchings described an approach to learning music that reduces it to two tasks: the first is to know what to practise, and the second is to practise it. The same approach works for coding, though a simple philosophy is not necessarily an easy one. Knowing what to practise can feel daunting amid such a huge array of tools and approaches, making it all the more important to be clear about what you wish to learn, so you can then devote yourself without doubt or distraction to the task of studying.

As ever the most important thing is not the tool but the skills, knowledge, and imagination of the person using it. However, nobody wants to attempt to hammer a nail into the wall with a screwdriver. Some programs are more suited to certain tasks than others, so it is important to have a sense of their strengths and weaknesses before taking serious steps into learning them.

What follows is a summary and description of some popular programs to help you navigate your way to what inspires you most, so you can learn with passion and energy.

Pure Data

Pure Data is an open-source programming language for audio and visual (GEM) coding that was developed by Miller Puckette in the mid-1990s. It is a dataflow language where objects are patched together using cords, in a manner appealing to those who like to conceptualise programs as a network of physical objects. 

Getting started in Pure Data is not especially difficult even without any programming experience, since it has good documentation and plenty of tutorials. You can build interesting and simple programs within days or weeks, and with experience, it is possible to build complex and professional programs.

The tactile and playful process of patching things together also represents a weakness of Pure Data, since once your programs become more advanced you need increasing numbers of patch cables, and dragging hundreds – or even thousands – of them from one place to another becomes monotonous work.

Cost: free

Introductory Tutorial 

Official Website

Max/MSP/Jitter and Max for Live

Max/MSP is Pure Data’s sibling, which makes it quite easy to migrate from one program to the other, but there are significant and important differences too. The graphical user interface (GUI) for Max is more refined and allows for organising patch cords in elegant ways that help mental clarity. With Max for Live you have Max built into Ableton Live – bringing together two powerful programs.

Max has a big community surrounding it in which you can find plenty of tutorials, Discord channels, and a vast library of instruments to pull apart. Just as Pure Data has GEM for visualisation Max has Jitter, in which you can create highly sophisticated visuals. All in all, this represents an incredibly powerful setup for music and visuals.

The potential downsides are that Max is paid software, so if you’re on a small budget Pure Data might be better suited. It also suffers from the same patch-cord fatigue as Pure Data, where you can end up attaching cords from one place to another in a repetitive manner.

Cost: $9.99 per month / $399 permanent licence or $250 for students and teachers

Introductory Tutorial

Official Website

SuperCollider

SuperCollider is an open-source language developed by James McCartney and first released in 1996. It is a more traditional text-based programming language than either Pure Data or Max. If you enjoy coding it is an immensely powerful tool where your imagination is the limit when it comes to sound design, since with as little as a single line of code you can create stunning musical results.
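
To give a flavour of that economy, here is a minimal one-liner – my own sketch rather than anything canonical – which you can evaluate in the SuperCollider IDE once the audio server is booted:

// a sine tone whose pitch jumps to a new random value eight times a second
{ SinOsc.ar(LFNoise0.kr(8).range(200, 800), 0, 0.2) }.play;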

However, SuperCollider is difficult, so if you have no programming experience expect to put in many hours before you feel comfortable. Its documentation is inconsistent and written in a way that sometimes assumes a high level of technical understanding. Thankfully, there is a generous and helpful online forum that is very welcoming to newcomers, so if you are determined to learn, do not be put off by the challenge.

One area where SuperCollider lags behind Max and Pure Data is its lack of a sophisticated built-in environment for visuals, and although you can use it to create GUIs, they do not have the same elegance as those in Max.

Cost: free

Introductory Tutorial 

Official Website

TidalCycles

Though built on top of SuperCollider, TidalCycles is much easier to learn. Designed for the creation of algorithmic music, it is popular in live coding and algorave scenes. The language is intuitive and uses musical terminology in its syntax, giving people with an existing understanding of music an easy way into coding. A community has built up around it, complete with Discord channels and an active blog.
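
To give a taste of that syntax, here is a minimal sketch – assuming a standard TidalCycles and SuperDirt installation with the default sample library – where each line is evaluated separately in a connected editor:

-- alternate kick and snare across one cycle
d1 $ sound "bd sn bd sn"

-- layer a quieter hi-hat line, eight hits per cycle
d2 $ sound "hh*8" # gain 0.8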

The downsides to TidalCycles are that installation can be difficult, and that it is a somewhat specialist tool which does not have capabilities as broad as those of the aforementioned programs.

Cost: free

Introductory Tutorial 

Official Website

P5JS

P5JS is an open-source JavaScript library that is a tool of choice for generative visual artists. The combination of a gentle learning curve and the ease of running it straight from your browser makes it easy to incorporate into one’s practice, either as a simple tool for sketching out visual ideas or as something much more powerful, capable of generating world-class works of art.
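
To give a sense of that gentle learning curve, here is a minimal sketch – my own illustration – which can be pasted into the p5.js web editor and run immediately:

// paint translucent circles that trail the mouse
function setup() {
  createCanvas(400, 400);
  background(220);
}

function draw() {
  noStroke();
  fill(30, 80, 200, 50); // translucent blue
  circle(mouseX, mouseY, 40);
}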

It is hard to mention P5JS without also mentioning Daniel Shiffman, who has rightly earned a reputation as one of the most charismatic, humorous, and engaging programming teachers. He is the author of a fascinating book called The Nature of Code, which takes inspiration from natural systems and, like P5JS, is open-source and freely available.

Cost: free

Introductory Tutorial

Official Website

Tone.js

Like P5JS, Tone.js is a JavaScript library, one that opens the door to a whole world of musical possibilities in the web browser. In the words of its creators it ‘offers common DAW (digital audio workstation) features like a global transport for synchronizing and scheduling events as well as prebuilt synths and effects’ while allowing for ‘high-performance building blocks to create your own synthesizers, effects, and complex control signals.’
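
As a small illustration of those building blocks, here is a sketch based on the library’s documented basics – the button element is my own assumption, since browsers require a user gesture before audio can start:

// after a click, start the audio context and play middle C for an eighth note
document.querySelector('button').addEventListener('click', async () => {
  await Tone.start();
  const synth = new Tone.Synth().toDestination();
  synth.triggerAttackRelease('C4', '8n');
});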

Since it is web-based, one can get a feel for it by delving into some of the examples on offer.

Cost: free

Introductory Tutorial

Official Website

TouchDesigner

In TouchDesigner you can create magnificent live 3D visuals without the need for coding. Its visual modular environment allows you to patch together modules in intuitive and creative ways, and it is easy to input MIDI or OSC if you want to add a new visual dimension to your music. To help you learn there is an active forum, live meetups, and many tutorial videos. While the initial stages of using TouchDesigner are not difficult, one can become virtuosic with it, even writing your own code in the programming language Python.
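
For a glimpse of that Python layer, here is a tiny sketch – the operator name (‘noise1’) and its Period parameter are assumptions that would need to match the operators in your own network:

# run from a Text DAT: animate the Period parameter of a Noise TOP over time
n = op('noise1')
n.par.period = 1 + (absTime.seconds % 5)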

There is a showcase of work made using TouchDesigner on their website which gives you a sense of what it is capable of.

Cost: Pro licence $2,200 / commercial licence $600 / free for personal and non-commercial use.

Introductory Tutorial

Official Website

‘Why I started Music Hackspace’: Jean-Baptiste Thiebaut

Dom Aversano

How I stumbled across Music Hackspace is a hazy memory. I remember turning up at a nondescript industrial estate in Hoxton, the hip part of East London. When I finally found the right door I walked into a room mid-presentation, momentarily drawing the gaze of a dozen or so people who were casually sitting around, listening in a state of deep concentration. They had the appearance of engineers, artists, academics, eccentrics, and hobbyists.

Afterwards, there was socialising, with beer and an English idea of what constitutes pizza. The studio had various bits of hardware scattered around and posters on the wall for the Anarchist Bookfair. I spotted one person who had set up a turntable that spun colourful bespoke mats, whose patterns were read by a camera and turned into sound. Its maker was a softly spoken, eloquent man with a French accent, dressed a bit more smartly than everyone else. It turned out that Jean-Baptiste Thiebaut, or JB, had started the group, and was knowledgeable about both music and technology.

Jump forward a little more than a decade and the Music Hackspace has grown and gone through many changes. During that period I remember one meeting in the basement of Troyganic Cafe where about three people turned up, and I thought to myself, ‘This is finished’. JB maintained a more philosophical approach, shrugging it off as a down phase, and he was vindicated a couple of years later when the Music Hackspace found a home for itself in the prestigious Somerset House Gallery in central London – a short walk from Big Ben.

I have known JB for more than a decade now, yet in all this time I never knew what motivated him to start Music Hackspace, or the details of his background. So I thought an interview would be a good opportunity to delve into this.

Can you describe your background and what drew you to London?

I was born in Normandy, France, the son of a farmer. My father farmed cereals. He had fields of wheat, peas, and barley, and later started his own brewery. I got into tech, engineering, and music, and became passionate about research. I went to French conferences on the topic but I felt that the world was bigger and started following international conferences. 

I needed to speak English and be within an Anglo-Saxon community. London was close, and I had funding for research. I started in 2005 at the Centre for Digital Music at Queen Mary, and I stayed in London after that, working as a software developer at Focusrite. I never returned to France – I love London! In my research centre my colleagues were from all over the world, and I loved that diversity.

There were a lot of people who also came from small villages in their countries and wanted to see the world, and wanted to be where it’s at. The thought of returning to a society more centred around its own culture and less towards a global culture did not appeal. I wanted diversity. 

What inspired you to create Music Hackspace? 

It was 2011 and I was fresh off my PhD at Queen Mary, but I didn’t really know what to do. I was full of ideas, and I had recently become the innovation manager at Focusrite. A lot of new things were happening in the music manufacturing industry at that time: Ableton had released Push, Native Instruments the Maschine, synthesisers no longer cost the price of a house as they had 30 years before, and music software was booming. It was an interesting time to think about new products.

I ventured into the basement of Focusrite one day, and noticed that a lot of prototyping equipment was going to scrap: PCBs of synthesisers or prototypes of Launchpad that would never be used. So I went to my manager and asked permission to repurpose it.

I was a member of the London Hackspace at the time, and I sent a message asking if someone would be interested in tinkering with those bits of equipment going to scrap. Two people responded, Martin Klang and Philip Klevberger. They came, we filled up the trunk, and we said: “OK, let’s meet next Tuesday in Hoxton at the London Hackspace and invite anyone who wants to have a go, and we will just hack for fun that evening”. So we did. And the Music Hackspace was born there and then!

How was the evening? 

Twenty people showed up from all walks of life: musicians wanting to create things to support their career and their artistic vision, engineers working in finance or legal firms – but musicians as well – who wanted their skills as engineers to be used for the arts. So you had these two groups that wanted something and they could achieve their goals by collaborating. 

So I had this eureka moment thinking ‘this is great, I’m myself on both sides because I’m trained as a composer and trained as an engineer’. You can work on your art, or you can work on building tools for artists; bringing the two together was my goal. I felt I had found a kind of home. So I decided to honour the fact that we came from the London Hackspace and called it Music Hackspace.

How did things progress from there?

Focusrite liked what I was doing and was very supportive, giving me the afternoon off to travel to London and a small budget to buy pizzas for everybody. Our meetings grew in popularity and eventually, we had to stop meeting at the London Hackspace because we were making too much noise. Once a week we would take over the Hackspace with a presentation and Q&A with researchers, artists and engineers. When we had to move, Martin Klang – who was doing all kinds of interesting things like building open-source effect pedals – invited us to his studio which was next door, and was big enough to host about 60 people, and that lasted until he moved out.

The Music Hackspace was initially motivated by my curiosity about innovation in music, which I think stemmed from my education and the work I was doing at Focusrite. It was not meant as a company or anything. It was a chance encounter between passionate people, on the lookout for new ways of being expressive; new ways of merging tech and art. 

I’m curious that you were a member of the London Hackspace, what drew you to that? Did hacking culture appeal to you? 

Yes, the hacking and DIY culture was very appealing to me. I finished my PhD with a lot of theoretical knowledge but not much practical knowledge. I wanted to tinker, and the London Hackspace was a very welcoming place with a lot of equipment and all sorts of fascinating, exuberant folks with wacky ideas and tremendous knowledge of electronics. I had at this point no experience whatsoever in DIY electronics, but I got into it. Arduino was just starting, and had a lot of hype. I found it fascinating that you could build your own embedded computer and augment instruments with a portable microchip that could analyse signals and embed intelligence into instruments. 

I found the values appealing too, and it was important to keep them as the community developed. The Music Hackspace was this free space that Martin and I hosted every Thursday night, to exchange ideas, get inspired, and collaborate very freely. We had Max meetups where people would come and help each other. Someone would show a project on their screen and say, ‘Hey, this is what I’m working on. This is my problem’. Other people would say, ‘Oh, here’s an idea that might help’.

Members were naturally collaborating over the years, with a few Kickstarter projects coming from the members. The Hoxton OWL guitar pedal, TouchKeys, and then Bela all involved members of the Music Hackspace. Their Kickstarter videos were all filmed by Susanna Garcia, who was a director of Music Hackspace from 2014 to 2019, and runs her own film company, Mind The Film. Slowly our network grew, so that when artists were visiting London from abroad we would ask them to come and talk about their work. Over the years we ran over 800 events, and many of our members and speakers went on to build great careers in the music industry as researchers, entrepreneurs, artists or developers. Tadeo Sendon was also a co-director and played a major role during this time, leading the curation of sound art events at Somerset House, securing our first grants, and building connections in London’s artistic community.

In 2020, Music Hackspace started to teach online courses – how did that happen?

In 2019, I decided to commit to the Music Hackspace full-time and turn what was a hobby into a business. By then I had 10 years of experience working for various music companies, and I wanted to channel all that experience into developing the community. I had a business plan ready for us to have our own space and run events, but when COVID happened those plans went out the window! The only way we could run any event was to host it online, and we started doing that.

I had just finished working at Cycling ’74, and Darwin Grosse agreed to sponsor Max meetups and free sessions to teach Max to beginners. That was a huge boost for our online courses, because we didn’t have much of an audience outside of London, let alone the UK. Later that year, TouchDesigner also offered to sponsor meetups and courses, and more partners followed during COVID.

I recently interviewed the composer and programmer Robert Thomas, who sees music as moving away from traditional fixed recordings towards what he describes as a more liquid existence facilitated by software, where music can do all the things software can do. I’m curious to what degree you share that vision.

The history of the evolution of music was part of my research thesis, in particular retracing the convergence of technology with the complexity of music. There is a direct correlation between the complexity of the tools we use and the complexity of music. Notation was designed in the 9th century to record a single melody so that it could be fixed, transmitted and archived. It was simple, just one melody line with a rudimentary notion of rhythm. And then in the 12th century, it started to become more complex, with polyphony. Then the printing press arrives, and that changes everything. Suddenly scores are everywhere, and people sell the scores, and their ubiquity allows more people to play music, and for music to be shared across countries. The printing press was a massive boost for the dissemination of music. 

Fast forward to the 21st century and computers are now part of every aspect of the music creation process – in art gallery installations, live concerts, and most music experiences. As to whether generative music is the future? Yes, but I think it’s one of its many futures. Music that evolves based on your breathing, your surroundings, the time of day, and other factors definitely has a place in this world!

Kaija Saariaho’s lasting legacy on electronic music

Dom Aversano

Kaija Saariaho. Photo © Christophe Abramowitz, courtesy of Saariaho70.fi

The recent death of the Finnish composer Kaija Saariaho is a great shock and loss for music, as she was so admired for her pioneering spirit and irreplaceably original voice. In 2019, BBC Music Magazine polled leading composers, and Saariaho emerged as the world’s greatest living composer. Her diverse repertoire touched upon many fields of music-making, not least electronic music and computer-assisted composition.

Despite having admired the music of Saariaho for many years it was only in 2021, two years prior to her death, that I had the opportunity to hear her music performed live for the first time. Though I had previously enjoyed listening to recordings of her orchestral compositions, I had a sense this was music that demanded a live setting to bring it truly to life. 

A ripple of excitement travelled through the members of Valencia’s orchestra at the prospect of playing this music, given they do not often have the opportunity to perform pieces by living composers. Despite the pandemic, and the repeated refrain that contemporary music just doesn’t fill concert halls, two-thirds of the seats were filled with mask-wearing audience members.

The music took on a new life when performed. Far from difficult, it felt enticing and mesmeric, constructed from a sonic language whose subtle logic could be learned in an autodidactic manner, through osmosis and exposure. That evening, I left the concert hall with a strong sense I had only dipped my toe in the music, and that it deserved and required an entire festival. 

If one agrees with the view that her instrumental music thrives in a concert hall, this is not necessarily the case for her electronic music, which can be fully enjoyed with a decent pair of headphones or speakers. While I do not pretend to be an expert on Saariaho’s music, I have revisited some of her compositions since her death, focusing on a formative period when she had relocated from her native Finland to France and was working at the influential Parisian research centre IRCAM (Institut de Recherche et Coordination Acoustique/Musique). I will share my thoughts and reflections on three of Saariaho’s compositions from this period in the 1980s.

1. Vers le blanc (1982)

As enigmatic music goes, this composition is right up there: no complete recording of it has ever been published, and it has been performed at only a select number of concerts. However, the minute score gives us some conception of the music.

The score describes a transformation from one tone cluster to another (ABC -> DEF) over a gradual glissando (a glide from one pitch to another) that lasts an unusual fifteen minutes. The influence of French Spectralist music on a younger Saariaho is obvious, but to my mind, she taps into a broader zeitgeist. There is a similarity in the combination of musical simplicity and conceptual radicalism to John Cage’s 1952 composition 4′ 33″ and Steve Reich’s 1965 tape piece It’s Gonna Rain. These compositions are not simply sound worlds (or absences) to be enjoyed, but philosophical questions about the nature and direction of music. 

Despite the fact that no complete recording of the composition exists, in 2017 Landon Morrison, College Fellow in Music Theory at Harvard University, visited IRCAM in France and discovered some original audio of the composition, of which three excerpts can be heard here. Saariaho chose to use synthetic human voices, and is quoted as saying of the piece:

“(it) create(s) the illusion of an endless human voice, sustained and ‘non-breathing,’ which at times departs from its physical model”

I still wanted to hear some approximation of the composition in its entirety, so I could not resist programming something in SuperCollider. While the code below is in no sense an accurate representation of Saariaho’s work, not least because its timbre is made from sine waves rather than synthesised voices, it does give some sense of the gradual shift that occurs within her composition.

Not wishing to infringe copyright or create an inaccurate recording, I am sharing this as code that can be run in SuperCollider.

(
// Two three-note clusters: C3, A3, B3 gliding to E3, D3, F3.
var clusterStart = [48, 57, 59].midicps;
var clusterEnd = [52, 50, 53].midicps;
var duration = 60 * 15; // fifteen minutes, the approximate length of the piece
{
	Pan2.ar(
		SinOsc.ar(
			Line.ar(clusterStart, clusterEnd, duration) // one long glissando per voice
		).sum * 0.3,
		2.0.rand - 1 // a random fixed stereo position
	)
}.play;
)

2. Lichtbogen (1985/86) 

While this composition does not use electronic sounds, it was composed with the help of computers. Saariaho manages the impressive feat of bringing the often seemingly disparate worlds of computers and nature into harmony. Of the name of the composition, she wrote that it ‘stems from Northern Lights which I saw in the Arctic sky when starting to work on this piece’. The sense of Finland’s deep nature combines with the exploratory intellectualism of Paris’s IRCAM, where computer music was researched and developed. She describes using two systems for harmony and rhythm, FORMES and CRIME, and explains how computers assisted her composing: ‘These programmes allowed me to construct interpolations and transitions for different musical parameters… The calculated results have then been transcribed with approximations, which allows them to be playable to music notation.’
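
Out of curiosity about what such an interpolation might look like in code, here is a trivial sketch in SuperCollider – in no way a reconstruction of FORMES or CRIME – that interpolates a single musical parameter, a tempo, across nine evenly spaced steps:

// nine values from 60 to 120 BPM
Array.interpolation(9, 60, 120).postln;
// -> [ 60.0, 67.5, 75.0, 82.5, 90.0, 97.5, 105.0, 112.5, 120.0 ]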

3. Stilleben (1987/88)  

I have listened to this composition many times and feel it is simultaneously direct and unknowable. The directness is in its autobiographical nature, describing a person living away from their native country, surrounded by three European languages – French, German, and Finnish – each of which had a personal significance to Saariaho. Similarly, the sounds of trains symbolise the cosmopolitan life of an internationally successful composer. The piece taps into a larger tradition of trains in music, ranging from Take the ‘A’ Train, written by Billy Strayhorn for Duke Ellington’s orchestra, to Brazilian composer Heitor Villa-Lobos’s The Little Train of the Caipira, as well as New York composer Steve Reich’s Different Trains, which was written in the same year.

The unknowable aspect of the music is the manner in which it has been arranged. It feels somewhat dreamlike, with its radiophonic nature lending it a cinematic element. Yet the recurrence of strings throughout appears to root the piece in the concert tradition. The recent rise of nationalism across Europe makes the piece feel almost controversial and political, as though it were a defence of internationalism, but I have seen no evidence of that being its original intention. Regardless of one’s interpretation, there is an expertise and maturity at work in the piece that is inspiring and beautiful, demonstrative of someone in possession of immense technical and artistic ability. 

On the future of music: an interview with composer Robert Thomas (part 2)

Dom Aversano

This is part 2 of an interview with the composer Robert Thomas. You can read the first part here.

Q. I associate you with Pure Data. Is it still your primary tool? If so, to what extent do you think tools shape one's work?

I use Pure Data a lot because it's a universally deployable tool, and you can make installations and all sorts of bespoke things with it. It can be used in apps, game engines, or the web. Also, as it's open source, it doesn't have any proprietary licences associated with it.

Everything I work with is either my own library, which I license to creators, or it's open source, so BSD-licensed. It's easy to work with from a business perspective, and it's well-supported and incredibly stable. From a creative perspective, I use it because it's real-time. I think to do creative things musically and sonically you have to be working in a real-time environment, not a compiled environment. It's always good to be open to happy accidents, which can happen when you're working in real-time, but are less likely when you're not. So I don't like compiling when I'm working.

And then, what was the second part of your question?

Q. The extent to which the tool may or may not be shaping your work.

I think in some ways it doesn't shape it at all because PD is just Digital Signal Processing and you can do anything you want with it. When you make a new patch it is really just a white page - it's very open, flexible, and at the same time, terrifyingly, overwhelmingly, even dangerously open-ended.

Therefore I've developed a good 'muscle of restraint' to control exactly what I want to do. You don't want to go down all kinds of undefined meanderings in PD. It’s not the place to do that. It can be interesting to try new things and be open to accidents, but there's a balance: you need a strong idea before you start coding, because the program is so open.

The constraints I was talking about earlier were not constraints about what can be done with the software, they are about what is possible with the wider technology. Some aspects of personalisation can be very difficult to know, such as with contextual and emotional detection, or biometrics. There are limits to what we can and can't reliably understand, which provide creative constraints and require you to work within a framework that is sometimes relatively simple.

A good example would be when you are working with an accelerometer to understand how someone is moving, or with a GPS to work out how fast someone is going. There is only a certain amount of fidelity you can get from that. There are practical considerations, like if you have the GPS whacked way up in accuracy on a device it's going to drain the battery, and the user is quickly going to get really annoyed with the experience.

So you need to say, 'Well, OK, we're going to make a judgement that it is OK after this amount of movement, or we're going to look at the GPS over this amount of time and decide when we think they are really moving, which will mean there's going to be a sudden change in the state of the user'. What could we do with an accelerometer? We can look at how it is changing over time and try to use step detection if they are walking. There is a lot of work in getting such algorithms accurate, which places a boundary around how you creatively respond. I think that is what shapes what you do creatively.

A lot of the time what I am doing inside of DSP is relatively simple, and I try to make it as elegantly simple as I can, from the perspective of stability and reliability but also CPU and memory usage. The most desirable systems are actually the most simple and elegant. Those are the golden systems.

Another issue with tools, which happens in DAWs but especially inside of programming, is that there is satisfaction in solving complex tasks or challenges in clever ways. It's a very dangerous thing to get sucked into if it takes you away from creating a good musical experience, and one major problem I see in the space with a lot of projects – and I've been sucked into this on some projects as well – is that you try to create something that's a really clever system, but it sounds crap. Compare that to a studio production where someone is working with off-the-shelf tools in a DAW, using amazing plugins and rendering it down loads of times, with intricate and polished production and writing. That's what we have to compete with. We have to be at the same level as that, and better, in real-time.

So if a system is super complex and really rewarding as a programming project but sounds crap, that's no good, because music lives or dies on emotional experience. If people don't enjoy it as a musical experience, it doesn't matter how clever it is. I see that as the biggest danger in this space.

Q. That's one risk I see with generative music. Algorithms are generating what is being heard, which is different from a live performer where it comes directly from them. With generative music there is the intermediary of the algorithm, and a risk of things sounding hollow and dehumanised. How does one get that deep emotional experience into the work?

Well, I think that's the art of creating algorithms from an artistic and humanist perspective, which has nothing to do with what is happening in machine-learning music at the moment. It is absolutely the opposite. I find it frustrating that the term generative AI has been co-opted by the machine-learning community, because the approach to generative music that Brian Eno, Autechre, and I take is to craft algorithms by hand. It is the polar opposite of throwing everything into a massive deep-learning network and never knowing what is happening inside it, which is what deep-learning language models do. This space is about carefully crafting algorithms to embody as much of yourself and human expression as possible. That is what I am about.

I've heard Brian Eno talking about Steve Reich's influence on him and how he crafted the music through systems. Reich was very specific about it, which created this very interesting possibility of outputs as a generative system. So it is about the seeds and the rules.

When you're crafting things you need to listen to them for enormous amounts of time to hear all these different states, making sure it has an emotional and artistic impact. I think where things go wrong is when you try to either make generic algorithms that will make generic hip-hop, EDM, or ambient music, or even worse, a rule-based system that can make all kinds of different music. When you are that broad there is never going to be any specific quality to it. The worst is when you give up all control and completely entrust it to a network inside the system, such as deep-learning and large-language models where nobody understands what is inside the system. We are trying to make systems to understand what is inside them! How can that be an artistic endeavour? An artistic endeavour is a process of trying things and learning them. If systems are impenetrable I think it's very challenging to have an artistic interaction with them. I believe there are ways that machine learning will be helpful, but a fully automated, unsupervised, completely autonomous system is not particularly creative.

To your question, I think that is the important thing. We need to incorporate many aspects of what we do into the system; things we all do. When I do workshops with musicians I ask: when you are playing, what are you doing? OK, you're doing these types of patterns rhythmically. Oh, you're doing these kinds of intervals. You're doing these types of phrasings. You're always swinging in this way. You're like, 'I'm not, not all of the time', but when you're doing that musical thing, that idea, what are you doing?

These are the things we need the algorithm to do: to distil down a process which is both the artist and an extension of them. It embodies many aspects of the artist, but it can do things that no artist can ever do: create live music for 1000 people all over the world all at once, which is different for each person. Those are the possibilities I'm interested in.

Here are some links if you are interested in knowing more about Robert Thomas’s work:

http://robertthomassound.com

https://www.instagram.com/robertthomascomposer/

On the future of music: an interview with composer Robert Thomas (Part 1)

Dom Aversano

Five years ago I attended an event at South London’s experimental venue the Iklectik Art Lab. The night was organised by Hackoustic, a group of music hackers who use acoustic objects in their work and organise events for artists to make presentations and share ideas.

The headline speaker that night was the composer and audio programmer Robert Thomas. Despite him having worked with the likes of Hans Zimmer, Massive Attack, and the Los Angeles Philharmonic Orchestra, it was my first time encountering his work. I found the presentation refreshing and original, as he expounded a unique take on a potential future of non-linear, non-deterministic, and more responsive and dynamic music.

I took no notes during the presentation, and when I later searched for a good outline of Robert’s thinking I couldn’t find anything that clearly captured it. So I was delighted by the opportunity to interview Robert for Music Hackspace.

In this talk, we discuss the idea that digital music, rather than being represented as ‘frozen’ recordings, could potentially be expressed better through more ‘liquid’ and dynamic algorithms. What follows is a lightly edited transcript of the first part of our conversation.

Q. You have an interesting general philosophy about musical history, could you describe it?

We are often used to thinking about music in a particular way, as a fixed form, but it does not need to be the case. By a fixed form I mean having a definitive version, like an official recording of a song. Music has only been a fixed medium for a very short period.

Thousands of years ago when prehistoric humans sang to each other music was this completely ephemeral, fluid, liquid-like thing that flowed between people. One person would have an idea, they would sing it to another, it would change slightly, and as it flowed around society it evolved.

Of course, all improvised music still does that to an extent, but over the years we started to get more adept at capturing music. First there were markings of some kind, which eventually turned into notation, and over time we formalised things and built lots of standards around our music. Only very recently – in a blip of the last 100 or 150 years – have we thought about capturing audio from the environment by recording it and treating these recordings as definitive. I find this way of looking at music history interesting because recording is such a recent development.

What is interesting now is that we can go beyond recordings and are able to do loads of really exciting and different things with music. What is frustrating is that many of the ways we create, distribute, and experience music are not taking advantage of it. If you look at the ways we capture musical ideas, such as recordings, how we work with them has not changed much since the wax cylinder: something is moving through the air, you capture it in some way, and turn it into a physical or conceptual object. The physical object might be a wax cylinder, a vinyl, or a CD, and the conceptual object a digital file, an MP3, WAV etc. All of those things are effectively the same: an unchangeable piece of audio that has a start, middle, and end.

Certain things have changed over the years, but even though we have gone into the digital realm, huge conceptual changes have not really come about. A lot of my work is about saying, well, once you go into the realm of software, actually this huge expansion of possibilities happens. You can think of the piece of music as software, which opens up a whole new world of opportunities – many of the projects I've been involved in try to take advantage of this.

It can be helpful to think from a perspective that says, ‘Well, as software things could change for each person’: it could be different at different times of the day, change based on your surroundings, the weather, the phase of the moon, how much you are moving, if it’s noisy where you are listening, change based on your driving, what country you are in, your heart rate or brain waves. I have explored all these ideas in my projects. In some ways it’s like how games use music, but in real life. By looking at how we use software we can think wider and consider, well, music could do those things too - could it be a virus for instance? It is quite an interesting thought exercise.

There are not many people exploring this; it’s a relatively small niche. Of course, some are looking at generative music more widely. Brian Eno also uses this fluid analogy, and there have been many different explorations of algorithmic music of various types. There has been a little bit of a recent surge around these ideas with Web3 and NFTs, although I think there are a lot of ethical issues with that technology.

Q. A few years ago people thought music was going to move towards becoming apps on phones. I know that you've worked on that with projects like RJDJ and the app you made for the film Inception, and people like Bjork have too. However, we are not at a point where there is mass adoption of these technologies, and therefore, from your perspective, could you say that Spotify is like the wax cylinder, but with a different distribution method?

Spotify, or digital streaming more generally, does things that are different, but above the scale of the music itself. They never go down into the song or the track. They stay at the level of the playlist, recommendation, or feed. That level of personalisation.

The wider media platforms which host film, TV, podcasts, and audiobooks have also changed, mainly through adopting newsfeed and personalisation algorithms. I think these create enormous problems, which are not entirely disassociated from the much bigger problems in social media and the internet in general, although that is a much bigger subject. Overall, I think that is where change has happened, but I don't think it is positive.

These changes killed the album. TikTok, for instance, is going further and saying it doesn't even matter what is in the rest of the song, as long as there's a little fragment that will be catchy as a meme in a 15-second video. One of the most common barriers when trying to innovate in the music industry is the challenge of dealing with inertia, and a lack of willingness for genuine fundamental change.

Q. Let’s discuss fundamental change. Say we looked at the actual composition itself – how it is created, not as something recorded from one point to another, but as something generative. Could you envision that being distributed on a mass scale, where everyday people felt it was relevant? Do you see that coming?

I wouldn't say I see it coming, but that doesn't mean that it is not possible. The reason is that people in the industry don't necessarily want it to happen, or understand how it could happen. Also, I think listeners generally don’t know about generative music, but when they do they engage a lot with it.

Technologically, there's no reason why fundamental change should not happen now because an app can be anything. The Spotify app just connects to servers and pulls down chunks of an audio file, puts them back together again, and plays them to you. A more innovative type of app, like Fantom, also pulls down chunks of audio, but it puts it together with algorithms and makes it react and adapt to aspects of your life. It's just a different technology. There are many projects that are exploring these things with varying degrees of success.

Q. Could you provide some examples, your work included, that you find innovative?

Yeah, so I would say the more innovative projects that have happened outside of conventional streaming are works like Bjork’s Biophilia app-album, and soundtrack-to-your-life type projects like Inception The App, the RjDj apps, and the collaborations I've done with Massive Attack for the various Fantom apps. Radiohead did some interesting projects with Universal Everything. Lifescore also makes adaptive soundtracks for your life.

Then you have what is outside of strictly entertainment, like functional music and health applications. I've done projects there with Biobeats and Mindsong, which react to EEG signals from meditation. I'm also working with a company called Wavepaths, which makes adaptive and generative music for mental health therapy with psychedelics. Then you have the many different facets of wellness and functional music, including companies like Endel, who create functional, generative, personalised music.

Q. What are the differences in making installations versus apps?

The biggest difference is that when you make an installation you control the experience completely. For instance, during the Forest for Change project I did at Somerset House recently with Es Devlin and Brian Eno, I had a lot of precise control. As a creator you are there, you hear what the person will hear and what the speakers are like, you know the technology and do not have to build it for distribution. When you see people using it you know if they're getting confused or whether they understand the interaction. When you do an installation you have control, similar to a live show.

When you make a distributed experience, especially apps and games, you may not know exactly what the player or person is doing, if they are confused, what state they are in – all of these different things. That's the biggest difference. So it is much more ambitious to make distributed things, but I find it more exciting. When we were working on Inception The App, we got these amazing emails from people telling us about how it created the perfect soundtrack for their life. For instance, when they were skiing down mountains with the music dynamically changing.

For me, those are the really amazing projects. I remember when I used to listen to an old-school iPod shuffle, and it would just happen to play the perfect music as I started to go for a run, which seemed to be the soundtrack for that moment. Lots of the projects I have been involved with are about trying to make that happen, but by intent, and controlling it artistically.

When you hear from someone for whom that happened that's amazing, as they have not gone to an installation where everything is controlled and they have expectations, but instead, it happened in their everyday life. It’s a much more personal interaction in people's lives. Those are the most exciting things, but they are harder and way more ambitious.

Q. Yet, you create new ways for people to experience music.

It is working in such a way that you go off the rails of what’s a ‘normal’ musical experience. Instead of staying on previously laid rail tracks where you can only go where someone has gone before, I throw down the tracks in front of me as I go. It can get a bit intense!

David Bowie said that you need to be a bit out of your depth to be doing something good. It is then that you know you are probably doing something good, or at least interesting. I think the balance is to never be so ambitious that you can’t maintain musicality. Bowie was completely right in that you need to go beyond where you're comfortable. You have to be slightly uncomfortable in the creative process to do something good, and I think he did that at a number of points in his life in various ways, and not just with technology. He completely anticipated many issues around the devaluation of music.

I think it's a privilege to be working in this area because you're seeing the edge of where we are. There will always be challenges and constraints in what can and can't be done, but constraints are what make good creativity.

A lot of the problem with the music-making process at the moment is that we have too many technological choices. You can make a track in a normal DAW with loads of plugins that you could use in many different ways, and then freeze them and turn them into audio and use more plugins on that, and then mix them. The possibilities become overwhelming.

So with all these technological options people often say, ‘OK, well I'm going to limit my creative possibilities artificially’. Artificially bring them down. What I do – which I think is different – is I go to a place artistically and conceptually where it is already very hard to achieve my ideas, so I don't have the freedom to limit myself. I move my creative, conceptual aspirations into a space which is constrained creatively because it's innovative, which I think is a much healthier thing to do than imposing arbitrary, artificial constraints. Although the hard thing is it means you need to become technically aware in order to do it.

The second part of this interview can be found here.

Welcoming writer Dom Aversano: exploring the interaction between technology, music, and globalisation.

Dom Aversano

I would like to briefly introduce myself, as I will be creating a series of blog posts for Music Hackspace. I am a composer, percussionist, and writer with a particular interest in how globalisation and technology influence music. As I am convinced of the power of music to change us, I am naturally curious about the forces that change music itself.

Over the last decade, I have had an increasing number of conversations with people who sense we are living in a time of great change and upheaval technologically, socio-politically, and artistically. I want to delve into this by examining new technologies, interviewing experts, and asking questions about music’s past, present, and its possible futures.

An evolving Music Hackspace

Throughout this decade the Music Hackspace has been an anchoring presence in my life, offering learning, inspiration, and outlets for the technological side of my music. I remember its early days in Hoxton, London, when a handful of people ranging from hip live coders and DJs to synth builders and Theremin enthusiasts would meet in the basement of Troyganic Cafe. It was hard to imagine this morphing into a glamorous residency at the elegant Somerset House Gallery in Central London, but it did, and in style. Now in its current incarnation, it is wonderful to see it open up to a truly global audience, having moved much – though certainly not all – of its activity online during the pandemic.

My journey into coding

While some people come to Music Hackspace from a coding background and move towards music, my trajectory was the opposite. I studied music in a somewhat traditional manner before learning about the technological possibilities for creating it. I only truly learned Pure Data by working on The Cave of Sounds, an installation that the Music Hackspace helped fund and facilitate. It was a great opportunity to learn not just from experts but also from my peers, as the project involved solving hundreds – if not thousands – of small problems to realise a bigger vision.

As a core group of eight people led by Tim Murray-Browne, we created an installation that exceeded our own expectations: it ended up touring the world, and is currently being exhibited in Milan, Italy. It was a lesson in the power of teamwork, and in what can happen when you combine skills to build something from a place where imagination takes precedence over experience. Some people in the group had virtually no musical experience, and others – like myself – had virtually no coding experience.

The relation between technology and music

The relationship between technological development and musical progress is as old as time. Scales and chords are essentially algorithms. Cathedrals and churches are reverb chambers. The piano is a revolutionary stringed percussion instrument with effect pedals. One can view church bell ringing and South Indian Carnatic music as early forms of generative music that combine algorithms and aesthetics to produce art, as the sketch below illustrates. A question that might follow from this is: how is technology changing music now?
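
To make that concrete, here is a small SuperCollider sketch – a toy example of my own – of a ‘plain hunt’ on four bells, the simplest change-ringing pattern, in which the algorithm is nothing more than swapping adjacent pairs of bells at alternating positions:

(
var row = [1, 2, 3, 4]; // the bells in 'rounds' order
var rows = List[row];
8.do { |i|
	row = row.copy;
	// swap adjacent pairs, starting from position 0 or 1 on alternate changes
	(i % 2).forBy(row.size - 2, 2) { |j| row.swap(j, j + 1) };
	rows.add(row);
};
rows.do(_.postln); // prints the weaving pattern, returning to rounds
)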

Needless to say, AI represents a huge shift, but even before ChatGPT and Midjourney burst onto the scene, things were moving fast. The volatile world of NFTs and Cryptocurrencies attempted to change how art was funded and distributed. The Metaverse offered an alternative reality for artists to share their work. Yet humans are hard to predict, and hype doesn’t necessarily translate into lasting change. Many people’s priorities and beliefs changed during the pandemic, and technology should align with our better natures if it is to help improve the world. 

I look forward to exploring these technologies and topics in much greater detail and interviewing some of the world’s leading experts to find out what they think. 

Until then, if you would like to read other articles I have written, you can take a look at my Substack page by clicking here, and you can also listen to some of my music on my Bandcamp page by clicking here. You can also book a session with me through Music Hackspace here.
