On the future of music: an interview with composer Robert Thomas (Part 1)

Dom Aversano

Five years ago I attended an event at South London’s experimental venue the Iklectik Art Lab. The night was organised by Hackoustic, a group of music hackers who use acoustic objects in their work and organise events for artists to make presentations and share ideas.

The headline speaker that night was the composer and audio programmer Robert Thomas. Despite his having worked with the likes of Hans Zimmer, Massive Attack, and the Los Angeles Philharmonic Orchestra, this was my first encounter with his work. I found the presentation refreshing and original, as he expounded a unique take on a potential future of non-linear, non-deterministic, and more responsive and dynamic music.

I took no notes during the presentation, and when I later searched for a clear outline of Robert’s thinking I couldn’t find one. So I was delighted by the opportunity to interview him for Music Hackspace.

In this interview, we discuss the idea that digital music, rather than being represented as ‘frozen’ recordings, could be expressed better through more ‘liquid’ and dynamic algorithms. What follows is a lightly edited transcript of the first part of our conversation.

Q. You have an interesting general philosophy about musical history, could you describe it?

We are used to thinking about music in a particular way, as a fixed form, but that does not need to be the case. By a fixed form I mean having a definitive version, like an official recording of a song. Music has only been a fixed medium for a very short period.

Thousands of years ago when prehistoric humans sang to each other music was this completely ephemeral, fluid, liquid-like thing that flowed between people. One person would have an idea, they would sing it to another, it would change slightly, and as it flowed around society it evolved.

Of course, all improvised music still does that to an extent, but over the years we became more adept at capturing music. First, there were markings of some kind, which eventually turned into notation, and over time we formalised things and built lots of standards around our music. Only very recently – in a blip of the last 100 or 150 years – have we thought about capturing audio from the environment by recording it, and about treating those recordings as definitive.

What is interesting now is that we can go beyond recordings and are able to do loads of really exciting and different things with music. What is frustrating is that many of the ways we create, distribute, and experience music are not taking advantage of this. If you look at the ways we capture musical ideas, such as recordings, how we work with them has not changed much since the wax cylinder: something is moving through the air, you capture it in some way, and turn it into a physical or conceptual object. The physical object might be a wax cylinder, a vinyl record, or a CD, and the conceptual object a digital file – an MP3, a WAV, etc. All of those things are effectively the same: an unchangeable piece of audio that has a start, a middle, and an end.

Certain things have changed over the years, but even though we have gone into the digital realm, huge conceptual changes have not really come about. A lot of my work is about saying, well, once you go into the realm of software, actually this huge expansion of possibilities happens. You can think of the piece of music as software, which opens up a whole new world of opportunities – many of the projects I've been involved in try to take advantage of this.

It can be helpful to think from a perspective that says, ‘Well, as software things could change for each person’: it could be different at different times of the day, change based on your surroundings, the weather, the phase of the moon, how much you are moving, if it’s noisy where you are listening, change based on your driving, what country you are in, your heart rate or brain waves. I have explored all these ideas in my projects. In some ways it’s like how games use music, but in real life. By looking at how we use software we can think wider and consider, well, music could do those things too - could it be a virus for instance? It is quite an interesting thought exercise.
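As a toy illustration of the kind of context-to-music mapping Robert describes – this is my own sketch for this article, not any of his actual implementations, and every threshold and parameter name is invented – a piece of music-as-software might derive its parameters from listener context like this:

```python
# Toy sketch: mapping hypothetical listener context to musical parameters.
# All inputs, thresholds, and parameter names are invented for illustration.

def adapt_music(hour, heart_rate_bpm, ambient_noise_db):
    """Return musical parameters derived from the listener's context."""
    params = {}
    # Slower tempo late at night and early in the morning
    params["tempo_bpm"] = 70 if hour < 7 or hour >= 22 else 110
    # Nudge tempo toward the listener's heart rate when they are active
    if heart_rate_bpm > 100:
        params["tempo_bpm"] = min(140, (params["tempo_bpm"] + heart_rate_bpm) // 2)
    # Thicker texture in noisy surroundings so the music stays audible
    params["layers"] = 5 if ambient_noise_db > 70 else 3
    return params

# A calm, late-night context yields a sparse, slow result
print(adapt_music(hour=23, heart_rate_bpm=60, ambient_noise_db=40))
```

The point is not the specific rules but the shape: the piece becomes a function of its environment rather than a fixed sequence of samples.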

There are not many people exploring this; it’s a relatively small niche. Of course, some are looking at generative music more widely. Brian Eno also uses this fluid analogy, and there have been many different explorations of algorithmic music of various types. There has been a little bit of a recent surge around these ideas with Web3 and NFTs, although I think there are a lot of ethical issues with that technology.

Q. A few years ago people thought music was going to move towards becoming apps on phones. I know that you've worked on that with projects like RjDj and the app you made for the film Inception, and people like Björk have too. However, we are not at a point where there is mass adoption of these technologies, and therefore, from your perspective, could you say that Spotify is like the wax cylinder, but with a different distribution method?

Spotify, or digital streaming more generally, does things that are different, but above the level of the music itself. It never goes down into the song or the track; it stays at the level of the playlist, the recommendation, or the feed. That is the level of personalisation.

The wider media platforms which host film, TV, podcasts, and audiobooks have also changed, mainly through adopting newsfeed and personalisation algorithms. I think these create enormous problems, which are not entirely disassociated from the much bigger problems in social media and the internet in general, although that is a much bigger subject. Overall, I think that is where change has happened, but I don't think it is positive.

These changes killed the album. TikTok, for instance, is going further and saying, it doesn't even matter what is in the rest of the song, as long as there's this little fragment that will be catchy as a meme in a 15-second video. One of the most common barriers when trying to innovate in the music industry is the challenge of dealing with inertia, and a lack of willingness for genuine fundamental change.

Q. Let’s discuss fundamental change. Let's say we looked into the actual composition. For instance, how the composition is created, so not as recorded from this point to this point, but as something generative. Could you envision it being distributed on a mass scale, where everyday people felt that it was relevant? Do you see that coming?

I wouldn't say I see it coming, but that doesn't mean that it is not possible. The reason is that people in the industry don't necessarily want it to happen, or understand how it could happen. Also, I think listeners generally don’t know about generative music, but when they do they engage a lot with it.

Technologically, there's no reason why fundamental change should not happen now, because an app can be anything. The Spotify app just connects to servers, pulls down chunks of an audio file, puts them back together again, and plays them to you. A more innovative type of app, like Fantom, also pulls down chunks of audio, but it puts them together with algorithms and makes the music react and adapt to aspects of your life. It's just a different technology. There are many projects that are exploring these things with varying degrees of success.
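To make the contrast concrete – this is my own illustrative sketch, not how Spotify or Fantom are actually implemented, and the chunk names and ‘context’ signals are invented – a linear player concatenates chunks in a fixed order, while an adaptive one chooses each chunk from the listener's context:

```python
import random

# Invented chunk library: each context signal maps to a pool of audio chunks
CHUNKS = {"calm": ["pad_a", "pad_b"], "energetic": ["beat_a", "beat_b"]}

def linear_playlist(chunks):
    """A conventional stream: a fixed sequence, identical on every listen."""
    return list(chunks)

def adaptive_playlist(context_signals, rng):
    """An adaptive stream: each chunk is chosen from the current context."""
    return [rng.choice(CHUNKS[signal]) for signal in context_signals]

print(linear_playlist(["pad_a", "beat_a", "pad_b"]))
print(adaptive_playlist(["calm", "energetic", "calm"], random.Random(42)))
```

Both apps move the same kind of audio data over the network; the difference is entirely in the assembly logic at the listener's end.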

Q. Could you provide some examples, your work included, that you find innovative?

Yeah, so I would say the more innovative projects that have happened outside of conventional streaming are works like Björk’s Biophilia app-album, and soundtrack-to-your-life type projects like Inception The App, the RjDj apps, and the collaborations I've done with Massive Attack for the various Fantom apps. Radiohead did some interesting projects with Universal Everything. Lifescore also makes adaptive soundtracks for your life.

Then you have what is outside of strictly entertainment, like functional music and health applications. I've done projects there with Biobeats and Mindsong, which react to EEG signals from meditation. I'm also working with a company called Wavepaths, which makes adaptive and generative music for mental health therapy with psychedelics. Then you have the many different facets of wellness and functional music, including companies like Endel, who create functional, generative, personalised music.

Q. What are the differences in making installations versus apps?

The biggest difference is that when you make an installation you control the experience completely. For instance, during the Forest for Change project I did at Somerset House recently with Es Devlin and Brian Eno, I had a lot of precise control. As a creator you are there: you hear what the person will hear, you know what the speakers are like, you know the technology, and you do not have to build it for distribution. When you see people using it you know if they're getting confused or whether they understand the interaction. When you do an installation you have control, similar to a live show.

When you make a distributed experience, especially apps and games, you may not know exactly what the player or person is doing, if they are confused, what state they are in – all of these different things. That's the biggest difference. So it is much more ambitious to make distributed things, but I find it more exciting. When we were working on Inception The App, we got these amazing emails from people telling us about how it created the perfect soundtrack for their life. For instance, when they were skiing down mountains with the music dynamically changing.

For me, those are the really amazing projects. I remember when I used to listen to an old-school iPod shuffle, and it would just happen to play the perfect music as I started to go for a run, which seemed to be the soundtrack for that moment. Lots of the projects I have been involved with are about trying to make that happen, but by intent, and controlling it artistically.

When you hear from someone for whom that happened that's amazing, as they have not gone to an installation where everything is controlled and they have expectations, but instead, it happened in their everyday life. It’s a much more personal interaction in people's lives. Those are the most exciting things, but they are harder and way more ambitious.

Q. Yet, you create new ways for people to experience music.

It means working in such a way that you go off the rails of a ‘normal’ musical experience. Instead of staying on previously laid tracks, where you can only go where someone has gone before, I throw down the tracks in front of me as I go. It can get a bit intense!

David Bowie said that you need to be a bit out of your depth to be doing something good, or at least interesting. I think the balance is to never be so ambitious that you can’t maintain musicality. Bowie was completely right that you need to go beyond where you're comfortable: you have to be slightly uncomfortable in the creative process to do something good. I think he did that at a number of points in his life in various ways, and not just with technology. He completely anticipated many issues around the devaluation of music.

I think it's a privilege to be working in this area because you're seeing the edge of where we are. There will always be challenges and constraints in what can and can't be done, but constraints are what make good creativity.

A lot of the problem with the music-making process at the moment is that we have too many technological choices. You can make a track in a normal DAW with loads of plugins that you could use in many different ways, and then freeze them and turn them into audio and use more plugins on that, and then mix them. The possibilities become overwhelming.

So with all these technological options people often say, ‘OK, well I'm going to limit my creative possibilities artificially’ – bring them down artificially. What I do – which I think is different – is go to a place artistically and conceptually where it is already very hard to achieve my ideas, so I don't have the freedom to limit myself. I move my creative, conceptual aspirations into a space which is constrained creatively because it's innovative, which I think is a much healthier thing to do than imposing arbitrary, artificial constraints. The hard thing is that it means you need to become technically aware in order to do it.

The second part of this interview can be found here.