An interview with Interaction Designer Arthur Carabott, Part I

Dom Aversano

If ever evidence was needed of the power of DNA, it was demonstrated to me just over a decade ago, when I walked into a room in a dingy industrial estate in Hoxton, East London, to attend one of the early Music Hackspace meet-ups, and much to my surprise saw my cousin, Arthur Carabott, twirling on an office chair, listening to a presentation on ultrasonic sound.

The population in London at that point was roughly 8 million, and there were fewer than 20 people in that room — the odds of us both being there were minuscule. Although we both grew up in an extended family that was very musical, we came to the Music Hackspace by entirely distinct routes, at a time when it was little more than a charming and eccentric fringe group.

Having known Arthur since childhood, it’s not surprising to me that he ended up working in a field that combines artistic and technical skills. He always approached technical problems with a rare tenacity and single-mindedness. Several times I saw Arthur receive a Meccano set for a birthday or Christmas, only to sit quietly for hours on end working on it until it was finally built.

The Music Hackspace played a significant part in both of our formative years, so I was curious to hear about his experience of it. What surprised me was how much I did not know about Arthur’s journey through music.

What follows is a transcript of that conversation — Part II will follow shortly.

What drew you to music?

There was always music playing in the house. In my family, there was the expectation that you’d play an instrument. I did violin lessons at 7, which I didn’t enjoy, and then piano aged 10. I remember being 10 or 11 and there was a group of us that liked Queen. They are an interesting band because they appeal to kids. They’re theatrical, and some of it is quite clown-like. Then I remember songs like Cotton Eye Joe and singers like Natalie Imbruglia, you know, quite corny music — I’ve definitely got a corny streak. But there was this significant moment one summer when I was probably 11 or so, when I discovered this CD with a symbol on it that was all reflective. It was OK Computer, by Radiohead. That summer it made a big musical impact on me. It’s an album I still listen to.

How does music affect you?

I think music, food, and comedy are quite similar in that when it’s good, there’s no denying it. Of course, with all three, you can probably be a bit pretentious and be like, ‘Oh no, I am enjoying this’ when you’re not. But those are three of my favourite things in the world.

I heard a comedian talking about bombing recently. They said if a musician has an off night, and they get on stage and don’t play well, it’s still music. Whereas if a comedian goes up and bombs, and no one laughs, it’s not comedy.

You became a very accomplished guitarist. Why did you not choose that as a career?

I went to guitar school and there was a point in my teens when my goal was to become the best guitarist in the world. I remember something Squarepusher had on his website once, where he wrote about being a teenager and giving up on the idea of trying to be like his classmate Guthrie Govan, who is now one of the world’s best guitarists. I resonated with that as there’s a point where you’re like, okay, I’m never gonna do that.

Part of my problem was being hypermobile and therefore prone to injuries, which stopped me from practising as much as I wanted to. Yet there was still this idea that when I went to Sussex University to study music informatics with Nick Collins, I was going to learn SuperCollider and discover the secrets that Squarepusher and Aphex Twin used. Someone told me they don’t even cut up their drum loops, they’ve got algorithms to do it!

I was actually signed up to do the standard music degree, but my friend Alex Churchill told me to change it to music informatics because it would change my life. That was a real turning point.

In what way?

What clicked was knowing I enjoyed programming and I wasn’t just going to use music software — I was going to make it.

The course was rooted in academia and experimental art practice rather than commercial things like building plugins. We were looking at interactive music systems and generative music from 2006 to 2009, way before this wave of hype had blown up. We were doing some straight-up computer science stuff, and early bits of neural networks and genetic algorithms. Back then we were told that no one had really found practical uses for this yet.

We studied people like David Cope, an early pioneer who spent decades working on AI music. All of this helped me think outside conventional approaches to traditional music tech and the paradigms of plug-ins, DAWs, and so on.

A Today at Apple event where over 100 iPhones and iPads were synchronised for a live performance with singer and producer Chagall

What did you do with this training and how did it help you?

I had no idea what I was going to do afterwards. I was offered a position in the first year of the Queen Mary Media and Arts Technology (MAT) PhD programme, but I was a bit burnt out on academia and wanted to do the straight music thing.

I volunteered at The Vortex in London as a sound engineer. I had done paid work at university in Brighton, but mostly for teenage punk bands. The Vortex being unpaid worked better because it meant that I only did it for gigs I wanted to see. I was already into Acoustic Ladyland, but there I discovered bands like Polar Bear and got to know people like Seb Rochford and Tom Skinner, musicians whose work I admired and now got to interact with.

How did you come across Music Hackspace and how did it influence you?

I’d heard there was this thing on Hackney Road. I remember going on a bit of a tour because they would do open evenings and I went with a group of people. It felt like the underground. The best music tech minds in London. A bit of a fringe thing, slightly anarchist and non-mainstream. Music Hackspace was for me mostly about connecting to other people and a community.

What led you to more technical, installation-type work?

I remember seeing Thor Magnusson, who had been doing his PhD at Sussex while I was doing my undergrad, and who taught one of our classes. He was talking about doing an installation and I remember thinking, I don’t really know what an installation is. How do I get one?

Then came the opportunity to work on the 2012 Olympics, which came through my sister Juno and her boyfriend at the time, Tim Chave, who introduced me to the architects Asif Khan and Pernilla Ohrstedt. I met them and showed them a bunch of fun things that I’d made, like an app which took Lana Del Rey’s single Video Games and let you remix it in real time. You could type any of the words contained in the song, hit enter, and she would sing them back, remixed, in time with the beat.
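For readers curious about the mechanics, the sketch below shows one minimal way such a word-level remixer could work. It is not Arthur’s actual implementation: the vocal is assumed to be pre-cut into one audio slice per lyric word, and any typed sequence of those words is triggered on a simple beat grid. The tempo, file paths, and the simpleaudio playback library are all assumptions made purely for illustration.

# Hypothetical sketch of a word-level song remixer (not Arthur's code):
# the vocal is pre-sliced into one WAV file per lyric word, and any typed
# sequence of those words is played back one word per beat.
import time
import simpleaudio as sa  # assumed playback library; any sample player would do

BPM = 120                  # hypothetical tempo of the track
BEAT = 60.0 / BPM          # seconds per beat

# Placeholder slice files; in practice there would be one per word in the song.
word_samples = {
    word: sa.WaveObject.from_wave_file(f"slices/{word}.wav")
    for word in ["swinging", "in", "the", "backyard"]
}

def remix(line):
    """Play the typed words back in order, quantised crudely to the beat."""
    for word in line.lower().split():
        sample = word_samples.get(word)
        if sample is not None:   # skip any word that isn't in the song
            sample.play()
        time.sleep(BEAT)

remix(input("Type words from the song: "))

The real app presumably handled beat alignment and time-stretching far more carefully, but the core idea is the same: a lookup from words to audio slices plus a clock.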

They asked me various technical questions, but after the meeting I didn’t hear anything for a while. Then I got a call in December 2011 from Asif. He asked, ‘Can you go to Switzerland next week?’ And I’m like, ‘Wait, am I doing this project? Have I got the job?’ He responded, ‘Look, can you go to Switzerland next week?’ So I said, ‘Okay, yeah’.

So then it became official. It was six days a week for six months to get it done in time for the Olympics.

 

The Coca-Cola Beatbox Pavilion from the 2012 London Olympic Games

Part II of this interview will follow shortly. 

You can find out more about Arthur Carabott on his website, Instagram, and X.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work at Liner Notes.

How to design a music installation – an interview with Tim Murray-Browne (part 2)

Dom Aversano


In the first part of this interview, artist Tim Murray-Browne discussed his approach to creating interactive installations, and the importance of allowing space for the agency of the audience with a philosophy that blurs the traditional artist/audience dichotomy in favour of a larger-scale collaboration.

In the second part of this interview, we discuss how artificial intelligence and generative processes could influence music in the near future and the potential social and political implications of this, before returning to the practical matters of advice on how to build an interactive music installation and get it seen and heard.

I recently interviewed the composer and programmer Robert Thomas, who envisions a future in which music behaves in a more responsive and indeterminate manner, more akin to software than to the wax cylinder recordings that helped define 20th-century music. In this scenario, fixed recordings could become obsolete. Is this how you see the future?

I think the concept of the recorded song is here to stay. In the same way, I think the idea of the gig and concert is here to stay. There are other things being added on top and it may become less and less relevant as time goes on. Just in the way that buying singles has become less relevant even though we still listen to songs. 

I think the most important thing is having a sense of personal connection and ownership. This comes back to agency, where I feel I’m expressing myself through the relationship with this music or belonging to a particular group or community. What I think a lot of musicians and people who make interactive music get wrong is that, since they take such joy and pleasure in being creatively expressive, they think they can somehow give that joy to someone else without figuring out how to give them some kind of personal ownership of what they’re doing.

As musicians it’s tempting to think we can make a track and then create an interactive version, and that someone’s going to listen to that interactive version of my track, remix it live or change aspects of it, and have this personalised experience that is going to be even better because they had creative agency over it.

I think there’s a problem with that because you’re asking people to do some of the creative work but without the sense of authorship or ownership. I may be wrong about this because in video games you definitely come as an audience and explore the game and develop skill and a personal style that gives you a really personal connection to it. But games and music are very different things. Games have measurable goals to progress through, often with metrics. Music isn’t like that. Music is like an expanse of openness. There isn’t an aim to make the perfect music. You can’t say this music is 85% good.

How do you see the future?

I agree with Robert in some sense, but where I think we’re going to see the song decline in relevance has less to do with artists creating interactive versions of their work and more to do with people using AI to completely appropriate and remix existing musical works. When those tools become very quick and easy to use I think we will see the song transform into a meme space instead. I don’t see any way to avoid that. I think there will be resistance, but it is inevitable.

In the AI space, there are some artists who see this coming and are trying to make the most of it. So instead of trying to stop people from using AI to rip off their work, they’re trying to get a cut of it. Like saying, okay, you can use my voice but you’ll give me royalties. I’ve done all of this work to make this voice, it’s become a kind of recognisable cultural asset, and I know I’m going to lose control of it, but I want some royalties and to own the quality of this vocal timbre.

Is there a risk in deskilling, or even populism, in a future where anyone can make profound changes to another person’s creative work? The original intention of copyright law was to protect artists’ work from falling out of their hands financially and aesthetically. The supposed democratisation of journalism has largely defunded and deskilled an important profession and created an economy for much less skilled influencers and provocateurs. Might not the same happen to music?

The question of democratisation is problematic. For instance, democracy is good, but there are consequences when you democratise the means of production, particularly in the arts where a big part of what we’re doing is essentially showing off. Once the means of production are democratised, then those who have invested in the skills previously needed lose that capacity to define themselves through them. Instead, everyone can do everything and for this short while, because we’re used to these things being scarce, it suddenly seems like we’ve all become richer. Then pretty soon, we find we’re all in a very crowded room trying to shout louder and louder. It’s like we were in a gig and we took away the stage and now we’re all expecting to have the same status that the musician on the stage had.

I can see your concerns with that, but music is moving from being a produced thing to being something very quickly made with AI tools by people who aren’t professionals. If you’re a professional musician there will still be winners and losers, and those winners and losers will in part be those who are good at using the tools. There will be those with some kind of artistic vision. And there’ll be those who are good at social media and networking, and good at understanding how to make things go viral.

It’s not that different from how music is now. It takes more than musical talent to become a successful artist: as a musician, you’ve got to build relationships with your fans and do all of these other things which maybe you could get away with not doing so much in the past.

Let’s return to the original theme of what makes for a good installation. What advice would you give to someone in the same position now that you were in just over a decade ago when starting Cave of Sounds?

In 2012, when we started building Cave of Sounds, Music Hackspace was a place for people to build things. This was fundamental for me. People there were making software and hardware and there was this sort of default attitude of ‘we built it, now we’re going to show somebody’. We’re going to get up at the front of the room, talk to you about this thing, and maybe play some music on it.

I find the term installation problematic because it comes from this world of the art gallery and of having a space and doing something inside the space where it can’t necessarily just be reduced to a sculpture or something. Whereas, for me, it was just a useful word to describe a musical device where the audience is going to be actively interacting with it, rather than sitting down and watching a professional interact with it. So that shift from a musician on a stage to an audience participating in the work.

I don’t think it necessarily has to begin with a space. It needs a curiosity of interaction. Maybe I’m just projecting what I feel, but what I observed at Music Hackspace is people taking so much enjoyment in building things, and less time spent performing them. Some people really want to get up and perform as musicians. Some people really want to build stuff for the pleasure of building. 

How do you get an installation out into the world?

How to get exhibited is still an ongoing mystery to me, but I will say that having past work that has succeeded means people are more likely to accept new work based on a diagram and description. Generally, having a video of a piece makes it much more likely for people to want to show it. The main place things are shown is in festivals, more than galleries or museums. Getting work into a festival is a question of practical logistics: How many people are going to experience it and how much space and resources does it demand? And then festivals tend to conform to bigger trends – sometimes a bit too much I think as then they end up all showing quite similar works. When we made Cave of Sounds, DIY hacker culture and its connection to grassroots activism was in the air. Today, the focus is the environment, decolonisation, and social justice. Tomorrow there will be other things.

Then, there’s a lot of graft, and a lot of that graft is much easier when you’re younger than when you’re older. I don’t think I could go through the Cave of Sounds process today like I did back then. I’m very happy I did it back then.

What specifically about the Cave of Sounds do you think made it work?

The first shocking success of Cave of Sounds is that when we built it we had a team of eight, and I had a very small fee because I was doing this artist residency, but everyone else on the project was an unpaid volunteer or collaborating artist. And we worked together for eight months to bring it together.

A lot of people came to the first meeting, but from the second meeting onwards, the people who turned up were the eight people who made the work and stuck through to the end. I think there’s something remarkable about that. Something about the core idea of the work really resonated with those people, and I think we got really lucky with them. And there was a community that they were embedded in as well. But the fact that everyone made it to the end just shows that there was something kind of magical in the nature of the work and the context of that combination of people.

So a work like Cave of Sounds was possible because we had a lot of people who were very passionate, and we had a diversity of skills, but we also had a bit of an institutional name behind us. We had a budget as well, but it was very small, and most of it did not pay for the work. The budget covered some of the materials, really; a significant amount of labour went into that piece, and it came from people working out of passion.

Do you have a dream project or a desire for something you would like to do in the future?

For the past few years I’ve been exploring how to use AI to interpret the moving body so that I can create physical interaction without introducing any assumptions about what kind of movement the body can make. So if I’m making an instrument by mapping movement sensors to sound, I’m not thinking ‘OK this kind of hand movement should make that kind of sound’ but instead training an AI on many hours of sensor data where I’m just moving in my own natural way and asking it ‘What are the most significant movements here?’
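Tim doesn’t name a specific model here, but one minimal way to ask ‘what are the most significant movements?’ of raw sensor data is unsupervised dimensionality reduction. The sketch below uses PCA from scikit-learn purely as an illustration of the idea, not as a description of Tim’s system; the array shapes, channel count, and sound-parameter names are invented.

# Illustrative sketch only (not Tim's actual system): learn the directions of
# greatest variation in hours of raw movement-sensor frames, then map a live
# frame's coordinates along those directions to sound-control parameters.
import numpy as np
from sklearn.decomposition import PCA

# Placeholder recording: 100,000 frames of 36 sensor channels.
recorded_motion = np.random.randn(100_000, 36)

# The "most significant movements" fall out as the principal components.
model = PCA(n_components=3)
model.fit(recorded_motion)

def frame_to_sound_params(sensor_frame):
    """Project one live frame onto the learned movement axes and use the
    resulting coordinates as (hypothetical) sound parameters."""
    coords = model.transform(sensor_frame.reshape(1, -1))[0]
    return {"pitch": float(coords[0]),
            "filter_cutoff": float(coords[1]),
            "reverb_mix": float(coords[2])}

print(frame_to_sound_params(recorded_motion[0]))

The appeal of this kind of approach, as Tim describes it, is that nothing in the mapping encodes a designer’s prior idea of which gesture should make which sound; the axes come out of the mover’s own data.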

I’m slightly obsessed with this process. It’s giving me a completely different feeling when I interact with the machine, like my actions are no longer mediated by the hand of an interaction designer. Of course, I’m still there as a designer, but it’s like I’m designing an open space for someone rather than boxes of tools. I think there’s something profoundly political about this shift, and I’m drawn to that because it reveals a way of applying AI to liberate people to be individually themselves, rather than using it to make existing systems even more efficient at being controlling and manipulative which seems to be the main AI risk I think we’re facing right now. I could go on more as well – moving from the symbolic to the embodied, from the rational to the intuitive. Computers before AI were like humans with only the left side of the brain. I think they make humans lose touch with their embodied nature. AI adds in the right side, and some of the most exciting shifts I think will be in how we interact with computers as much as what those computers can do autonomously.

So far, I’ve been exploring this with dancers, having them control sounds in real-time but still being able to dance as they dance rather than dancing like they’re trapped inside a land of invisible switches and trigger zones. And in my latest interactive installation Self Absorbed I’ve been using it to explore the latent space of other AI models, so people can morph through different images by moving their bodies. But the dream project is to expand this into a larger multi-person space, a combined virtual and physical realm that lets people influence their surroundings in all kinds of inexplicable ways by using the body. I want to make this and see how far people can feel a sense of connection with each other through full-body interfaces that are too complicated to understand rationally but are so rich and sensitive to the body that you can still find ways to express yourself.

Cave of Sounds was created by Tim Murray-Browne, Dom Aversano, Sus Garcia, Wallace Hobbes, Daniel Lopez, Tadeo Sendon, Panagiotis Tigas, and Kacper Ziemianin with support from Music Hackspace, Sound and Music, the Esmée Fairbairn Foundation, Arts Council England and the British Council.

To find out more about Tim Murray-Browne you can visit his website or follow him on Substack, Instagram, Mastodon, or X.

Programming live video with Federico Foderaro (live-stream)

Federico Foderaro is an audiovisual composer, teacher, and designer of interactive multimedia installations, and the author of the YouTube channel Amazing Max Stuff.

In this live-stream, Federico presents some of his live visual projects using particle systems. Join the live-stream to learn how to create stunning animated visuals that run at high performance. This free live-stream is followed by a series of four workshops, starting 20th October and led by Federico, covering video programming in depth.

His main interest is the creation of audiovisual works and fragments, where technical research is deeply linked with artistic output.
The main tool in his production is Max/MSP from Cycling '74, which allows real-time programming and execution of both audio and video, and represents a perfect mix of problem-solving and artistic expression.

Besides his artistic work, Federico teaches Max/MSP, both online and in workshops at different venues. The creation of commercial audiovisual interactive installations is also a big part of his working life, and over the years has led to satisfying collaborations and professional achievements.

Mash Machine live-stream with the founders

Discover a new instrument in this live-stream and learn its design story.

Based in Tallinn, Estonia, the Mash Machine team has put together a kit version of the Reactable. While it looks similar to the Barcelona instrument, the software and sound engine are different. Mash Machine is designed as a social instrument, playing and mashing loops as physical objects are placed on the board.

Meet the founders in this live-stream and learn more about the technology and design process.

Participate and build your own Mash Machine loops!

Create loops and send them to Mash Machine at hello@mashmachines.com, and they will be used during the presentation! Detailed instructions on producing content for Mash Machine – here

https://www.facebook.com/TheMashMachine

https://www.youtube.com/user/MashMachines

Andrew Leggo: Designing Instruments

You can learn the basics of building a musical instrument at a summer camp. Just Google “straw flute” and you’ll build a flute in 5 minutes. But designing an instrument that others want to play, now that is hard. Most musicians are not looking for a new instrument, and it’s a difficult task to convince them otherwise. After spending 10,000 hours practising, professional musicians are not necessarily looking to start all over again.

Andrew Leggo started designing instruments shortly after graduating in the early 1980s. He was one of the designers behind the Roland AX-1 Keytar and has also designed studio equipment, mixing consoles, digital pianos and percussion controllers.

In this talk, Andrew shares the lessons of a lifetime as a creative designer, and the many parameters one has to consider when designing an instrument.

Join Andrew live on 7th September, and ask questions in the chat!

 
