An interview with Interaction Designer Arthur Carabott Part II

Dom Aversano

The Coca-Cola Beatbox Pavilion from the 2012 London Olympic Games

This is Part II of an interview with interaction designer Arthur Carabott. In Part I Arthur discussed how after studying music technology at Sussex University he found a job working on the Coca-Cola Beatbox Pavilion in the 2012 Olympic Games. What follows is his description of how the work evolved. 

Did you conceive that project in isolation or collaboration?

The idea had already been sold and the architects had won the competition. What was known was there would be something musical because Mark Ronson was going to be making a song. So the idea was to build a giant instrument from a building, which everyone could play by waving their hands over giant pads. They wanted to use sports sounds and turn them into music while having a heartbeat play throughout the building, tying everything together.

Then it came down to me playing with ideas, trying things out, and them liking things or not liking things. We knew that we had five or six athletes and a certain number of interactive points on the building.

So it was like, okay, let’s break it down into sections. We can start with running or with archery or table tennis. That was the broad structure, which helped a lot because we could say we have 40 interactive points, and therefore roughly eight interactions per sport.

Did you feel you were capable of doing this? How would you advise someone in a similar position?

Yeah, I was 25 when this started. While it’s difficult to give career advice, one thing I hold onto is saying yes to things that you’ve never done before but you kind of feel that you could probably do. If someone said we want you to work on a spaceship I’d say that’s probably a bad idea, but this felt like a much bigger version of things that I’d already done.

There were new things I had to learn, especially working at that scale. For instance, making the system run fast enough and building a backup system. I’d never done a backup system before; I had just used my laptop in front of my class or for an installation. So I was definitely learning things.

If I have any natural talent it’s for being pretty stubborn about solving problems and sticking at it like a dog with a bone. Knowing that I can, if I work hard at this thing, pull it off. That was the feeling.


Arthur Carabott rehearsing at the Apple Store with Chagall van den Berg

How did you get in contact with Apple?

I was a resident in the Music Hackspace then and rented a desk in Somerset House. Apple approached Music Hackspace about doing a talk for their Today at Apple series.

I already had a concept for a guerrilla art piece, where the idea was to make a piece of software where I could play music in sync across lots of physical devices. The idea was to go around the Apple store and get a bunch of people to load up this page on as many devices as we could, and then play a big choir piece by treating each device as a voice.

Kind of like a flash mob?

Yeah, sort of. It was inspired by an artist who used to be based in New York called Kyle McDonald, who made a piece called People Staring at Computers. His program would detect faces and then take a photo of them and email it to him. He installed this in the New York Apple stores and got them to send him photos. He ended up being investigated by the Secret Service, who came to his house and took away his computers.

However, for my thing, I wanted to bring a musician into it. Chagall was a very natural choice for the Hackspace. For the music I made an app where people could play with the timbre parameters of a synth, but with a quite playful interface which had faces on it.

How did you end up working with the composer Anna Meredith? You built an app with her, right?

Yes, an augmented reality app. It came about through a conversation with my friend, Marek Bereza, who founded Elf Audio and makes the Koala sampler app. We met up for a coffee and talked about the new AR stuff for iPhones. The SDK had just come to the iPhones and it had this spatial audio component. We were just knocking around ideas of what could be done with it.

I got excited about the fact that it could give people a cheap surround sound system by placing virtual objects in their space. Then you have — for free, or for the cost of an app — a surround sound system.

There was this weekly tea and biscuits event at Somerset House where I saw Anna Meredith and said, ‘Hey, you know, I like your music and I’ve got this idea. Could I show it to you and see what you think?’ So I came to her studio and showed her the prototype and we talked it through. It was good timing because she had her album FIBS in the works. She sent me a few songs and we talked back and forth about what might work for this medium. We settled on the piece Moon Moons, which was going to be one of the singles.

It all came together quite quickly. The objects in it are actual ceramic sculptures that her sister Eleanor made for the album. So I had to teach myself how to do photogrammetry and 3D scan them, before that technology was good on phones.

Augmented reality app built for Anna Meredith’s album FIBS

You moved to LA. What has that been like?

It was the first time I moved to another country without a leaving date. London’s a great city. I could have stayed, and that would have been the default setting, but I felt like I took myself off the default setting.

So, I took a trip to LA to find work and I was trying to pull every connection I could: finding people I could present work to, knocking on doors, trying to find people to meet. Then I found this company Output and I was like, ‘Oh, they seem like a really good match’. They were in LA and had two job openings: one software developer job and one product designer job.

I wrote an email and an application to both of these and a cover letter which said: Look, I’m not this job and I’m not that job. I’m somewhere in the middle. Do you want me to be doing your pixel-perfect UI? That’s not me. Do you want me to be writing optimized audio code? That’s not me either. However, here’s a bunch of my work and you can hear all these things that I can do.

I got nothing. Then I asked Jean-Baptiste from Music Hackspace if he knew any companies. He wrote an email to Output introducing me and I got a meeting.

I showed my work. The interviewer wrote my name in a notebook and underlined it. When I finished the presentation I looked at his notebook and he hadn’t written anything else. I was like, ‘Okay, that’s either a very good sign or a very bad sign’. But I got the job.

How do you define what you do?

One of the themes of my career, which has been a double-edged sword, is not being specifically one thing. In the recruitment process, what companies do is say: we have a hole in our ship, and we need someone who can plug it. Very rarely are companies in a place where they think, we could take on someone who’s interesting, and although we don’t have an explicit problem for them to solve right now, we think they could benefit what we’re doing.

The good thing is I find myself doing interesting work without fitting neatly into a box that people can understand. My parents have no idea what I do really.

However, I do have a term I like, but it’s very out of fashion, which is interaction designer. What that means is to play around with interaction, almost like behaviour design.

You can’t do it well without having something to play with and test behaviours with. You can try to simulate it in your head, but generally you’re limited to what you already know. For instance, you can imagine how a button works in your head, but if you imagine controlling a MIDI parameter using magnets, you can’t know what that’s like until you do it.

What are your thoughts on machine learning and AI? How that will affect music technology?

It’s getting good at doing things. I feel like people will still make music and keep making music. I go to a chess club, and chess had a boom in popularity, especially during the pandemic. In terms of beating the best human player, that was solved decades ago, but people still play because they want to play chess, and they still play professionally. So it hasn’t killed humans wanting to play chess, but it’s definitely changed the game.

There is now a generation who have grown up playing against AIs, and it’s changed how they play, which is an interesting dynamic. The interesting thing with music is that it has already been devalued. People barely pay anything for recorded music, but they still go to concerts, and though concert tickets are more expensive than ever, people are willing to pay.

I think the thing that people are mostly interested in with music is the connection, the people, the personal aspect of it. Seeing someone play music, seeing someone very good at an instrument or singing is just amazing. It boosts your spirits. You see this in the world of guitar. A new guitarist comes along and does something and everyone goes, ‘Holy shit, why has no one done that before’?

Then you have artists like Squarepusher and Aphex Twin, who wrote their own patches to cut up their drum breaks. But they’re still making their own aesthetic choices about what they use. I’m not in the camp that says if it’s not 100% played by a human on an instrument, then it’s not real music.

The problem with the word creativity is it has the word create in it. So I think a lot of the focus goes on the creation of materials, whereas a lot of creativity is about listening and the framing of what’s good. It’s not just about creating artefacts. The editorial part is an important part of creativity. Part of what someone like Miles Davis did is to hear the future.

You can find out more about Arthur Carabott on his website, Instagram, and X.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work on his website, Liner Notes, X, and Instagram.

An interview with Interaction Designer Arthur Carabott Part I

Dom Aversano

If ever evidence was needed of the power of DNA it was demonstrated to me just over a decade ago, when I walked into a room in a dingy industrial estate in Hoxton, East London, to attend one of the early Music Hackspace meet-ups, and much to my surprise saw my cousin, Arthur Carabott, twirling on an office chair, listening to a presentation on ultrasonic sound.

The population in London at that point was roughly 8 million, and there were fewer than 20 people in that room — the odds of us both being there were minuscule. Although we both grew up in an extended family that was very musical, we came to the Music Hackspace by entirely distinct routes, at a time when it was little more than a charming and eccentric fringe group.

Having known Arthur since childhood, it’s not surprising to me that he ended up working in a field that combines artistic and technical skills. He always approached technical problems with a rare tenacity and single-mindedness. Several times I saw Arthur receive a Meccano toy for a birthday or Christmas, only to sit quietly for hours on end working on it until it was finally built.

The Music Hackspace played a significant part in both our formations, so I was curious to know about his experience of this. What surprised me was how much I did not know about Arthur’s journey through music.

What follows is a transcript of that conversation — Part II will follow shortly.

What drew you to music?

There was always music playing in the house. In my family, there was the expectation that you’d play an instrument. I did violin lessons at 7, which I didn’t enjoy, and then piano aged 10. I remember being 10 or 11 and there was a group of us that liked Queen. They are an interesting band because they appeal to kids. They’re theatrical, and some of it is quite clown-like. Then I remember songs like Cotton Eye Joe and singers like Natalie Imbruglia, you know, quite corny music — I’ve definitely got a corny streak. But there was this significant moment one summer when I was probably 11 or so, when I discovered this CD with a reflective symbol on it. It was OK Computer, by Radiohead. That summer made a big musical impact. It’s an album I still listen to.

How does music affect you?

I think music, food, and comedy are quite similar in that when it’s good, there’s no denying it. Of course, with all three, you can probably be a bit pretentious and be like, ‘Oh no, I am enjoying this’ when you’re not. But those are three of my favourite things in the world.

I heard a comedian talking about bombing recently. They said if a musician has an off night, gets on stage, and doesn’t play well, it’s still music. Whereas if a comedian goes up and bombs, and no one laughs, it’s not comedy.

You became a very accomplished guitarist. Why did you not choose that as a career?

I went to guitar school and there was a point in my teens when my goal was to become the best guitarist in the world. I remember something Squarepusher had on his website once, where he wrote about being a teenager and giving up on the idea of trying to be like his classmate Guthrie Govan, who is now one of the world’s best guitarists. I resonated with that as there’s a point where you’re like, okay, I’m never gonna do that.

Part of my problem was being hypermobile and therefore prone to injuries, which stopped me from practising as much as I wanted to. Yet there was still this idea that when I went to Sussex University and studied music informatics with Nick Collins I was going to go there, learn SuperCollider, and discover the secrets that Squarepusher and Aphex Twin used. Someone told me they don’t even cut up their drum loops, they’ve got algorithms to do it!

I was actually signed up to do the standard music degree but my friend Alex Churchill said to change it to music informatics as it will change your life. That was a real turning point.

In what way?

What clicked was knowing I enjoyed programming and I wasn’t just going to use music software — I was going to make it.

The course was rooted in academia and experimental art practice rather than commercial things like building plugins. We were looking at interactive music systems and generative music from 2006 to 2009, way before this wave of hype had blown up. We were doing some straight-up computer science stuff, and early bits of neural networks and genetic algorithms. Back then we were told that no one had really found practical uses for this yet.

We studied people like David Cope, who was an early pioneer who spent decades working on AI music. All these things helped me think outside conventional ways when it came to traditional music tech stuff, and the paradigms of plug-ins, DAWs, and so on.

Today at Apple event where over 100 iPhones and iPads were synchronised for a live performance with singer and producer Chagall

What did you do with this training and how did it help you?

I had no idea what I was going to do afterwards. I was offered a position in the first year of the Queen Mary Media and Arts Technology (MAT) PhD, but I was a bit burnt out on academia and wanted to do the straight music thing.

I volunteered at The Vortex in London as a sound engineer. I had done paid work at university in Brighton but mostly for teenage punk bands. The Vortex being unpaid worked better because it meant that I only did it for gigs I wanted to see. I was already into Acoustic Ladyland, but there I discovered bands like Polar Bear and got to know people like Seb Rochford and Tom Skinner. I admired their music and also got to interact with and get to know them.

How did you come across Music Hackspace and how did it influence you?

I’d heard there was this thing on Hackney Road. I remember going on a bit of a tour because they would do open evenings and I went with a group of people. It felt like the underground. The best music tech minds in London. A bit of a fringe thing, slightly anarchist and non-mainstream. Music Hackspace was for me mostly about connecting to other people and a community.

What led you to more technical, installation-type work?

I remember seeing Thor Magnusson, who had been doing his PhD at Sussex while I was in my undergrad, and he taught one of our classes. He was talking about doing an installation and I remember thinking, I don’t really know what an installation is. How do I get one?

Then came the opportunity to work on the 2012 Olympics, which came through my sister Juno and her boyfriend at the time, Tim Chave, who introduced me to the architects Asif Khan and Pernilla Ohrstedt. I met them and showed them a bunch of fun things that I’d made, like an app which took Lana Del Rey’s single Video Games and let you remix it in real time. You could type in any word contained in the song, hit enter, and she would sing it, remixed, in time with the beat.

They asked me various technical questions, but after the meeting I didn’t hear anything for a while. Then I got a call in December 2011 from Asif. He asked, ‘Can you go to Switzerland next week?’ And I’m like, ‘Wait, am I doing this project? Have I got the job?’ He responded, ‘Look, can you go to Switzerland next week?’ So I said ‘Okay, yeah’.

So then it became official. It was six days a week for six months to get it done in time for the Olympics.


The Coca-Cola Beatbox Pavilion from the 2012 London Olympic Games

Part II of this interview will follow shortly. 

You can find out more about Arthur Carabott on his website, Instagram, and X

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work at Liner Notes.

Competition – Win one year’s free membership to Music Hackspace

Dom Aversano

We are giving away a year’s free membership – to enter, all you have to do is leave a comment on this page about at least one composer or musician who has greatly influenced your approach to computer music.

We want to know two things.

  1. How has their music affected or influenced you?

  2. An example of a piece of their music you like, and a short description of why.

Anyone who completes the above will be entered into the competition on an equal basis (you are welcome to list more than one person, but this will not improve your chances of winning), with the winner assigned at random and announced on Saturday 4th of November via the Music Hackspace newsletter.

To get the ball rolling, I will provide two examples.

Kaija Saariaho / Vers le blanc

I arrived somewhat late to Kaija Saariaho’s music, attending my first live performance of it only two years prior to her death this year, yet despite this, her music has greatly influenced me in the short time I have known it.

Although I have not heard the piece in full (since it has never been released), Saariaho’s simple 1982 electronic composition Vers le blanc captured my imagination.

The composition is a 15-minute glissando from one tone cluster (ABC) to another (DEF). Saariaho used electronic voices to produce this. The composition raises questions about what is perceptible. For instance, can the change in pitch be heard from moment to moment? Can it be sensed over longer time periods?

The piece made me question what can be considered music. Are they notes if they never fix on a pitch? Can such a simple process over 15 minutes be artistically enjoyable to listen to? What would be the ideal circumstance in which to listen to such music? I experienced this music partly as an artistic object of study and meditation and partly as a philosophical provocation.

Burial / Come Down to Us

Burial’s idiosyncratic approach to technology gives rise to a unique sound. He famously stated in a 2006 interview that he used Sound Forge to create his music, without any multitrack sequencing or quantisation. This stripped-down use of technology gives the music an emotional directness and a more human feel.

I find his track Come Down to Us particularly inspiring. At 13 minutes long, it uses a binary (two-part) form for its structure. The composition uses audio samples from a transgender person, and it was only after a few years of listening that it occurred to me that the form might describe the subject. At 7 minutes the entire mood and sound of the track changes from apprehensive to triumphant, potentially describing a person undergoing — or having undergone — a psychological or physical transition. Released in 2013, this was long before the divisive culture wars and was undoubtedly intended simply as an artistic exploration.

Leave your comment below to enter the competition. Please refer to the guidelines above. The winner will be announced on Saturday 4th of November via the Music Hackspace newsletter. 

Strategies experts use to learn programming languages

Dom Aversano

"U.S. Army Photo" from the archives of the ARL Technical Library. Left: Betty Jennings (Mrs. Bartik), right: Frances Bilas (Mrs. Spence).

Learning a programming language – not least of all one’s first language – can feel intimidating, especially when observing others doing complex tasks with apparent ease. Furthermore, the circumstances in which one learns can vary greatly. One person might be 19 years old and entering a degree program with plenty of free time, while another is moonlighting on an old computer between childcare and other responsibilities. Regardless of our circumstances, we can adopt an attitude and approach to learning that allows us to make the best use of the time we have. What follows is some advice with tips from some leading music programmers and artists. 

Enjoy learning

It might sound trite, but it is essential. It is easy to motivate ourselves to do something we love. If learning is enjoyable you will do more with greater focus and energy. Create a beautiful environment to work in, inspiring projects to develop, and desirable long-term goals that are ambitious enough to keep you practising regularly. Create the conditions in which action comes naturally, since to borrow the words of Pablo Picasso, ‘Action is the foundational key to all success.’

Some people like learning by exploring and modifying existing code written by others. I envy them because I think they move faster. However, I find more pleasure in learning from the ground up, so I understand every line of code in my project. My preference is to follow a tutorial (e.g. Dan Shiffman’s) and do small exercises. – Tim Murray Browne

Learn through projects

We learn by doing. Tutorials are essential, but if they are not complemented with the development of projects you might experience ‘tutorial fatigue’, losing motivation and inspiration amid a constant reel of videos. Start with simple programs you can build quickly before working up to more complex ones. Small and simple is beautiful. 

I have a folder where I document and store all my ideas for projects. I write everything down in plain language, describing what the program will do without any consideration for how it will work. Only after this do I give some consideration to how the program might work architecturally, before deciding whether I should create it now, wait, or simply store it as an idea. Even if I never create the project, documenting my ideas shows they have a value I would not entrust to memory alone.

Love the one you’re with

It is better to learn one language expertly than five shallowly. Take time to decide what you want to learn rather than impulsively jumping in; after all, you might spend thousands of hours with the language, so you want it to align with your character and needs. Give yourself a realistic amount of time to learn it before embarking on another language, unless you genuinely have the time to learn several simultaneously.

I learned Pure Data partly because I was attracted to the way it looked. That might seem superficial but I know visual aesthetics affect me, and if I was going to look at a program for hundreds or thousands of hours I wanted to like its appearance. I now prefer traditional code, but my love for Pure Data and its black-and-white simplicity taught me to think as a coder. 

Do not worry about being mocked for asking questions – asking others for help builds relationships, strengthens the community, and can even lead to employment. If people want to put you down for asking basic questions, it says more about them than about you, so always reach out! – Elise Plans

Build a physical library

A friend who worked as a programmer for a big technology company advised me not to read books about programming, arguing that learning to program is non-linear and therefore unsuited to books. This did not work for me. We all have the same access to digital information, but physical libraries reflect our interests, priorities, and values, and act as private mental spaces. 

Although Daniel Shiffman’s books and the SuperCollider book are available for free online, I bought physical copies as I find reading from paper conducive to a quieter, less distracted, and more reflective state of mind. As it happens I often read the books in a non-linear manner, reading the chapter that seems most appealing or relevant to me at that time. My library extends out in different directions, containing musicology and biography, as well as physics and philosophy, yet all feel somehow connected. 

Read other people’s code

A revelation for most people learning to code is that there is rarely a single correct way to do something. Coding is a form of self-expression that reflects our theories and models of the world, and as with all creative activities, we eventually develop a style. Reading other people’s code exposes us to other approaches, allowing us to understand and even empathise with their creative world. Just as we read books when learning a foreign language, reading code allows us to internalise the grammar and style of good code.

Music technology and programming may seem limitless in possibility – but you quickly find limitations if you step outside of conventional concepts of what music has been defined as before. So if you aren’t running up against limitations, it’s likely you aren’t thinking in a way which is original or ambitious enough. – Robert Thomas

Be wary of the promises of AI

Machine learning is impressive, but as Joseph Weizenbaum’s famous program ELIZA, created at MIT in 1964–66, demonstrated, we have a potentially dangerous tendency to project onto machines mental capabilities that do not exist.

While learning SuperCollider I used ChatGPT to help with some problems. After the initial amazement at receiving coherent responses from a machine using natural language, a more sober realisation came to me: the code often contained basic errors, invented syntax, and impractical solutions that a beginner might not recognise as such. It was obvious to me that ChatGPT did not understand SuperCollider in the meaningful sense that expert programmers do.

Machine learning is undoubtedly going to influence the world hugely, and coding not least of all, but the current models have a slick manner of offering poor code with absolute confidence. 

Photo by Robin Parmar
For mistakes that I may have made – lots of them! All the time. It’s probably a cliché to say, but understanding your mistakes can be the best way to learn something. Although you come to think of them less as mistakes and more as happy accidents. Sometimes typing the “wrong” value can actually give you an interesting sound or pattern that you weren’t intending but that pushes you in a new creative direction. – Lizze Wilson, Digital Selves

Hopefully, some of the ideas and advice in this article have been helpful. There are of course as many ways to learn a programming language as there are people, but regardless of the path, there is always a social element to learning and collaboration. And in that spirit, if you have any advice or ideas that you would like to share, please feel free to do so in the comments below.

Build a MIDI 2.0 program using the Apple UMP API – Workshop 2 / December 6th

Date & Time: Monday 6th December 2021 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

This workshop builds on the first UMP workshop and focuses on C++ development using the new Apple UMP API. An automatic 20% discount will be applied at checkout if this workshop is purchased at the same time as the first workshop.


Difficulty level: Advanced

  • Inspect the new Apple UMP API
  • Explore what can be done with the API and where its limitations lie
  • Build a simple UMP program in C++


This workshop builds on Workshop 1, and will provide developers with knowledge and code for implementing MIDI 2.0 Universal MIDI Packet (UMP) development using the Apple UMP API in C++. The Apple UMP API will be presented and explained. Then, the participants will co-develop a simple implementation in C++ using the Apple UMP API. For that, a stub workspace will be provided. Exercises will let the participants practise the newly learned concepts. Xcode on macOS 11 is required for building the workshop code.

Learning outcomes

At the end of the workshop the participants will:

  • Be able to build MIDI 2.0 products with UMP using the Apple UMP API

Study Topics

  • Looking at the Apple UMP API
  • Extending the code from Workshop 1 with Apple i/o
  • Presenting fragments of the code in the stub workspace
  • Testing and interoperability with MIDI 1.0

Level of experience required

  • Attendees who joined workshop 1 <add link>
  • Some experience with C++ coding required
  • Attendees should be familiar with MIDI 1.0; they should have experience building and debugging applications using Xcode (macOS)

Any technical requirements for participants 

  • A computer and internet connection
  • A webcam and mic
  • A Zoom account
  • For development: Xcode on macOS 11

About the workshop leader 

Florian Bomers runs his own company Bome Software, creating MIDI tools and hardware. He has been an active MIDI 2.0 working group member since its inception. He serves on the Technical Standards Board of the MIDI Association and chairs the MIDI 2.0 Transports Working Group. He is based in Munich, Germany.

MIDI 2.0 – Introduction to the Universal MIDI Packet – Workshop 1 / November 29th

Date & Time: Monday 29th November 2021 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

This workshop is followed by two more workshops exploring the specific implementations with the Apple UMP API and the JUCE UMP API (cross-platform). An automatic 20% discount on Workshops 2 and/or 3 will be applied when purchased with this workshop.

Length: 2 hours

Difficulty level: Advanced

MIDI 2.0 is set to power the next generation of hardware and software with enhanced features for discovery, expression and faster communication. The Universal MIDI Packet (UMP) is a fundamental aspect of MIDI 2.0, which allows programs to negotiate and communicate with MIDI 1.0 and MIDI 2.0 products.

In this workshop, you will learn from a member of the MIDI Association Technical Standards Board, who wrote the specifications, how to get started working with UMP and write a simple C++ program that utilises UMP.


This workshop will provide developers with knowledge and code for starting MIDI 2.0 Universal MIDI Packet (UMP) development in C++. The concepts of UMP will be explained. Then, the participants will co-develop a first simple implementation of a generic UMP parser in plain C++. For that, a stub workspace will be provided. Exercises will let the participants practice the newly learned concepts.

Who is this workshop for:

Developers wanting to learn how the new MIDI 2.0 packet format works under the hood, and how to get started writing software for it right away.

Learning outcomes

At the end of the workshop the participants will:

  • Understand the core concepts of UMP
  • Be able to build applications in C++ using UMP

Study Topics

  • UMP Basics
  • packet format
  • MIDI 1.0 in UMP
  • MIDI 2.0 in UMP
  • Translation
  • Protocol Negotiation in MIDI-CI
  • Inspecting the UMP C++ class in the stub workspace
  • A simple UMP parser in C++
  • Unit Testing the UMP class
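To illustrate the Translation topic above: a MIDI 1.0 Channel Voice message carried in a 32-bit UMP (Message Type 0x2) can be widened into a 64-bit MIDI 2.0 Channel Voice packet (Message Type 0x4). The sketch below covers Note On only and uses a plain left shift to widen the 7-bit velocity, rather than the spec's exact up-scaling algorithm; the function name is our own.

```cpp
#include <cstdint>
#include <utility>

// Simplified sketch: translate a MIDI 1.0 Note On held in a 32-bit UMP
// (MT=0x2) into the two words of a MIDI 2.0 Note On (MT=0x4).
// Word layout per the UMP spec:
//   MIDI 1.0: [MT|group][status|ch][note][velocity]
//   MIDI 2.0: word0 = [MT|group][status|ch][note][attr type]
//             word1 = [16-bit velocity][16-bit attr data]
std::pair<uint32_t, uint32_t> noteOn1To2(uint32_t m1) {
    uint32_t group    = (m1 >> 24) & 0xF;
    uint32_t statusCh = (m1 >> 16) & 0xFF;  // 0x9c = Note On, channel c
    uint32_t note     = (m1 >> 8)  & 0x7F;
    uint32_t vel7     = m1 & 0x7F;
    // Naive widening of 7-bit velocity to 16 bits; the spec defines a
    // more precise bit-repeating up-scaling algorithm.
    uint32_t vel16 = vel7 << 9;
    uint32_t word0 = (0x4u << 28) | (group << 24) | (statusCh << 16) | (note << 8);
    uint32_t word1 = vel16 << 16;
    return {word0, word1};
}
```

For example, the MIDI 1.0 packet 0x20903C64 (Note On, group 0, channel 0, note 60, velocity 100) translates to word0 = 0x40903C00 and word1 = 0xC8000000.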

Level of experience required: 

  • Some experience with C++ coding
  • A development environment set up and ready with Xcode (macOS) or Visual Studio (Windows)
  • Working knowledge of MIDI 1.0

Any technical requirements for participants 

  • A computer and internet connection
  • A webcam and mic
  • A Zoom account
  • Xcode (macOS) / Visual Studio (Windows)

About the workshop leader 

Florian Bomers runs his own company Bome Software, creating MIDI tools and hardware. He has been an active MIDI 2.0 working group member since its inception. He serves on the Technical Standards Board of the MIDI Association and chairs the MIDI 2.0 Transports Working Group. He is based in Munich, Germany.

Max meetup – October 23rd

Date & Time: Saturday 23rd October 2021 4pm UK / 5pm Berlin / 8am LA / 11am NYC

Meetup length 2-hours

Level: Open to all levels

Meetups are a great way to meet and be inspired by the Max community.

What to expect? 

The meetup runs via Zoom and will be approx. 2-hours in length.

This session will feature presentations from expert practitioners.


Michele Zaccagnini – Beyond Jitter: audiovisuals in Max using shaders

  • Overview: In this presentation I will demystify shaders in Max, or at least whet your appetite for them. I will also present a set of tools I helped develop to port MIDI and audio to shaders and render them in all sorts of formats. While shaders can be intimidating at first, they are incredibly powerful and offer enormous possibilities for the audiovisual composer. They run entirely on the GPU and allow for completely flexible visual programming, which is very well suited to abstract visuals. After years of practicing audiovisual composition, I believe that the Max+Shaders combo is simply delicious!

Philip Meyer:  Modular Sequencing with Jamoma

  • Overview: I am in the process of building a modular system for creating dynamic musical sequences. This is the early stages of a long-term project for me to build a powerful environment in which I can create intricate, novel compositions as dynamic data systems, eschewing the need for a timeline. For this project, I decided to use the Jamoma package for the first time. This seems at present to have been a good decision – Jamoma’s “MVC” architecture is intuitive and clean, and the cueing system is working well so far. I am eager to show the group what I have made so far and gather any feedback, advice, or ideas the group may have. I’m particularly curious to hear the thoughts of anybody who has extensive experience with Jamoma. I might also be interested in bringing any collaborators or beta testers on to the project if anybody is so inclined.

Following these presentations breakout rooms are created where you can:

  • Talk to the presenters and ask questions

  • Fancy a collaboration challenge? In one of the breakout rooms, host Ned Rush will be leading 'Ready, Steady, Patch!'. Sign up to learn more!

  • Show other participants your projects, ask for help, or help others out

  • Meet peers in the chill-out breakout room

Any technical requirements for participants


  • A computer and internet connection
  • A Zoom account

Berlin Code of Conduct

We ask all participants to read and follow the Berlin Code of Conduct and contribute to creating a welcoming environment for everyone.

 Supported by Cycling ‘74

Getting started with Max – October Series

Date & Time: Wednesdays 6th / 13th / 20th / 27th October – 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

Length 2-hours

Level: Beginners curious about programming

Get started with interactive audio and MIDI, and discover the possibilities of the Max environment. In this series of workshops, you will learn how to manipulate audio, MIDI and virtual instruments, and program your own interactive canvas.

Connect together Max’s building blocks to create unexpected results, and use them in your music productions. Through a series of guided exercises you will engage in the pragmatic creation of a basic MIDI sequencer device that features a wealth of musical manipulation options.

Learn from guided examples and live interactions with teachers and other participants.

This series of online workshops aims to enable you to work with Max confidently on your own.

Sessions overview: 

Session 1 – Understand the Max environment

Session 2 – Connect building blocks together and work with data

Session 3 – Master the user interface

Session 4 – Work with your MIDI instruments

Any technical requirements for participants


  • A computer and internet connection

  • A good working knowledge of computer systems

  • Access to a copy of Max 8

About the workshop leader 

Kyle Duffield is a Toronto-based Interactive Experience Design professional who creates immersive interactive installations and brand activations. He is also known for his affiliation with the studio space Electric Perfume. His decade-plus expertise spans audio, video, creative coding, electronics, and interaction design, with the intent of bringing play and multisensory spectacle to public spaces. As an educator, he has facilitated interactive media courses and workshops with institutions, galleries, and universities across Canada, Shanghai, the UK, and online. Currently, Kyle is a Cycling '74 Max Certified Trainer and is focusing on creating unforgettable technological experiences.

TouchDesigner meetup – August 28th

Date & Time: Saturday 28th August 4pm UK / 5pm Berlin / 8am LA / 11am NYC

Level: Open to all levels

Meetups are a great way to meet and be inspired by the TouchDesigner community.

What to expect? 

The meetup runs via Zoom, the main session will be 2-hours in length with an additional hour open to the community for collaboration and sharing in breakout rooms.

This session focuses on The Future of TouchDesigner and New Media Art and will feature presentations from expert practitioners.

The meetup will be hosted by Bileam Tschepe. The theme for this meetup is 'The Future of Interactive Art', and we're pleased to confirm the lineup of speakers:

Scottie J. Fox – The Live-Edited Experience

  • Bio: Scottie is a real-time mixed visual artist and software developer from Boston, USA, specializing in improv, moment arts, dance and augmented reality
  • Description: An exploration into the possibilities and frontier of real-time illusion in the performance arts, using mixed reality of both real and digital effects to showcase what is now available at the user level, where previously this was only achievable in post-production

Karyn Nakamura – Interactive Experiments With Kinect

  • Bio: Karyn is a 20-year-old from Tokyo currently studying design at MIT! She mostly works in TouchDesigner or JavaScript and is a big fan of early net art, post-hardcore music, and modern Japanese history
  • Description: Showing some examples of interactive experiments using motion tracking with Kinect, as well as my personal future plans for the Kinect and other emerging technology

Elburz Sorkhabi – Your Career In Interactive & Immersive Media

  • Bio: Elburz Sorkhabi is the co-founder of The Interactive & Immersive HQ and one of the top TouchDesigner developers in the world. He brings insight gained leading projects for clients including Google, Kanye West, Netflix, TIFF, Burj Khalifa, Nike, Under Armour, and many more, in cities around the world including Los Angeles, New York, San Francisco, Toronto, Montreal, Dubai, Shanghai, Singapore, Tokyo, and Paris
  • Description: One of the hardest parts of making the art you dream about is actually having a career that allows you to be creative and dedicate yourself full-time to your craft. In this talk, Elburz breaks down common barriers to building a career by providing actionable advice and answering common questions about how to get gigs and make a living doing the work you love

Following these presentations breakout rooms are created where you can: 

  • Talk to the presenters and ask questions

  • Join a room on topics of your choice

  • Show other participants your projects, ask for help, or help others out

  • Collaborate with others

  • Meet peers in the chill-out breakout room

Any technical requirements for participants


  • A computer and internet connection
  • A Zoom account

Berlin Code of Conduct

We ask all participants to read and follow the Berlin Code of Conduct and contribute to creating a welcoming environment for everyone.

Supported by TouchDesigner 

Build your own modular synth with MSP – On-demand

Level: Beginner/Intermediate

Learn to program patches with MSP to make a custom modular environment.

Cycling 74’s Max / MSP offers a vast playground of programming opportunities to create your own synthesis devices. In this series you will build custom modules to create your own modular synthesis environment. This series aims to provide you with suitable skills to begin exploring synthesis and UI design in the Max MSP environment.

Series Learning Outcomes

By the end of the series a successful student will be able to:

  • Build oscillator and filter networks with MSP objects

  • Build modulation patches with MSP objects

  • Build step sequencers with Max and MSP objects

  • Explore the use of signal routing in interesting and creative ways using MSP objects

  • Build custom modules using UI objects and bpatchers

Sessions overview:

Session 1 – MSP objects for synthesis, filters and modulation

Session 2 – MSP objects to control signal routing

Session 3 – UI objects and bpatchers

Session 4 – UI objects for sequencers

Any technical requirements for participants



  • A computer and internet connection

  • A web cam and mic

  • A Zoom account

  • Access to a copy of Max 8 (i.e. trial or full license)

About the workshop leader 

Ned Rush, aka Duncan Wilson, is a musician, producer and performer. He is best known for his YouTube channel, which features a rich and vast collection of videos, including tutorials, software development, visual art, sound design, internet comedy, and of course music.