An interview with interaction designer Arthur Carabott, Part II

Dom Aversano

The Coca-Cola Beatbox Pavilion from the 2012 London Olympic Games

This is Part II of an interview with interaction designer Arthur Carabott. In Part I Arthur discussed how, after studying music technology at Sussex University, he found a job working on the Coca-Cola Beatbox Pavilion at the 2012 Olympic Games. What follows is his description of how the work evolved.

Did you conceive that project in isolation or collaboration?

The idea had already been sold and the architects had won the competition. What was known was that there would be something musical, because Mark Ronson was going to be making a song. So the idea was to build a giant instrument from a building, which everyone could play by waving their hands over giant pads. They wanted to use sports sounds and turn them into music, while having a heartbeat play throughout the building, tying everything together.

Then it came down to me playing with ideas, trying things out, and them liking things or not liking things. We knew that we had five or six athletes and a certain number of interactive points on the building.

So it was like, okay, let’s break it down into sections. We can start with running or with archery or table tennis. That was the broad structure, which helped a lot because we could say we have 40 interactive points, and therefore roughly eight interactions per sport.

Did you feel you were capable of doing this? How would you advise someone in a similar position?

Yeah, I was 25 when this started. While it’s difficult to give career advice, one thing I hold onto is saying yes to things that you’ve never done before but you kind of feel that you could probably do. If someone said we want you to work on a spaceship I’d say that’s probably a bad idea, but this felt like a much bigger version of things that I’d already done.

There were new things I had to learn, especially working at that scale. For instance, making the system run fast enough and building a backup system. I’d never done a backup system. I had just used my laptop in front of my class or for an installation. So I was definitely learning things.

If I have any natural talent it’s for being pretty stubborn about solving problems and sticking at it like a dog with a bone. Knowing that I can, if I work hard at this thing, pull it off. That was the feeling.

 

Arthur Carabott rehearsing at the Apple Store with Chagall van den Berg

How did you get in contact with Apple?

I was a resident at the Music Hackspace then and rented a desk at Somerset House. Apple approached Music Hackspace about doing a talk for their Today at Apple series.

I already had a concept for a guerrilla art piece: the idea was to make a piece of software that could play music in sync across lots of physical devices. The plan was to go around the Apple Store and get a bunch of people to load up this page on as many devices as we could, and then play a big choir piece by treating each device as a voice.

Kind of like a flash mob?

Yeah, sort of. It was inspired by an artist who used to be based in New York called Kyle McDonald, who made a piece called People Staring at Computers. His program would detect faces and then take a photo of them and email it to him. He installed this in the New York Apple stores and got them to send him photos. He ended up being investigated by the Secret Service, who came to his house and took away his computers.

However, for my thing, I wanted to bring a musician into it. Chagall was a very natural choice for the Hackspace. For the music I made an app where people could play with the timbre parameters of a synth, but with quite a playful interface which had faces on it.
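(A note for the technically curious: the in-sync playback Arthur describes above can be sketched in a few lines. The TypeScript below is not his code — the clock endpoint and the shared start time are assumptions — but it shows one plausible mechanism: each device estimates its offset from a common server clock, then schedules its voice against an agreed start time.)

```typescript
// Hypothetical sketch of device-synced playback. Assumes a server endpoint
// that replies with its current time in milliseconds as plain text.
async function estimateServerOffsetMs(clockUrl: string): Promise<number> {
  const t0 = Date.now();
  const serverNow = Number(await (await fetch(clockUrl)).text());
  const t1 = Date.now();
  // Assume the reply arrived halfway through the round trip, so the
  // server's clock reading corresponds to local time (t0 + t1) / 2.
  return serverNow - (t0 + t1) / 2;
}

// Each device calls this with the same agreed start time (in server-clock
// milliseconds) and its own voice of the choir piece.
async function playVoiceAt(startAtServerMs: number, voiceUrl: string, clockUrl: string): Promise<void> {
  const ctx = new AudioContext();
  const [offsetMs, voice] = await Promise.all([
    estimateServerOffsetMs(clockUrl),
    fetch(voiceUrl).then(r => r.arrayBuffer()).then(b => ctx.decodeAudioData(b)),
  ]);
  const src = ctx.createBufferSource();
  src.buffer = voice;
  src.connect(ctx.destination);
  // Translate the shared start time into this device's local clock and
  // schedule the voice on the audio timeline.
  const waitSec = Math.max(0, (startAtServerMs - offsetMs - Date.now()) / 1000);
  src.start(ctx.currentTime + waitSec);
}
```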

How did you end up working with the composer Anna Meredith? You built an app with her, right?

Yes, an augmented reality app. It came about through a conversation with my friend Marek Bereza, who founded Elf Audio and makes the Koala sampler app. We met up for a coffee and talked about the new AR stuff for iPhones. The SDK had just come to the iPhone and it had this spatial audio component. We were just knocking around ideas of what could be done with it.

I got excited about the fact that it could give people a cheap surround sound system by placing virtual objects in their space. Then you have — for free, or for the cost of an app — a surround sound system.

There was this weekly tea and biscuits event at Somerset House where I saw Anna Meredith and said, ‘Hey, you know, I like your music and I’ve got this idea. Could I show it to you and see what you think?’ So I came to her studio and showed her the prototype and we talked it through. It was good timing because she had her album FIBS in the works. She sent me a few songs and we talked back and forth about what might work for this medium. We settled on the piece Moon Moons, which was going to be one of the singles.

It all came together quite quickly. The objects in it are actual ceramic sculptures that her sister Eleanor made for the album. So I had to teach myself how to do photogrammetry and 3D scan them, before that technology was good on phones.
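(To make the “surround sound for the cost of an app” idea concrete: the app itself was built on Apple’s AR SDK, but the same principle can be sketched with the Web Audio API, used here purely as an illustration. Virtual sources pinned at fixed positions around the listener behave like a speaker array.)

```typescript
// Illustrative sketch (not the actual ARKit implementation): a looping
// stem becomes a virtual speaker fixed at a point in the listener's space.
function placeVirtualSpeaker(
  ctx: AudioContext,
  stem: AudioBuffer,
  x: number,
  y: number,
  z: number,
): void {
  const panner = new PannerNode(ctx, {
    panningModel: 'HRTF', // binaural rendering, convincing on headphones
    distanceModel: 'inverse',
    positionX: x,
    positionY: y,
    positionZ: z,
  });
  const src = ctx.createBufferSource();
  src.buffer = stem;
  src.loop = true;
  src.connect(panner).connect(ctx.destination);
  src.start();
}

// Four stems at the corners of the room give a quad-style surround image:
// placeVirtualSpeaker(ctx, stemA, -1, 0, -1); // front left
// placeVirtualSpeaker(ctx, stemB,  1, 0, -1); // front right
// placeVirtualSpeaker(ctx, stemC, -1, 0,  1); // rear left
// placeVirtualSpeaker(ctx, stemD,  1, 0,  1); // rear right
```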

The augmented reality app built for Anna Meredith’s album FIBS

You moved to LA. What has that been like?

It was the first time I moved to another country without a leaving date. London’s a great city. I could have stayed, and that would have been the default setting, but I felt like I took myself off the default setting.

So, I took a trip to LA to find work and I was trying to pull every connection I could. Finding people I could present work to, knocking on doors, trying to find people to meet. Then I found this company Output and I was like, ‘Oh, they seem like a really good match’. They’re in LA and they have two job openings. They had one software developer job and one product designer job.

I wrote an email and an application to both of these and a cover letter which said: Look, I’m not this job and I’m not that job. I’m somewhere in the middle. Do you want me to be doing your pixel-perfect UI? That’s not me. Do you want me to be writing optimized audio code? That’s not me either. However, here’s a bunch of my work and you can hear all these things that I can do.

I got nothing. Then I asked Jean-Baptiste from Music Hackspace if he knew any companies. He wrote an email to Output introducing me and I got a meeting.

I showed my work. The interviewer wrote my name in a notebook and underlined it. When I finished the presentation I looked at his notebook and he hadn’t written anything else. I was like, ‘Okay, that’s either a very good sign or a very bad sign’. But I got the job.

How do you define what you do?

One of the themes of my career, which has been a double-edged sword, is not being specifically one thing. In the recruitment process, what companies do is say: we have a hole in our ship, and we need someone who can plug it. Very rarely are companies in a place where they think: we could take someone on who’s interesting, and though we don’t have an explicit problem for them to solve right now, we think they could benefit what we’re doing.

The good thing is I find myself doing interesting work without fitting neatly into a box that people can understand. My parents have no idea what I do really.

However, I do have a term I like, though it’s very out of fashion: interaction designer. What that means is playing around with interaction, almost like behaviour design.

You can’t do it well without having something to play with and test behaviours with. You can try to simulate it in your head, but generally you’re limited to what you already know. For instance, you can imagine how a button works in your head, but if you ask what would happen if you controlled a MIDI parameter using magnets, you can’t know what that’s like until you do it.

What are your thoughts on machine learning and AI? How will they affect music technology?

It’s getting good at doing things. I feel like people will still do music and will keep doing music. I go to a chess club, and chess had a boom in popularity, especially during the pandemic. Beating the best human player has been solved for decades now, but people still play because they want to play chess, and they still play professionally. So it hasn’t killed humans wanting to play chess, but it’s definitely changed the game.

There is now a generation who have grown up playing against AIs and it’s changed how they play, and that’s an interesting dynamic. The interesting thing with music is that it has already been devalued. People barely pay anything for recorded music, but they still go to concerts, and though concert tickets are more expensive than ever, people are willing to pay.

I think the thing that people are mostly interested in with music is the connection, the people, the personal aspect of it. Seeing someone play music, seeing someone very good at an instrument or singing, is just amazing. It boosts your spirits. You see this in the world of guitar. A new guitarist comes along and does something and everyone goes, ‘Holy shit, why has no one done that before?’

Then you have artists like Squarepusher and Aphex Twin, who made their own patches to cut up their drum breaks. But they’re still making their own aesthetic choices about what they use. I’m not in the camp that if it’s not 100% played by a human on an instrument, then it’s not real music.

The problem with the word creativity is that it has the word create in it. So I think a lot of the focus goes on the creation of materials, whereas a lot of creativity is about listening and the framing of what’s good. It’s not just about creating artefacts. The editorial part is an important part of creativity. Part of what someone like Miles Davis did was to hear the future.

You can find out more about Arthur Carabott on his website, Instagram, and X.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work on his website, Liner Notes, X, and Instagram.

Getting Started with Max – June Series

Dates & Times: Wednesdays 2nd, 9th, 16th & 23rd of June, 6pm UK / 7pm Berlin / 10am LA / 1pm NYC – 2-hour live sessions

Level: Beginners curious about programming

Get started with interactive audio and MIDI, and discover the possibilities of the Max environment. In this series of recorded videos, you will learn how to manipulate audio, MIDI and virtual instruments, and program your own interactive canvas.

Connect Max’s building blocks together to create unexpected results, and use them in your music productions. Through a series of exercises, you will build a basic MIDI sequencer device that features a wealth of musical manipulation options.

Learn from guided examples.

This on-demand content aims to enable you to work confidently with Max on your own.

Learning outcomes: 

  • Understand the Max environment

  • Connect building blocks together and work with data

  • Master the user interface

  • Work with your MIDI instruments

Requirements

  • A computer and internet connection

  • A good working knowledge of computer systems

  • A Zoom account

  • Access to a copy of Max 8

TouchDesigner meetup 17th April – Audio visualisation

Date & Time: Saturday 17th April 5pm – 7pm UK / 6pm – 8pm Berlin

Level: Open to all levels

Join the online meetup for expert talks on audio visualisation. Meet and be inspired by the TouchDesigner community.

The meetup runs via Zoom. The main session features short presentations from TouchDesigner users. Breakout rooms are created on the spot on specific topics, and you can request a new topic at any time.

The theme for this session is Audio visualisation, hosted by Bileam Tschepe with presentations from the community.

In the breakout rooms, you can share your screen to show other participants something you’re working on, ask for help, or help someone else.

Presenters:

Name: Ian MacLachlan
Title: Terraforming with MIDI
Bio: Bjarne Jensen is an experimental audio/visual artist from the Detroit area with an interest in creating interactive systems for spatial transformation.
Name: Jean-François Renaud
Title: Generating MIDI messages to synchronize sound and visual effect in TouchDesigner
Description: Instead of using the audio analysis strategy to affect the rendering, we focus on building small generative machines using the basic properties of notes (pitch, velocity), and we look at different means of managing triggering. In the end, the goal is still to merge what you hear and what you see, and to bring it alive.
Bio: Interactive media professor at École des médias, UQAM, Montréal
Vimeo: https://vimeo.com/morpholux
Name: Bileam Tschepe
Title: algorhythm – a first look into my software
Description: I’ve been working on a tool for audiovisual live performances, and I’d like to share its current state and see if people are interested in collaborating and working with me.
Bio: Berlin based artist and educator who creates audio-reactive, interactive and organic digital artworks, systems and installations in TouchDesigner, collaborating with and teaching people worldwide.
YouTube: Bileam Tschepe

Requirements

  • A Zoom account
  • A computer and internet connection

Berlin Code of Conduct

We ask all participants to read and follow the Berlin Code of Conduct and contribute to creating a welcoming environment for everyone.


Non-linear strategies for composing with Live & M4L – On demand

Level: Intermediate – Advanced

The creative path is not a straight line. In this workshop, you will develop a workflow focused on experimental approaches, utilizing randomization, stochastic methods, polymeters, polyrhythms and more in Live and M4L. Experimental audio processing and non-linear mixing activities will be included in the compositional process to create unique sound qualities, as well as to overcome creative blocks.

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Examine various forms of (non)linear compositional strategies

  • Identify approaches that provide musical contingency

  • Select Ableton techniques & M4L devices to use in the writing process

  • Design generative methods for complex compositional systems based on the Ableton and M4L environments

Session Study Topics

  • Randomization & Stochastic methods

  • Polymeters & polyrhythms

  • Racks, Audio & MIDI FX chains

  • Max for Live LFO, Shaper, Buffer Shuffler, Multimap Pro

Requirements

  • A computer and internet connection

  • A web cam and mic

  • A Zoom account

  • Access to a copy of Ableton Live 10 Suite, or Ableton Live 10 with a Max For Live license

About the workshop leader

Simone Tanda is a musician, producer, multi-media artist, tech consultant, and educator.

Based between London and Berlin, he is currently creating music for his own project, as well as for multidisciplinary artists, film, and commercials.

Visual Music Performance with Machine Learning – On demand

Level: Intermediate

In this workshop you will use openFrameworks to build a real-time audiovisual instrument. You will generate dynamic abstract visuals within openFrameworks and procedural audio using the ofxMaxim addon. You will then learn how to control the audiovisual material by mapping controller input to audio and visual parameters using the ofxRapidLib addon.
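As a taste of that workflow — record input/output examples, train a model, then map controller input continuously — here is a toy TypeScript sketch. It is not ofxRapidLib (which is a C++ addon and uses a multilayer perceptron for regression); inverse-distance-weighted regression stands in so the idea stays visible.

```typescript
// Toy stand-in for the interactive-ML mapping workflow: record paired
// controller poses and parameter settings, then interpolate between them.
type Example = { input: number[]; output: number[] };

class SimpleRegressor {
  private examples: Example[] = [];

  // Pair a controller pose with the audio/visual parameters it should produce.
  record(input: number[], output: number[]): void {
    this.examples.push({ input, output });
  }

  // Blend recorded outputs, weighting each by inverse distance to the input.
  run(input: number[]): number[] {
    if (this.examples.length === 0) throw new Error('record() some examples first');
    const dims = this.examples[0].output.length;
    const result = new Array(dims).fill(0);
    let totalWeight = 0;
    for (const ex of this.examples) {
      const dist = Math.hypot(...ex.input.map((v, i) => v - input[i]));
      const w = 1 / (dist + 1e-6);
      totalWeight += w;
      ex.output.forEach((v, i) => (result[i] += v * w));
    }
    return result.map(v => v / totalWeight);
  }
}

// Usage: record a few poses paired with synth/visual settings, e.g.
//   reg.record([0.1, 0.9], [220, 0.5]); reg.record([0.8, 0.2], [880, 0.1]);
// then map live controller input every frame with reg.run([x, y]).
```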

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Create generative visual art in openFrameworks

  • Create procedural audio in openFrameworks using ofxMaxim

  • Discuss interactive machine learning techniques

  • Use a neural network to control audiovisual parameters simultaneously in real-time

Session Study Topics

  • 3D primitives and Perlin noise

  • FM synthesis

  • Regression analysis using multilayer perceptron neural networks

  • Real-time controller integration

Requirements

  • A computer and internet connection

  • A web cam and mic

  • A Zoom account

  • Installed version of openFrameworks

  • Downloaded addons ofxMaxim, ofxRapidLib

  • Access to MIDI/OSC controller (optional – mouse/trackpad will also suffice)

About the workshop leader 

Bryan Dunphy is an audiovisual composer, musician and researcher interested in generative approaches to creating audiovisual art. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He has recently completed his PhD in Arts and Computational Technology at Goldsmiths, University of London.

Video Synthesis with Vsynth for Max – LIVE Session

Dates: Thursdays 4th / 11th / 18th / 25th February 2021 6pm GMT

Level: Intermediate +

Overview

In this series of 4 workshops, we’ll look at how to interconnect the 80 different modules that come with Vsynth, exploring video techniques and practices that can create aesthetics associated with the history of the electronic image, but also complex patterns found in some basic functions of nature.

Vsynth is a high-level package of modules for Max/Jitter that together make a modular video synthesizer. Its simplicity makes it the perfect tool for introducing yourself to video synthesis and image processing. Since it can be connected to other parts of Max, as well as to other software and hardware, it can also become a really powerful and adaptable video tool for any kind of job.

Here’s what you’ll learn in each workshop:

Workshop 1:

Learn the fundamentals of digital video-synthesis by diving into the different video oscillators, noise generators, mixers, colorizers and keyers. By the end of this session students will be able to build simple custom video-synth patches with presets.

  • Video oscillators, mixers, colorizers.

Workshop 2: 

  • Modulations (phase, frequency, pulse, hue, among others).

In this workshop we will focus on the concept of modulation so that students can add another level of complexity to their patches. We’ll see the differences between modulating parameters of an image with simple LFOs or with other images. Some of the modulations we’ll cover are phase, frequency, pulse width, brightness & hue.

Workshop 3:

  • Filters/convolutions and video feedback techniques.

  • This third workshop is divided in two. In the first half, we’ll go in depth into what low or high frequencies actually mean in the image world. We’ll then use low-pass and high-pass filters/convolutions in different scenarios to see how they affect different images.

  • In the second half, we’ll go through a lot of different techniques that use the process of video feedback, from simple “trails” effects to more complex reaction-diffusion-like patterns!

Workshop 4:

  • Working with scenes and external controllers (audio, MIDI, Arduino).

  • In this final workshop we’ll see how to bundle several Vsynth patches/scenes with presets into just one file for live situations. We’ll also export a patch as a Max for Live device and go in depth into “external control” in order to successfully control Vsynth parameters with audio, MIDI or even an Arduino.

Requirements

  • Intermediate knowledge of Max and Jitter

  • Have the latest Max 8 installed

  • Basic knowledge of audio-synthesis and/or computer graphics would be useful

About the workshop leader

Kevin Kripper (Buenos Aires, 1991) is a visual artist and indie software developer. He’s worked on projects that link art, technology, education and toolmaking, which have been exhibited and awarded in different art and science festivals. Since 2012 he’s been dedicated to creating digital tools that extend the creative possibilities of visual artists and musicians from all over the world.

Experimental Audio FX in Max

Level: Intermediate

In this workshop you will build an experimental audio FX device that uses buffers to create a novel delay line. Experimental processing will be added to the signal path to provide unique sound design possibilities. This workshop aims to give you suitable skills to begin building unique, novel and experimental audio FX devices in the Max/MSP environment.
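The workshop builds this with Max’s buffer~, record~ and groove~ objects rather than text code, but the underlying structure is easy to sketch. Here is a minimal feedback delay line in TypeScript, for orientation only:

```typescript
// Minimal sketch of a delay line with feedback: a circular buffer whose
// oldest sample is read out, then overwritten with input plus a scaled
// copy of that delayed signal.
class FeedbackDelay {
  private buf: Float32Array;
  private writeIdx = 0;

  constructor(private delaySamples: number, private feedback: number) {
    this.buf = new Float32Array(delaySamples);
  }

  process(input: number): number {
    const delayed = this.buf[this.writeIdx]; // oldest sample = delayed output
    this.buf[this.writeIdx] = input + delayed * this.feedback; // feedback path
    this.writeIdx = (this.writeIdx + 1) % this.delaySamples;
    return delayed;
  }
}

// e.g. a 250 ms delay at 44.1 kHz with 50% feedback:
// const delay = new FeedbackDelay(Math.round(0.25 * 44100), 0.5);
// output[n] = delay.process(input[n]);
```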

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Identify MSP objects for building delay FX devices
  • Build delay line audio FX devices via buffer~, record~ and groove~
  • Build feedback and processing networks
  • Explore UI concepts and design

Requirements

  • A computer and internet connection
  • A good working knowledge of computer systems
  • A basic awareness of audio processing
  • Good familiarity with MSP
  • Access to a copy of Max 8 (i.e. trial or full license)

About the workshop leader

Ned Rush aka Duncan Wilson is a musician, producer and performer. He’s known best for his YouTube channel, which features a rich and vast quantity of videos including tutorials, software development, visual art, sound design, internet comedy, and of course music.

Artist workshop with Ned Rush: Live Sample Mangling in Max 8 – On demand

Max is Ned’s go-to environment for realising concepts for sound design and performance that are not available in other programs.

In this 2-hour workshop you will learn ways to sample and loop incoming audio from the outside world. You will create a fresh sonic palette by mutating the sound, using a variety of techniques aimed at performance and improvisation, whilst also discussing and solving problems related to improvisation set-ups and how to meet those needs.

You will explore a variety of ways to interact with sampled sound to find which method suits you best so you can realise your vision with a unique performance sampler.

Requirements

– Max 8

– Basic knowledge of Max

About the workshop leader

Ned Rush aka Duncan Wilson is a musician, producer and performer. He’s best known for his YouTube channel, which features a rich and vast quantity of videos including tutorials, software development, visual art, sound design, internet comedy, and of course music.

Video synthesis with Vsynth workshop

Level: Intermediate

In this series of four 2-hour workshops, Kevin Kripper, the author of Vsynth, explains how to interconnect the 80 different modules that come with Vsynth, exploring video techniques and practices that can create aesthetics associated with the history of the electronic image, but also complex patterns found in some basic functions of nature.

Here’s what you’ll learn in each workshop:

Lesson 1: video oscillators, mixers, colorizers.

Lesson 2: modulations (PM, FM, PWM, hue, among others).

Lesson 3: filters/convolutions and video feedback techniques.

Lesson 4: working with presets, scenes, audio and MIDI.

Vsynth is a high-level package of modules for Max/Jitter that together make a modular video synthesizer. Its simplicity makes it the perfect tool for introducing yourself to video synthesis and image processing. Since it can be connected to other parts of Max, as well as to other software and hardware, it can also become a really powerful and adaptable video tool for any kind of job.

Requirements

  • Basic knowledge of Max and Jitter
  • Have Max 8 installed
  • Familiarity with audio-synthesis or computer graphics would be useful.

About the workshop leader

Kevin Kripper (Buenos Aires, 1991) is a visual artist and indie software developer. He’s worked on several projects linking art, technology, education and toolmaking, which have been exhibited in festivals such as +CODE, Innovar, the Wrong Biennale and MUTEK, among others. In 2016 he won first place at the Itaú Visual Arts Award with his work Deconstrucento. In addition, since 2012 he’s been dedicated to creating digital tools that extend the creative possibilities of visual artists and musicians from all over the world. During 2017 he participated in the Toolmaker residency at Signal Culture (Owego, NY), and in 2018 he received a mention in the Technology Applied to Art category of the ArCiTec Award for the development of Vsynth.

https://www.instagram.com/vsynth74/

https://cycling74.com/articles/an-interview-with-kevin-kripper
