How to design a music installation – an interview with Tim Murray-Browne (part 2)

Dom Aversano

In the first part of this interview, artist Tim Murray-Browne discussed his approach to creating interactive installations, and the importance of allowing space for the agency of the audience with a philosophy that blurs the traditional artist/audience dichotomy in favour of a larger-scale collaboration.

In the second part of this interview, we discuss how artificial intelligence and generative processes could influence music in the near future and the potential social and political implications of this, before returning to the practical matters of advice on how to build an interactive music installation and get it seen and heard.

I recently interviewed the composer and programmer Robert Thomas, who envisions a future in which music behaves in a more responsive and indeterminate manner, more closely resembling software than the wax cylinder recordings that helped define 20th-century music. In this scenario, fixed recordings could become obsolete. Is this how you see the future?

I think the concept of the recorded song is here to stay. In the same way, I think the idea of the gig and concert is here to stay. There are other things being added on top and it may become less and less relevant as time goes on. Just in the way that buying singles has become less relevant even though we still listen to songs. 

I think the most important thing is having a sense of personal connection and ownership. This comes back to agency, where I feel I’m expressing myself through the relationship with this music or belonging to a particular group or community. What I think a lot of musicians and people who make interactive music get wrong is that, because they take such joy and pleasure in being creatively expressive, they think they can somehow give that joy to someone else without figuring out how to give them some kind of personal ownership of what they’re doing.

As musicians it’s tempting to think we can make a track, then create an interactive version, and that someone’s going to listen to that interactive version, remix it live or change aspects of it, and have a personalised experience that is going to be even better because they had creative agency over it.

I think there’s a problem with that because you’re asking people to do some of the creative work but without the sense of authorship or ownership. I may be wrong about this because in video games you definitely come as an audience and explore the game and develop skill and a personal style that gives you a really personal connection to it. But games and music are very different things. Games have measurable goals to progress through, often with metrics. Music isn’t like that. Music is like an expanse of openness. There isn’t an aim to make the perfect music. You can’t say this music is 85% good.

How do you see the future?

I agree with Robert in some sense, but where I think we’re going to see the song decline in relevance has less to do with artists creating interactive versions of their work and more to do with people using AI to completely appropriate and remix existing musical works. When those tools become very quick and easy to use I think we will see the song transform into a meme space instead. I don’t see any way to avoid that. I think there will be resistance, but it is inevitable.

In the AI space, there are some artists who are seeing this coming and trying to make the most of it. So instead of trying to stop people from using AI to rip off their work, they’re trying to get a cut of it. Like saying, okay, you can use my voice but you’ll give me royalties. I’ve done all of this work to make this voice, it’s become a kind of recognizable cultural asset and I know I’m going to lose control of it, but I want some royalties and to own the quality of this vocal timbre.

Is there a risk in deskilling, or even populism, in a future where anyone can make profound changes to another person’s creative work? The original intention of copyright law was to protect artists’ work from falling out of their hands financially and aesthetically. The supposed democratisation of journalism has largely defunded and deskilled an important profession and created an economy for much less skilled influencers and provocateurs. Might not the same happen to music?

The question of democratisation is problematic. For instance, democracy is good, but there are consequences when you democratise the means of production, particularly in the arts where a big part of what we’re doing is essentially showing off. Once the means of production are democratised, then those who have invested in the skills previously needed lose that capacity to define themselves through them. Instead, everyone can do everything and for this short while, because we’re used to these things being scarce, it suddenly seems like we’ve all become richer. Then pretty soon, we find we’re all in a very crowded room trying to shout louder and louder. It’s like we were in a gig and we took away the stage and now we’re all expecting to have the same status that the musician on the stage had.

I can see your concerns with that. But even as music transforms from being a produced thing into something very quickly made with AI tools by people who aren’t professionals, if you’re a professional musician there will still be winners and losers, and those winners and losers will in part be those who are good at using the tools. There will be those with some kind of artistic vision. And there’ll be those who are good at social media and networking, and good at understanding how to make things go viral.

It’s not that different from how music is now. It takes more than musical talent to become a successful artist: you’ve got to build relationships with your fans and do all of these other things which maybe you could get away with not doing so much in the past.

Let’s return to the original theme of what makes for a good installation. What advice would you give to someone in the same position now that you were in just over a decade ago when starting Cave of Sounds?

In 2012, when we started building Cave of Sounds, Music Hackspace was a place for people to build things. This was fundamental for me. People there were making software and hardware, and there was this sort of default attitude of ‘we built it, now we’re going to show somebody. We’re going to get up at the front of the room and talk to you about this thing, and maybe play some music on it.’

I find the term installation problematic because it comes from this world of the art gallery and of having a space and doing something inside the space where it can’t necessarily just be reduced to a sculpture or something. Whereas, for me, it was just a useful word to describe a musical device where the audience is going to be actively interacting with it, rather than sitting down and watching a professional interact with it. It marks that shift from a musician on a stage to an audience participating in the work.

I don’t think it necessarily has to begin with a space. It needs a curiosity of interaction. Maybe I’m just projecting what I feel, but what I observed at Music Hackspace is people taking so much enjoyment in building things, and less time spent performing them. Some people really want to get up and perform as musicians. Some people really want to build stuff for the pleasure of building. 

How do you get an installation out into the world?

How to get exhibited is still an ongoing mystery to me, but I will say that having past work that has succeeded means people are more likely to accept new work based on a diagram and description. Generally, having a video of a piece makes it much more likely for people to want to show it. The main place things are shown is in festivals, more than galleries or museums. Getting work into a festival is a question of practical logistics: How many people are going to experience it and how much space and resources does it demand? And then festivals tend to conform to bigger trends – sometimes a bit too much, I think, as they end up all showing quite similar works. When we made Cave of Sounds, DIY hacker culture and its connection to grassroots activism was in the air. Today, the focus is the environment, decolonisation, and social justice. Tomorrow there will be other things.

Then, there’s a lot of graft, and a lot of that graft is much easier when you’re younger than when you’re older. I don’t think I could go through the Cave of Sounds process today like I did back then. I’m very happy I did it back then.

What specifically about the Cave of Sounds do you think made it work?

The first shocking success of Cave of Sounds is that when we built it we had a team of eight, and I had a very small fee because I was doing this artist residency, but everyone else was an unpaid volunteer or collaborating artist. And we worked together for eight months to bring it together.

A lot of people came to the first meeting, but from the second meeting onward, the people who turned up were the eight who made the work and stuck through to the end. I think there’s something remarkable about that. Something about the core idea of the work really resonated with those people, and I think we got really lucky with them. And there was a community that they were embedded in as well. But the fact that everyone made it to the end shows that there was something kind of magical in the nature of the work and the context of that combination of people.

So a work like Cave of Sounds was possible because we had a lot of people who were very passionate, and we had a diversity of skills, but we also had a bit of an institutional name behind us. We had a budget as well, but it was very small and mostly covered materials; a significant amount of labour went into that piece, and it came from people working for passion.

Do you have a dream project or a desire for something you would like to do in the future?

For the past few years I’ve been exploring how to use AI to interpret the moving body so that I can create physical interaction without introducing any assumptions about what kind of movement the body can make. So if I’m making an instrument by mapping movement sensors to sound, I’m not thinking ‘OK this kind of hand movement should make that kind of sound’ but instead training an AI on many hours of sensor data where I’m just moving in my own natural way and asking it ‘What are the most significant movements here?’
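As a rough illustration of this kind of unsupervised approach, one could run a principal component analysis over raw sensor frames to surface the dominant patterns of coordinated movement, rather than hand-designing a mapping. Everything below (the channel count, the synthetic data) is invented for illustration; it is not the model Tim actually uses.

```python
import numpy as np

# Hypothetical stand-in for hours of movement-sensor recordings:
# rows are time frames, columns are sensor channels.
rng = np.random.default_rng(0)
frames = rng.normal(size=(1000, 12))
# Make two channels move together so a dominant pattern exists,
# as if two sensors on the same limb track one natural gesture.
frames[:, 1] = frames[:, 0] * 0.9 + rng.normal(scale=0.1, size=1000)

# Centre the data and take its principal components via SVD.
centred = frames - frames.mean(axis=0)
_, singular_values, components = np.linalg.svd(centred, full_matrices=False)

# The first component is the single strongest pattern of
# coordinated movement across all sensors - a candidate answer
# to 'what are the most significant movements here?'
dominant_pattern = components[0]
print(dominant_pattern.round(2))
```

Projecting live sensor frames onto `dominant_pattern` would then yield a control signal derived from the mover’s own habits rather than a designer’s assumptions, which is the shift in feeling Tim describes.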

I’m slightly obsessed with this process. It’s giving me a completely different feeling when I interact with the machine, like my actions are no longer mediated by the hand of an interaction designer. Of course, I’m still there as a designer, but it’s like I’m designing an open space for someone rather than boxes of tools. I think there’s something profoundly political about this shift, and I’m drawn to that because it reveals a way of applying AI to liberate people to be individually themselves, rather than using it to make existing systems even more efficient at being controlling and manipulative which seems to be the main AI risk I think we’re facing right now. I could go on more as well – moving from the symbolic to the embodied, from the rational to the intuitive. Computers before AI were like humans with only the left side of the brain. I think they make humans lose touch with their embodied nature. AI adds in the right side, and some of the most exciting shifts I think will be in how we interact with computers as much as what those computers can do autonomously.

So far, I’ve been exploring this with dancers, having them control sounds in real-time but still being able to dance as they dance rather than dancing like they’re trapped inside a land of invisible switches and trigger zones. And in my latest interactive installation Self Absorbed I’ve been using it to explore the latent space of other AI models, so people can morph through different images by moving their bodies. But the dream project is to expand this into a larger multi-person space, a combined virtual and physical realm that lets people influence their surroundings in all kinds of inexplicable ways by using the body. I want to make this and see how far people can feel a sense of connection with each other through full-body interfaces that are too complicated to understand rationally but are so rich and sensitive to the body that you can still find ways to express yourself.

To find out more about Tim Murray-Browne you can visit his website or follow him on Substack, Instagram, Mastodon, or X.

Ask Me Anything: Max MSP’s gen~ and rnbo~

In this Ask Me Anything session, Massi Cerioni answers questions about Max MSP gen~ and rnbo~, which bridge the gap between Max prototyping and development, with powerful code export features.

How to design a music installation – an interview with Tim Murray-Browne (part 1)

Dom Aversano

I met artist and coder Tim Murray-Browne just over a decade ago, shortly after he was made artist in residence for Music Hackspace. Tall, thin, with a deep yet softly-spoken voice, he stood up and gave a presentation to an audience of programmers, academics, musicians, and builders, in a room buzzing with anticipation. The setting was a dingy studio in Hoxton, East London, prior to the full-on gentrification of the artistic neighbourhood.

Tim’s idea for a project was bold: He had no idea. Or to be more precise, his idea was to have no idea. Instead, the idea would emerge from a group. There were quizzical looks in the audience, and questions to confirm that the idea was indeed to have no idea. For an artistically audacious idea, this was a good audience, comprised as it was of open-minded, radical, and burningly curious people. By the meeting’s end an unspoken consensus of ‘let’s give this a go’ seemed to have quietly been reached.

Tim’s faith in his concept was ultimately vindicated since the installation that emerged from this process, Cave of Sounds, still tours to this day. Created by a core group of eight people — myself one of them — it has managed to stay relevant amid a slew of socio-political and technological changes. As an artist, Tim has continued to make installations, many focusing on dance, movement, and the human body, as well as more recently, AI.

I wanted to reflect back on this last decade, to see what had been learned, what had changed, what the future might hold, and above all else, how one goes about creating an installation.

What do you think are the most important things to consider when building an interactive installation?

First, you need some kind of development over time. I used to say narrative though I’m not sure if that is the right word anymore, but something needs to emerge within that musical experience. A pattern or structure that grows. Let’s say someone arrives by themselves, maybe alone in a room, and is confronted with something physical, material, or technological, and the journey to discover what patterns emerge has begun. Even though an installation is not considered a narrative form, any interaction is always temporal.

The second has to do with agency. It’s very tempting as an artist to create a work having figured out exactly what experience you want your audience to have, and to think that it’s going to be an interactive experience even though you’ve already decided it. Then you spend all your time locking down everything that could happen in the space to make sure the experience you envisaged happens. I think if you do this you may as well have made a non-interactive artwork, as I believe the power of interactivity in art lies in the receiver having agency over what unfolds.

Therefore, I think the question of agency in music is fundamental. When we are in the audience watching music a lot of what we get out of it is witnessing someone express themselves skillfully. Take virtuosity, that comes down to witnessing someone have agency in a space and really do something with it.

How exactly do you think about agency in relation to installations?

In an interactive installation, it’s important to consider the agency of the person coming in. You want to ask, how much freedom are we going to give this person? How broad is the span of possible outcomes? If we’re doing something with rhythm and step sequencing are we going to quantise those rhythms so everything sounds like a techno track? Or are we going to rely on the person’s own sense of rhythm and allow them to decide whether to make it sound like a techno track or not?
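The design choice described here, how strongly to quantise, can be reduced to a single parameter between ‘everything snaps to the grid and sounds like a techno track’ and ‘the person’s own timing is preserved’. This is a hypothetical sketch, not code from any installation:

```python
def quantize(onsets, grid=0.25, strength=1.0):
    """Pull note onset times (in beats) toward a grid.

    strength=1.0 snaps fully, so everything lands exactly on the grid;
    strength=0.0 leaves the visitor's own sense of rhythm untouched;
    values in between keep some human feel while tidying the result.
    """
    quantized = []
    for t in onsets:
        nearest = round(t / grid) * grid  # closest grid point
        quantized.append(t + (nearest - t) * strength)
    return quantized

# A loosely played rhythm, in beats:
played = [0.02, 0.51, 0.98, 1.27]
print(quantize(played, grid=0.25, strength=1.0))  # fully snapped
print(quantize(played, grid=0.25, strength=0.5))  # halfway: looser feel
```

Exposing `strength` to the installation designer (or even to the visitor) makes the agency trade-off explicit rather than baked in.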

It all comes down to the question of what is the point of it being interactive. While it is important to have some things be controllable, a lot of the pleasure and fun of interactive stuff is allowing for the unexpected, and therefore I find the best approach when building an installation is to get it in front of unknown people as soon as possible. Being open to the unexpected does not mean you cannot fail. An important reason for getting a work in front of fresh people is to understand how far they are getting into the work. If they don’t understand how to affect and influence the work then they don’t have any agency, and there won’t be any sense of emergence.

Can you describe music in your childhood? You say you sang in choirs from the age of six to twelve. What was your experience of that?

At the time it burnt me out a little but I’m very thankful for it today. It was very much tied to an institution. It was very institutional music and it was obligatory. I was singing in two to three masses a week and learning piano and percussion. I stopped when I was about 13. I had a few changes in life, we moved country for a little bit and I went to a totally different kind of school and environment. It wasn’t until a few years later that I picked up the piano again, and only really in the last couple of years have I reconnected with my voice.

Your PhD seemed to be a turning point for you and a point of re-entry into music. Can you describe your PhD, and how that influenced your life?

I began doing a PhD looking at generative music, and as I was trying to figure out what the PhD would be I had an opportunity to do a sound installation in these underground vaults in London Bridge Station with a random bunch of people in my research group. They were doing an installation there and someone had some proximity sensors I could use. There was an artist who had some projections which were going up and I made a generative soundscape for it. Being in the space and seeing the impact of that work in a spatial context really shifted my focus. I felt quite strongly that I wanted to make installations rather than just music, and I reoriented my PhD to figure out how to make it about that. I was also confronted with the gulf of expectation and reality in interactive art. I thought the interactivity was too obvious if anything, but then as I sat and watched people enter the space, most did not even realise the piece was interactive.

How do these questions sit with you today?

From an academic perspective, it was a really terrible idea because a PhD is supposed to be quite focused, and I was questioning how you can make interactive music more captivating. I had this sense in my head of what an interactive music experience could be, and it was as immersive, durational and gripping as a musical experience. Nearly every interactive sound work I was finding ended up being quite a brief experience – you kind of just work out all the things you can do and then you’re done.

I saw this pattern in my own work too. My experience in making interactive sound works was much more limited back then, but I saw a common pattern of taking processes from recorded music and making them interactive. My approach was to ask ‘Well what is music really? Why do we like it?’ and all kinds of answers come up about emerging structures, belonging, and self-expression, so then the question was how we can create interactive works that embody those qualities within the interactivity itself.

What it left me with was not such a clear pathway into academia, because I hadn’t arrived at some clear and completed research finding, but what I had done was immerse myself so fundamentally in trying to answer this question: how can I make captivating interactive music experiences?

What did you find?

On the question of interaction with technology, I think the most fundamental quality of technology is interaction, human-computer interaction. How is it affecting us? How are we affecting it? How does that ongoing relationship develop?

There is so much within those questions, and yet interactivity is often just tacked on to an existing artwork or introduced in a conventional way because that is how things are done. In fact, the way you do interactivity says a lot about who you are and how you see the world. How you design interaction is similar to how you make music, there are many ways, and each has a political interpretation that can be valuable in different contexts.

Who has influenced you in this respect?

The biggest influence on me at the point where I’d finished my PhD and commenced Cave of Sounds was the book Musicking by Christopher Small.

The shift in mindset goes from thinking that music is something being done by musicians on a stage and being received by everyone else around them, to being a collective act that everybody’s participating in together, and that if there weren’t an audience there to receive it the musician couldn’t be participating in the same music.

What I found informative is to take a relativist view on different musical cultures. Whether it is a rock concert, classical concert, folk session, or jazz jam, you can think of them as being different forms of this same thing, just with different parameters of where the agency is.

For instance, if you’re jamming with friends in a circle around a table there is space for improvisation and for everybody to create sound. This has an egalitarian nature to it. Whereas with an orchestra there is little scope for the musicians to choose what notes they play, but a huge scope for them to demonstrate technical virtuosity and skill, and I don’t think there’s anything wrong with that. I love orchestral music. I think there is beauty to the coordination and power. I can see how it could be abused politically, but it’s still a thing that I feel in my body when I experience it, and I want to be able to access that feeling.

What I’m most suspicious about are stadium-level concerts. The idolisation of one individual on a stage with everyone in the crowd going emotionally out of control. It is kind of this demagogue/mob relationship. People talk about these Trump rallies as if they’re like rock concerts, and it’s that kind of relationship that is abused politically.

You can read more of this interview in Part 2 which will follow shortly, where we discuss the future of music as well as practical advice for building installations. To find out more about Tim Murray-Browne you can visit his website or follow him on Instagram or X.

Exploring the 2023 MIDI Innovation Awards

Dom Aversano

In Jaron Lanier’s cult classic technology manifesto, You Are Not a Gadget, the writer outlines a concept he calls lock-in: when, through mass adoption, a technology becomes so deeply embedded in a culture that it becomes difficult to improve or remove without massive effort, even if its design is fundamentally flawed. The British road system exemplifies this, with its twisting, narrow lanes designed for horse-drawn carts, which, while somewhat charming relics of a bygone era, can make it impossible to create separated bike lanes without bulldozing entire sections of cities. Lanier provides another example, MIDI, which he perceives as a reductive and delimiting music language that shrinks our conception of music to the functioning of keyboards, yet which he predicts will persist well into the future, due to the huge work it would take to extract it from our musical infrastructure.

More than a decade after Lanier’s book was published, his prediction that MIDI would persist is vindicated. However, Lanier may have underestimated the extent to which, unlike the British road system, MIDI can transform itself without a major uprooting, which is the intention of MIDI 2.0. Anyone who has followed the non-starter of Web 3.0 will know that technological advancement requires more than adding a number and a decimal place to an existing technology. However, this new version of MIDI offers genuinely new capabilities, such as bidirectionality, backwards compatibility, a finer resolution of detail, and the capacity for instruments to communicate with greater sophistication.
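To make the resolution point concrete: MIDI 1.0 carries values such as velocity in 7 bits (128 steps), while MIDI 2.0 widens many of them to 16 or 32 bits. The sketch below widens a 7-bit value by bit replication so that 0 and 127 map to the extremes of the 16-bit range; it is illustrative only, and the MIDI 2.0 specification defines its own translation rules.

```python
def widen_7_to_16(value):
    """Widen a 7-bit MIDI 1.0 value (0-127) to 16 bits (0-65535)
    by repeating its bit pattern, so 0 -> 0 and 127 -> 65535.
    Illustrative only: not the mapping mandated by the MIDI 2.0 spec.
    """
    if not 0 <= value <= 127:
        raise ValueError("7-bit MIDI values are 0-127")
    # Fill 16 bits with the 7-bit pattern: vvvvvvv vvvvvvv vv
    return (value << 9) | (value << 2) | (value >> 5)

print(widen_7_to_16(0))    # 0
print(widen_7_to_16(127))  # 65535
```

The jump from 128 steps to 65,536 is what allows, for example, a smooth crescendo where MIDI 1.0 would produce audible stair-stepping.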

Browsing through the entrants and finalists on the MIDI Association website reminded me of the show-and-tell-type events Music Hackspace put on in its early days. There is a nice balance between slick and sophisticated products built by established companies and eccentric innovations made in a shed by a devoted individual. As is the nature of these things, most of the innovations will not make their way to the mass market (presuming they were designed for it at all) but this does not detract from the creative value of the work. It is inspiring to see people make the brave effort of taking ideas from their imagination and putting them into the real world, so providing an audience for their efforts helps motivate and stimulate this innovation, by demonstrating that it has value and importance in our culture.

For the last few days, I have had the pleasure of indulging in a kind of digital sauntering, where I have explored and browsed through the wonderful collection of innovations on display. One original-looking instrument that immediately caught my eye is the Abacusynth, which as its name suggests, is built in the style of an abacus. The synth is intended to emphasise musical timbre with its creator stating:

“Timbral modulation is arguably just as ‘musical’ as melody or rhythm, but it’s not often emphasized for someone learning music, usually due to the complexity of synthesizer interfaces”.

An ingenious aspect of its interface design is that spinning the blocks creates a modulation effect, aligning the visual and kinetic aspects of the instrument in a playful and intuitive way.

Another visually appealing instrument is the Beat Scholar, which uses a novel pizza-slice-type interface to subdivide rhythms, provoking the visual imagination and making the likes of quintuplets and septuplets subdivisions much less intimidating. It is a much more visually appealing representation of rhythm than your average piano roll sequencer, where the interface for advanced rhythms often feels like an afterthought. 

When it comes to slickness, Roland’s AE-30 Aerophone Pro jumps out, with the company claiming it is ‘the most fully-integrated and advanced MIDI wind controller ever created.’ It uses a saxophone key layout and mouthpiece, and a Bluetooth connection to free players up to move. It looks and sounds like a promising alternative to the keyboard and drum machine hegemony of electronic music, but whether it is adopted will ultimately rest on the opinion of seasoned wind players.

Finally, a music installation that stood out for its elegantly simple design is Sound Sculpture, in which a crowd collaborates to move glowing blocks around; the blocks communicate their positions to form a sequencer that creates a musical pattern. Watching people collaborate with strangers in this audio/visual artwork is particularly inspiring.

“This project utilizes 25 cubes in a space typically about the size of a half-basketball court. This spatial realization of composing, with blocks, allows multiple people to collaborate, co-compose as a community, and together create structures, rhythms, melodies, and harmonies.”

Whether you are in the depths of Argentina’s Patagonia or the buzzing metropolis of Lagos, you can join online to find out who the winners of this year’s MIDI Innovation Awards are, in a live-streamed 90-minute show on Saturday, September 16th (10 am PDT / 1 pm EDT / 6 pm BST / 7 pm CET) hosted by music YouTubers Tantacrul and Look Mum No Computer.

Ask me Anything about Visualizing Sound

Ask me Anything about Interactive Installations

Creating soundtracks to transform the taste of wine

Dom Aversano

When I was asked to interview Soundpear I questioned if I was the right person for the job. The company specialises in composing music to enhance the flavour of wine at their tasting events in Greece, stating that they ‘meticulously design bespoke music to match the sensory profile of a paired product.’ I on the other hand am almost proudly philistine about wine, only drinking it at events and parties when it is put into my hand. I find the rituals and mystification of this ancient grape juice generally more off-putting than alluring, especially given how studies show doing as little as changing a cheap-looking label on a bottle for an expensive one or putting red dye into white wine is sufficient to change the opinions of even seasoned wine drinkers and sommeliers.

Yet, perhaps who better to do the interview than someone whose preferred notes are not the subtle hints of caramel, oak, or cherry, but the opening riff of John Coltrane’s Giant Steps.

Despite my scepticism, I was interested in talking to the company as the connection between music and taste is one that is rarely explored.

The three of us met on Zoom, each calling from a different European country. Asteris Zacharakis lives in Greece and is a researcher at the School of Music Studies at the Aristotle University of Thessaloniki, as well as an amateur winemaker, whereas Vasilis Paras is a music producer and multi-instrumentalist living outside of London. While the pair originally met playing in a band twenty years ago, their collaboration now involves Asteris hosting wine-tasting events in Greece, while Vasilis composes bespoke music for each variety of wine sampled.

Our conversation turns quickly to the science supporting the idea that the taste of wine can be enhanced by sound. Asteris has a passion for multimodal perception — a science that studies how our senses process in combination rather than in isolation. A famous example is the McGurk Effect, which shows that when a person sees a video of someone uttering a syllable (e.g., ga ga) but hears an overdub of a different-sounding syllable (e.g., ba ba) this sensory incongruence results in the perception of a third non-existing syllable (da da).

‘There is evidence that if you sit around a round table with no corners, it’s easier to come into agreement with your colleagues than if there are angles.’

Regarding how this could allow us to experience things differently, Asteris describes: ‘It’s been shown through research that by manipulating inputs from various senses we can obtain more complex and interesting experiences. This does not just work in the laboratory, it’s how our brains work.’

Soundpear treats the drinking of wine and listening to music as a unified experience, similar to how films unify moving images and music. I am curious how the science translates directly into Soundpear’s work since musicians and winemakers must have worked in this way for centuries — if only guided by intuition. Surely a person drinking wine on a beautiful hilltop village in the South of France while listening to a musician playing the violin is having a multimodal experience? Asteris is quick to clarify that far from being exclusive, multimodal perception occurs all the time, and is not dependent on some specialist scientific understanding.

‘Musicians become famous because they do something cognitively meaningful and potentially novel, but I doubt that in all but a few cases they’re informed by the science, and they don’t need to be. Take a painter and their art. If a neuroscientist goes and analyses what the painter is doing, they could come up with some rules of visual perception they believe the artist is taking advantage of. However, successful artists have an inherent understanding of the rules without having the scientific insight of a neuroscientist.’

Multimodal perception offers insights into how sound affects taste. For example, high notes can enhance the taste of sourness, while low notes enhance our sense of bitterness. Vasilis recounts how initially the duo had experimented with more complex recorded music but decided to strip things down and use simple electronic sounds.

‘We thought, why don’t we take this to the absolute basic level, like subtractive synthesis?

‘Let’s start with sine waves, and tweak them to see how people respond. What do they associate with sweetness? What do they associate with sourness, and how do these translate in the raw tone? People can generally agree that certain sounds are sour. From that, we try to combine these techniques to create more complicated timbres that represent more complicated aromas, until we work our way up to a bottle of wine.’

Asteris joins in on this theme: ‘For example, the literature suggests that we tend to associate a sweet taste or aroma with consonant timbres, whereas saltiness corresponds to more staccato music, and bitterness is associated with lower frequencies and rough textures. Based on this, we knew if we wanted to make the sonic representation of a cherry aroma it needed to be both sweet and sour. So we decided we should combine a dissonant component to add some sourness and at the same time a concordant component to account for the sweetness.’
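The approach the pair describe — building a timbre from sine waves, mixing consonant partials for sweetness with dissonant ones for sourness — can be sketched in a few lines of code. The following is a minimal illustration only: Soundpear’s actual tools are not public, and the frequency ratios and mix weights here are assumptions chosen to demonstrate the idea.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def tone(freq, duration=2.0, amp=0.3):
    """Generate a sine wave at the given frequency in Hz."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    return amp * np.sin(2 * np.pi * freq * t)

# 'Sweet' component: consonant partials (a major triad above a 440 Hz root).
sweet = tone(440) + tone(440 * 5 / 4) + tone(440 * 3 / 2)

# 'Sour' component: a dissonant partial roughly a semitone above the root,
# which beats against it and roughens the timbre.
sour = tone(440 * 16 / 15)

# A crude 'cherry' blend: mostly sweet, with some sourness mixed in.
cherry = 0.7 * sweet + 0.3 * sour
cherry /= np.max(np.abs(cherry))  # normalise to avoid clipping
```

The resulting array could be written to a WAV file or played back directly; the interesting experimental work, as Asteris notes, is testing whether listeners actually agree that such a blend evokes the intended aroma.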

They tested these sounds on each other but also experimented with participants. Asteris describes their process: ‘From our library of sounds we pick some and perform experiments in an academic lab environment, to either confirm or disprove our hypotheses. Our sound–aroma correspondence assumptions are proven right in some cases, but in other cases where participants don’t agree with our assumed association, we discard it and say:

“Okay, we thought that sound would be a good representative for this scent but apparently it’s not.”’

I ask if anyone can try pairing their music with wine. Vasilis is hesitant about this, pointing out that while they have a publicly available playlist on YouTube, using it as intended would require listeners to seek out specific bottles of wine. When I ask if these could be interchangeable with other bottles, he draws a comparison with film music, stating that while you could theoretically swap one film score for another, it would likely clash.

At this point, I feel my initial resistance giving way. Suddenly the thought of basking in the Greek sun listening to music and drinking wine feels much more appealing — maybe being a wine philistine is overrated. What I find refreshing about the duo is that they are not overplaying the science, but appear to actually be having fun combining their talents to explore a new field between taste and music. It is not the cynical banalisation of music that Spotify often promotes, with playlists bearing names like ‘Music for your morning coffee’. Rather than treating the experience as an afterthought, Soundpear is designing its music specifically for it.

However, one question still lingers — I ask how much they believe their work can carry across cultures. Asteris accepts that neither the effect of the music nor the taste of the wine can be considered universal, and that their appeal is largely to audiences from Western cultures. It is an honest answer, and not surprising given that music and drink rarely have genuinely global appeal anyway, especially since alcohol is illegal or taboo throughout much of the world.

So, what of the music?

Vasilis composes with a certain mellifluous euphoria reminiscent at times of Boards of Canada and the film composer Michael Giacchino’s soundtrack for Inside Out, though with a more minimalist timbral palette than either. The tone and mood seem appropriate for accompanying a feeling of tipsiness and disinhibition. I even detect a subtle narrative structure that I assume accompanies the opening of the bottle, the initial taste, and the aftertaste. It is not hard to imagine the music working in the context of a tasting session, and people enjoying themselves.

Soundpear appears to be attempting to broaden how we combine our senses with the goal of opening people up to new experiences, which regardless of whether you are interested in wine or not is undoubtedly interesting. It is an invitation to multidisciplinary collaboration since the principles applied to wine could just as easily be applied to coffee, architecture, or natural landscapes. The attention they bring to multimodal perception makes one question whether music could be used in new ways, and that can only be a good thing.

Music Hackspace will host a workshop with Soundpear on Friday 22nd September at 6pm UK time.

The sound of wine: transform your wine-tasting experiences through music-wine pairing

Soundpear are planning a music-wine pairing event at the Winemakers of Northern Greece Association headquarters in Thessaloniki this October – so stay tuned for more details!
