Music Hackspace Christmas Quiz

Dom Aversano

History

  1. Which 19th-century mathematician predicted computer-generated music?
  2. What early electronic instrument did Olivier Messiaen use in his composition Trois petites liturgies de la Présence Divine?
  3. Who invented FM synthesis? 
  4. What was the first name of the French mathematician and physicist who invented Fourier analysis?
  5. Oramics was a form of synthesis invented by which British composer?

Synthesis

  1. What is the name given to an acoustically pure tone?
  2. What music programming language was named after a particle accelerator?
  3. What synthesiser did The Beatles use on their 1969 album Abbey Road?
  4. What microtonal keyboard is played in a band scene in La La Land?
  5. What are the two operators called in FM synthesis?

Music 

  1. What was the name of the breakbeat that helped define jungle/drum and bass?
  2. IRCAM is based in which city?
  3. Hip hop came from which New York neighbourhood?
  4. Which genre-defining electronic music record label originated in Sheffield?
  5. Sónar Festival happens in which city?

General 

  1. Who wrote the book Microsound?
  2. Who wrote the composition Kontakte?
  3. How many movements does John Cage’s 4’33” have?
  4. Who wrote the book On the Sensations of Tone?
  5. Which composer wrote the radiophonic work Stilleben?

Scroll down for the answers!

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Answers

History

  1. Ada Lovelace
  2. Ondes Martenot
  3. John Chowning
  4. Joseph 
  5. Daphne Oram 

Synthesis

  1. Sine wave
  2. SuperCollider
  3. The Moog
  4. Seaboard
  5. Carrier and Modulator 

Music 

  1. Amen Brother
  2. Paris
  3. The Bronx
  4. Warp Records
  5. Barcelona

General

  1. Curtis Roads
  2. Karlheinz Stockhausen
  3. Three
  4. Hermann von Helmholtz
  5. Kaija Saariaho

How to design a music installation – an interview with Tim Murray-Browne (part 1)

Dom Aversano

I met artist and coder Tim Murray-Browne just over a decade ago, shortly after he was made artist-in-residence for Music Hackspace. Tall and thin, with a deep yet softly spoken voice, he stood up and gave a presentation to an audience of programmers, academics, musicians, and builders, in a room buzzing with anticipation. The setting was a dingy studio in Hoxton, East London, before the full-on gentrification of that artistic neighbourhood.

Tim’s idea for a project was bold: he had no idea. Or to be more precise, his idea was to have no idea. Instead, the idea would emerge from a group. There were quizzical looks in the audience, and questions to confirm that the idea was indeed to have no idea. For an artistically audacious proposal this was a good audience, comprised as it was of open-minded, radical, and burningly curious people. By the meeting’s end an unspoken consensus of ‘let’s give this a go’ seemed to have been quietly reached.

Tim’s faith in his concept was ultimately vindicated since the installation that emerged from this process, Cave of Sounds, still tours to this day. Created by a core group of eight people — myself one of them — it has managed to stay relevant amid a slew of socio-political and technological changes. As an artist, Tim has continued to make installations, many focusing on dance, movement, and the human body, as well as more recently, AI.

I wanted to reflect back on this last decade, to see what had been learned, what had changed, what the future might hold, and above all else, how one goes about creating an installation.

What do you think are the most important things to consider when building an interactive installation?

First, you need some kind of development over time. I used to say narrative, though I’m not sure that is the right word anymore, but something needs to emerge within that musical experience. A pattern or structure that grows. Let’s say someone arrives by themselves, maybe alone in a room, and is confronted with something physical, material, or technological, and the journey to discover what patterns emerge has begun. Even though an installation is not considered a narrative form, any interaction is always temporal.

The second has to do with agency. It’s very tempting as an artist to create a work having figured out exactly what experience you want your audience to have, and to think that it’s going to be an interactive experience even though you’ve already decided it. Then you spend all your time locking down everything that could happen in the space to make sure the experience you envisaged happens. I think if you do this you may as well have made a non-interactive artwork, as I believe the power of interactivity in art lies in the receiver having agency over what unfolds.

Therefore, I think the question of agency in music is fundamental. When we are in the audience watching music, a lot of what we get out of it is witnessing someone express themselves skillfully. Take virtuosity: that comes down to witnessing someone have agency in a space and really do something with it.

How exactly do you think about agency in relation to installations?

In an interactive installation, it’s important to consider the agency of the person coming in. You want to ask, how much freedom are we going to give this person? How broad is the span of possible outcomes? If we’re doing something with rhythm and step sequencing are we going to quantise those rhythms so everything sounds like a techno track? Or are we going to rely on the person’s own sense of rhythm and allow them to decide whether to make it sound like a techno track or not?

It all comes down to the question of what is the point of it being interactive. While it is important to have some things be controllable, a lot of the pleasure and fun of interactive stuff is allowing for the unexpected, and therefore I find the best approach when building an installation is to get it in front of unknown people as soon as possible. Being open to the unexpected does not mean you cannot fail. An important reason for getting a work in front of fresh people is to understand how far they are getting into the work. If they don’t understand how to affect and influence the work then they don’t have any agency, and there won’t be any sense of emergence.

Can you describe music in your childhood? You say you sang in choirs from the age of six to twelve. What was your experience of that?

At the time it burnt me out a little but I’m very thankful for it today. It was very much tied to an institution. It was very institutional music and it was obligatory. I was singing in two to three masses a week and learning piano and percussion. I stopped when I was about 13. I had a few changes in life, we moved country for a little bit and I went to a totally different kind of school and environment. It wasn’t until a few years later that I picked up the piano again, and only really in the last couple of years have I reconnected with my voice.

Your PhD seemed to be a turning point for you and a point of re-entry into music. Can you describe your PhD, and how that influenced your life?

I began doing a PhD looking at generative music, and as I was trying to figure out what the PhD would be I had an opportunity to do a sound installation in these underground vaults in London Bridge Station with a random bunch of people in my research group. They were doing an installation there and someone had some proximity sensors I could use. There was an artist who had some projections which were going up and I made a generative soundscape for it. Being in the space and seeing the impact of that work in a spatial context really shifted my focus. I felt quite strongly that I wanted to make installations rather than just music, and I reoriented my PhD to figure out how to make it about that. I was also confronted with the gulf of expectation and reality in interactive art. I thought the interactivity was too obvious if anything, but then as I sat and watched people enter the space, most did not even realise the piece was interactive.

How do these questions sit with you today?

From an academic perspective, it was a really terrible idea, because a PhD is supposed to be quite focused, and I was questioning how you can make interactive music more captivating. I had this sense in my head of what an interactive music experience could be, and it was as immersive, durational and gripping as a musical experience. Nearly every interactive sound work I was finding ended up being quite a brief experience – you kind of just work out all the things you can do and then you’re done.

I saw this pattern in my own work too. My experience in making interactive sound works was much more limited back then, but I saw a common pattern of taking processes from recorded music and making them interactive. My approach was to ask ‘Well, what is music really? Why do we like it?’ and all kinds of answers come up about emerging structures, belonging, and self-expression, so then the question was how we can create interactive works that embody those qualities within the interactivity itself.

What it left me with was not such a clear pathway into academia, because I hadn’t arrived at some clear and completed research finding, but what I had done was immerse myself so fundamentally in trying to answer this question: how can I make captivating interactive music experiences?

What did you find?

On the question of interaction with technology, I think the most fundamental quality of technology is interaction, human-computer interaction. How is it affecting us? How are we affecting it? How does that ongoing relationship develop?

There is so much within those questions, and yet interactivity is often just tacked on to an existing artwork or introduced in a conventional way because that is how things are done. In fact, the way you do interactivity says a lot about who you are and how you see the world. How you design interaction is similar to how you make music, there are many ways, and each has a political interpretation that can be valuable in different contexts.

Who has influenced you in this respect?

The biggest influence on me at the point where I’d finished my PhD and commenced Cave of Sounds was the book Musicking by Christopher Small.

The shift in mindset goes from thinking that music is something being done by musicians on a stage and being received by everyone else around them, to being a collective act that everybody’s participating in together, and that if there weren’t an audience there to receive it the musician couldn’t be participating in the same music.

What I found informative is to take a relativist view on different musical cultures. Whether it is a rock concert, classical concert, folk session, or jazz jam, you can think of them as being different forms of this same thing, just with different parameters of where the agency is.

For instance, if you’re jamming with friends in a circle around a table there is space for improvisation and for everybody to create sound. This has an egalitarian nature to it. Whereas with an orchestra there is little scope for the musicians to choose what notes they play, but a huge scope for them to demonstrate technical virtuosity and skill, and I don’t think there’s anything wrong with that. I love orchestral music. I think there is beauty to the coordination and power. I can see how it could be abused politically, but it’s still a thing that I feel in my body when I experience it, and I want to be able to access that feeling.

What I’m most suspicious about are stadium-level concerts. The idolisation of one individual on a stage with everyone in the crowd going emotionally out of control. It is kind of this demagogue/mob relationship. People talk about these Trump rallies as if they’re like rock concerts, and it’s that kind of relationship that is abused politically.

Cave of Sounds was created by Tim Murray-Browne, Dom Aversano, Sus Garcia, Wallace Hobbes, Daniel Lopez, Tadeo Sendon, Panagiotis Tigas, and Kacper Ziemianin with support from Music Hackspace, Sound and Music, Esmée Fairbairne Foundation, Arts Council England and British Council.

You can read more of this interview in Part 2, which will follow shortly, where we discuss the future of music as well as practical advice for building installations. To find out more about Tim Murray-Browne you can visit his website or follow him on Substack, Instagram, Mastodon, or X.

Creating soundtracks to transform the taste of wine

Dom Aversano

When I was asked to interview Soundpear I questioned whether I was the right person for the job. The company specialises in composing music to enhance the flavour of wine at their tasting events in Greece, stating that they ‘meticulously design bespoke music to match the sensory profile of a paired product.’ I, on the other hand, am almost proudly philistine about wine, only drinking it at events and parties when it is put into my hand. I find the rituals and mystification of this ancient grape juice generally more off-putting than alluring, especially given that studies show something as simple as swapping a cheap-looking label on a bottle for an expensive one, or putting red dye into white wine, is sufficient to change the opinions of even seasoned wine drinkers and sommeliers.

Yet, perhaps who better to do the interview than someone whose preferred notes are not the subtle hints of caramel, oak, or cherry, but the opening riff of John Coltrane’s Giant Steps.

Despite my scepticism, I was interested in talking to the company as the connection between music and taste is one that is rarely explored.

The three of us met on Zoom, each calling from a different European country. Asteris Zacharakis lives in Greece and is a researcher at the School of Music Studies at the Aristotle University of Thessaloniki, as well as an amateur winemaker, whereas Vasilis Paras is a music producer and multi-instrumentalist living outside of London. While the pair originally met playing in a band twenty years ago, their collaboration now involves Asteris hosting wine-tasting events in Greece, while Vasilis composes bespoke music for each variety of wine sampled.

Our conversation turns quickly to the science supporting the idea that the taste of wine can be enhanced by sound. Asteris has a passion for multimodal perception — the study of how our senses process in combination rather than in isolation. A famous example is the McGurk effect, which shows that when a person sees a video of someone uttering a syllable (e.g., ga ga) but hears an overdub of a different-sounding syllable (e.g., ba ba), this sensory incongruence results in the perception of a third, non-existent syllable (da da).

‘There is evidence that if you sit around a round table with no corners, it’s easier to come into agreement with your colleagues than if there are angles.’

Regarding how this could allow us to experience things differently, Asteris describes: ‘It’s been shown through research that by manipulating inputs from various senses we can obtain more complex and interesting experiences. This does not just work in the laboratory, it’s how our brains work.’

Soundpear treats the drinking of wine and listening to music as a unified experience, similar to how films unify moving images and music. I am curious how the science translates directly into Soundpear’s work, since musicians and winemakers must have worked in this way for centuries — if only guided by intuition. Surely a person drinking wine in a beautiful hilltop village in the South of France while listening to a musician playing the violin is having a multimodal experience? Asteris is quick to clarify that, far from being exclusive, multimodal perception occurs all the time, and is not dependent on some specialist scientific understanding.

‘Musicians become famous because they do something cognitively meaningful and potentially novel, but I doubt that in all but a few cases they’re informed by the science, and they don’t need to be. Take a painter and their art. If a neuroscientist goes and analyses what the painter is doing, they could come up with some rules of visual perception they believe the artist is taking advantage of. However, successful artists have an inherent understanding of the rules without having the scientific insight of a neuroscientist.’

Multimodal perception offers insights into how sound affects taste. For example, high notes can enhance the taste of sourness, while low notes enhance our sense of bitterness. Vasilis recounts how initially the duo had experimented with more complex recorded music but decided to strip things down and use simple electronic sounds.

‘We thought, why don’t we take this to the absolute basic level, like subtractive synthesis?

‘Let’s start with sine waves, and tweak them to see how people respond. What do they associate with sweetness? What do they associate with sourness, and how do these translate in the raw tone? Then people can generally agree certain sounds are sour. From that, we try to combine these techniques to create more complicated timbres that represent more complicated aromas, until we work our way up to a bottle of wine.’

Asteris joins in on this theme: ‘For example, the literature suggests that we tend to associate a sweet taste or aroma with consonant timbres, whereas saltiness corresponds to more staccato music, and bitterness is associated with lower frequencies and rough textures. Based on this, we knew if we wanted to make the sonic representation of a cherry aroma it needed to be both sweet and sour. So we decided we should combine a dissonant component to add some sourness and at the same time a consonant component to account for the sweetness.’
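To make the idea concrete, here is a toy sketch (not Soundpear’s actual method or code) of that consonant-versus-dissonant dyad idea: it renders two pairs of sine waves to WAV files, one tuned to a stable perfect fifth and one to a rough minor second. The interval choices and file names are illustrative assumptions.

```cpp
// Toy illustration only: render a "sweet" consonant dyad and a "sour"
// dissonant dyad as mono 16-bit WAV files. Interval choices are assumptions.
#include <cmath>
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

constexpr double kPi = 3.14159265358979323846;

// Write a mono 16-bit PCM WAV file (assumes a little-endian machine).
void writeWav(const std::string& path, const std::vector<int16_t>& smp, uint32_t rate)
{
    std::ofstream f(path, std::ios::binary);
    auto w32 = [&f](uint32_t v) { f.write(reinterpret_cast<char*>(&v), 4); };
    auto w16 = [&f](uint16_t v) { f.write(reinterpret_cast<char*>(&v), 2); };
    const uint32_t dataBytes = static_cast<uint32_t>(smp.size() * 2);
    f.write("RIFF", 4); w32(36 + dataBytes); f.write("WAVE", 4);
    f.write("fmt ", 4); w32(16); w16(1); w16(1);  // PCM, one channel
    w32(rate); w32(rate * 2); w16(2); w16(16);    // byte rate, align, bit depth
    f.write("data", 4); w32(dataBytes);
    f.write(reinterpret_cast<const char*>(smp.data()), dataBytes);
}

// Two simultaneous sine tones: a root plus a second tone at `ratio` times it.
std::vector<int16_t> dyad(double rootHz, double ratio, double secs, uint32_t rate)
{
    std::vector<int16_t> out(static_cast<size_t>(secs * rate));
    for (size_t i = 0; i < out.size(); ++i) {
        const double t = static_cast<double>(i) / rate;
        const double s = 0.4 * std::sin(2 * kPi * rootHz * t)
                       + 0.4 * std::sin(2 * kPi * rootHz * ratio * t);
        out[i] = static_cast<int16_t>(s * 32767);
    }
    return out;
}

int main()
{
    const uint32_t rate = 44100;
    writeWav("sweet.wav", dyad(440.0, 3.0 / 2.0, 2.0, rate), rate);   // perfect fifth
    writeWav("sour.wav",  dyad(440.0, 16.0 / 15.0, 2.0, rate), rate); // minor second
}
```

The minor second beats audibly while the fifth sits still; whether listeners actually hear those as ‘sour’ and ‘sweet’ is precisely the kind of hypothesis Soundpear puts to participants.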

They tested these sounds on each other but also experimented with participants. Asteris describes their process: ‘From our library of sounds we pick some and perform experiments in an academic lab environment, to either confirm or disprove our hypotheses. Our sound-aroma correspondence assumptions are proven right in some cases, but in other cases, where participants don’t agree with our assumed association, we discard it and say

“Okay, we thought that sound would be a good representative for this scent but apparently it’s not.”’

I ask if anyone can try out pairing their music with wine. Vasilis is hesitant about this, pointing out that while they have a publicly available playlist on YouTube, using it as intended would require listeners to seek out specific bottles of wine. When I ask if these could be interchangeable with other bottles he draws a comparison with film music, stating that while you could theoretically change one film score for another, it likely would clash.

At this point, I feel my initial resistance giving way. Suddenly the thought of basking in the Greek sun listening to music and drinking wine feels much more appealing — maybe being a wine philistine is overrated. What I find refreshing about the duo is they are not overplaying the science, but appear to actually be having fun combining their talents to explore a new field between taste and music. It is not the cynical banalisation of music that Spotify often promotes, using playlists with names like ‘Music for your morning coffee’. Rather than treating the experience as an afterthought Soundpear is designing its music specifically for it.

However, one question still lingers — I ask how much they believe their work can carry across cultures. Asteris accepts that neither the effect of the music nor the taste of the wine can be considered a universal experience, and that their appeal is largely to an audience drawn from cultures considered Western. It is an honest answer, and not surprising, given that music and drink rarely appeal to genuinely global audiences anyway, especially since alcohol is illegal or taboo throughout much of the world.

So, what of the music?

Vasilis composes with a certain mellifluous euphoria reminiscent at times of Boards of Canada and the film composer Michael Giacchino’s soundtrack for Inside Out, though with a more minimalist timbral palette than either. The tone and mood seem appropriate for accompanying a feeling of tipsiness and disinhibition. I even detect a subtle narrative structure that I assume accompanies the opening of the bottle, the initial taste, and the aftertaste. It is not hard to imagine the music working in the context of a tasting session, and people enjoying themselves.

Soundpear appears to be attempting to broaden how we combine our senses with the goal of opening people up to new experiences, which regardless of whether you are interested in wine or not is undoubtedly interesting. It is an invitation to multidisciplinary collaboration since the principles applied to wine could just as easily be applied to coffee, architecture, or natural landscapes. The attention they bring to multimodal perception makes one question whether music could be used in new ways, and that can only be a good thing.

Music Hackspace will host a workshop with Soundpear on Friday 22nd September at 6pm UK:

The sound of wine: transform your wine-tasting experiences through music-wine pairing

Soundpear are planning a music-wine pairing event at the Winemakers of Northern Greece Association headquarters in Thessaloniki this October – so stay tuned for more details!

Abstract Performance in Ableton and Max For Live – On demand

Level: Intermediate

Ableton and Cycling ’74’s Max For Live offer a vast playground of opportunities to create unique and rich electronic music performances. In this workshop you will create a performance instrument, gaining the skills you need to begin exploring improvised performance in Ableton Live and Max For Live.

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Discuss various forms of performance approaches in Ableton and Max For Live, plus their advantages and weaknesses.

  • Explore one approach that gives the most flexibility and ease for performing.

  • Gather assets in Ableton and Max For Live to use in the performance, and fit them into our chosen approach.

  • Develop the approach into a complex performance-orientated instrument based in the Ableton and Max For Live environments.

Session Study Topics

  • Deploy Ableton and Max For Live devices to create a musical performance.
  • Load and organize sounds into Ableton’s Drum Rack.
  • Enhance the performability of our instrument using MIDI processes.
  • Develop the approach using Ableton and Max For Live effects.

Requirements

  • A computer and internet connection

  • A good working knowledge of computer systems

  • A basic awareness of music theory and audio processing

  • Good familiarity with Ableton and Max For Live

  • Access to a copy of Ableton Live 10 Suite, or Ableton Live 10 with a Max For Live license.

  • A midi controller is desirable.

About the workshop leader

Ned Rush aka Duncan Wilson is a musician, producer and performer. He is probably best known for his YouTube channel, which features a rich and vast quantity of videos including tutorials, software development, visual art, sound design, internet comedy, and of course music.

Max meetup – US Edition 1

FREE

Date:  Saturday 23rd January – 3pm PST / 6pm EST

Level: Open to all levels 

Presenters

The meetup will be hosted by Chloe Alexandra Thompson and feature the following presentations:

Francisco Botello – Wireless IMU Panning Controller
Tommy Martinez – Spatial Composition in the Frequency Domain
mutant forest – Controlling lights and sound in a performance with max4live
sydney christensen – Idea workshopping: analog synth controlled by MAX information
Chloe Thompson – Intro to ICST Ambisonics Package

Overview

Join the Max meetup to share ideas and learn with other artists, coders and performers. Showcase your patches, pair with others to learn together, get help for a school assignment, or discover new things.

The meetup runs via Zoom. The main session features short presentations from Max users. Breakout rooms are created on the spot on specific topics, and you can request a new topic at any time.

In the breakout rooms, you can share your screen to show other participants something you’re working on, ask for help, or help someone else.

Ready to present your work?

Everyone is welcome to propose a presentation. Just fill in this short form and you’ll be put on the agenda on a first-come, first-served basis.

Presentations should take no more than 5 minutes, with 5 minutes of Q&A, and we’ll have up to 5 presentations at each meetup.

Suggested topics include, but are not limited to:

  • MIDI
  • Jitter
  • Signal processing
  • Sequencing
  • Hardware
  • OSC
  • Algorithmic composition
  • Package manager modules

Berlin Code of Conduct

We ask all participants to read and follow the Berlin Code of Conduct and contribute to creating a welcoming environment for everyone.

Build an interactive textile instrument

This practice-led course will show you how to make an electronic textile interface for music performance. We will learn a DIY technique to craft with e-textile materials and then explore how to make music with the handcrafted interface in a number of ways. Each session will follow on from the last, developing your knowledge through a series of hands-on projects, delivered in four online workshops. 

Level: beginner with notions of DIY electronics and programming

  • Some familiarity or experience of working with Arduino and/or Max/MSP (or similar platforms) is desirable
  • A tabletop space to work at
  • Computer, with USB port
  • Arduino IDE (Free – download here: https://www.arduino.cc/en/Main/Software)
  • Max 8 (Free 30 day trial available – you will be instructed to download this for the final session)

This workshop is available internationally. Please order your DIY kit before the dispatch date for your location. Kits will be posted using a Royal Mail tracked service.

UK dispatch date: Friday 17th November

Worldwide dispatch date: Friday 3rd November

We will work with the Lilypad Arduino, a microcontroller board designed for use with e-textiles and wearables projects, and Max/MSP, an object-orientated programming language for music making. The workshop series will cover the fundamentals of working with e-textiles and these technologies, giving a basis for participants to continue to develop their creative ideas when working with sound and interactive textiles.

Tues 24th Nov, 6pm UK –  Workshop 1: Crafting an e-textile interface

In this workshop, we will explore an approach to working with electronic textiles and handcraft. It will introduce needle felting as a DIY method of working with e-textiles. We will make an interactive, touch-sensitive textile interface, to be used in a number of ways throughout the four sessions of this course. Through crafting the brightly coloured interface, we will explore a creative approach to interface design and learn how traditional crafts can be combined with e-textile materials to produce novel interfaces for music performance.

Tues 1st Dec, 6pm UK – Workshop 2: Bringing your craft work to life: capacitive sensing and visualising sensor data with the Lilypad Arduino

In this session, we will transform the needle-felted piece from Workshop 1 into an interactive, touch-sensitive interface. We will introduce the Lilypad Arduino and explore capacitive sensing as a method of bringing your textile work to life. You will learn several approaches to visualising interaction data on screen, as well as the fundamentals of working with the Arduino IDE.
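For a feel of what that Arduino code can look like, here is a minimal sketch using the widely used CapacitiveSensor library. The pin numbers, resistor value and wiring are assumptions, and the course may take a different approach.

```cpp
// Minimal sketch (assumed wiring, not the course's exact code): read a
// touch value from a conductive pad with the CapacitiveSensor library
// and stream it to the serial port for on-screen visualisation.
#include <CapacitiveSensor.h>

// Send pin 4 and receive pin 2 are assumptions; the pad connects to the
// receive side, with a high-value (~1 megohm) resistor between the pins.
CapacitiveSensor pad = CapacitiveSensor(4, 2);

void setup() {
  Serial.begin(9600);
}

void loop() {
  long touch = pad.capacitiveSensor(30); // 30 samples per reading
  Serial.println(touch);                 // watch it in the Serial Plotter
  delay(20);
}
```

Opening Tools > Serial Plotter in the Arduino IDE graphs the value, so you can see it jump when the felt is touched.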

Tues 8th Dec, 6pm UK – Workshop 3: Composing through code: making an e-textile step sequencer with the Lilypad Arduino

This week, we will develop our coding skills and learn an approach to using your e-textile interface with the Lilypad Arduino as a standalone music-making device. We will write, edit and compose through code to create a playful step sequencer that makes music as you touch the textile interface.
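As a rough idea of the destination (pins, notes and the touch threshold below are illustrative assumptions, not the course code), a standalone step sequencer can be as small as this:

```cpp
// Bare-bones 8-step sequencer: steps advance on a timer, each step plays
// a tone, and touching the e-textile pad lifts the pitch an octave.
#include <CapacitiveSensor.h>

CapacitiveSensor pad = CapacitiveSensor(4, 2); // assumed wiring, as before
const int speakerPin = 9;                      // assumed speaker connection
int melody[8] = {262, 294, 330, 392, 440, 392, 330, 294}; // frequencies in Hz
int step = 0;

void setup() {
  pinMode(speakerPin, OUTPUT);
}

void loop() {
  long touch = pad.capacitiveSensor(30);
  int pitch = melody[step];
  if (touch > 200) pitch *= 2;  // touch transposes the step up an octave
  tone(speakerPin, pitch, 100); // 100 ms blip for this step
  step = (step + 1) % 8;        // move to the next step, wrapping at 8
  delay(150);                   // sets the step rate
}
```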

Tues 15th Dec, 6pm UK – Workshop 4: Interactive textiles and Max/MSP

Workshop 4 will introduce a method of using your handcrafted interface with Max/MSP. From this workshop, you will know how to program your Lilypad Arduino to allow your e-textile interface to control parameters in a Max patch. We will make a software-based sampler, where pre-recorded sound files are triggered by touching the interactive textile interface. Some familiarity and a basic working knowledge of Max/MSP is desirable, but not essential. Participants with experience in Max are welcome to bring their own patches to experiment with.
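One simple way to bridge the two (an assumption for illustration; the workshop may use its own protocol and patch) is to print one scaled value per line from the Arduino and read it in Max via the [serial] object:

```cpp
// Stream scaled touch readings to Max/MSP over USB serial.
#include <CapacitiveSensor.h>

CapacitiveSensor pad = CapacitiveSensor(4, 2); // assumed wiring

void setup() {
  Serial.begin(9600); // match this baud rate on Max's [serial] object
}

void loop() {
  long touch = pad.capacitiveSensor(30);
  // Scale the raw reading into 0-127 so it maps neatly onto a parameter.
  int scaled = constrain(map(touch, 0, 1000, 0, 127), 0, 127);
  Serial.println(scaled); // one value per line keeps parsing simple in Max
  delay(20);
}
```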

A DIY kit, with all of the craft tools and materials you will need, is included in the workshop price and will be posted to your home in advance of the course.  

There are two kits available; please select the kit that you will require:

Kit 1 is a full kit and includes a Lilypad Arduino and all of the craft tools and materials you will need for the course. 

Kit 2 includes all of the craft tools and materials you will need to make the e-textile interface, but does not include the Lilypad Arduino and USB cable. 

(Kit 2 is best suited if you already have a Lilypad Arduino or would prefer to use an alternative board. Please note that this course focuses on working with the Lilypad, so support for alternative boards will be limited and is only recommended for more experienced participants.)

Kit 1 contents:

  • Lilypad Arduino
  • USB cable
  • 10 x crocodile clips
  • Speaker
  • Wool 
  • Steel wool
  • 3 x Needle felting tools 
  • Embroidery hoop
  • Fabric
  • Copper tape

Kit 2 contents:

  • 10 x crocodile clips
  • Speaker
  • Wool 
  • Steel wool
  • 3 x Needle felting tools 
  • Embroidery hoop
  • Fabric
  • Copper tape

Build a web assembly synthesiser with iPlug 2

Learn to use the iPlug2 C++ audio plug-in framework to create a synthesiser that runs on the web.

iPlug2 is a new C++ framework that allows you to build cross-platform audio plug-ins using minimal code. One of the exciting features of iPlug2 is that it lets you turn your plug-in into a web page that anyone can use without a DAW (see for example https://virtualcz.io). In this workshop participants will learn how to build a web-based synthesiser using cloud-based tools, and publish it to a GitHub Pages website. We will look at some basic DSP in order to customise the sound of the synthesiser, and we will also customise the user interface. The same project builds native audio plug-ins, although in the workshop we will focus on the web version.

Note from Oli: Even though the workshop might use lots of unfamiliar technologies, iPlug2 is designed to be simple to use and has many of the more confusing aspects of cross-platform programming solved for you already. Don’t worry if the technology sounds scary; everyone should be able to build a custom synthesiser using the example projects and workflow.
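To give a flavour of the DSP involved, here is a stripped-down, self-contained sketch of the block-based rendering loop at the heart of such a synthesiser. It is not the workshop’s project code: in a real iPlug2 plug-in this logic would live in the plug-in class’s ProcessBlock() override and the sample rate would come from the framework, whereas here everything is simplified to compile on its own.

```cpp
// A self-contained sketch of a synth's block-based rendering loop, in the
// spirit of an iPlug2 ProcessBlock() override (simplified to run standalone).
#include <cmath>
#include <cstdio>
#include <vector>

class SineVoice {
public:
    void SetFreq(double hz) { mFreq = hz; }

    // Fill non-interleaved stereo buffers one block at a time, the way a
    // host asks a plug-in to render audio.
    void ProcessBlock(double** outputs, int nFrames, double sampleRate) {
        const double phaseInc = 2.0 * 3.14159265358979323846 * mFreq / sampleRate;
        for (int s = 0; s < nFrames; ++s) {
            const double smp = 0.25 * std::sin(mPhase); // quiet sine voice
            mPhase += phaseInc;
            outputs[0][s] = smp; // left channel
            outputs[1][s] = smp; // right channel
        }
    }

private:
    double mPhase = 0.0;
    double mFreq = 440.0;
};

int main() {
    SineVoice voice;
    voice.SetFreq(220.0);
    std::vector<double> left(512), right(512);
    double* outs[2] = {left.data(), right.data()};
    voice.ProcessBlock(outs, 512, 44100.0); // render one 512-sample block
    std::printf("first samples: %f %f\n", left[0], left[1]);
}
```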

Requirements

Useful links


About the workshop leader

Oli Larkin is an audio software developer and music technologist with over 15 years of experience developing plug-ins and plug-in frameworks. He has released his own software products and has collaborated with companies such as Roli, Arturia, Focusrite and Ableton. For many years he worked in academia, supporting audio research and sound art projects with his programming skills. Nowadays Oli is working as a freelancer, as well as focusing on his open source projects such as iPlug2.

Learn to program amazing interactive particle systems with Jitter

In this workshop, you will learn to build incredible live videos with particle systems, using Max and Jitter.

Cycling ’74 has recently released GL3, which ties Jitter more closely to OpenGL and optimises use of the GPU. With this recent update, available in the package manager, you can build high-performance videos without having to code them in C++.

Requirements

  • Latest version of Max 8 installed on Mac or Windows
  • A good working knowledge of Max is expected
  • Understanding of how the GEN environment works in Jitter
  • Some familiarity with textual programming languages
  • A knowledge of basic calculus is a bonus
  • The GL3 package installed
  • To install this package, open the “Package Manager” from within Max, look for the GL3 package and click “install”.

What you will learn

Session 1, 20th October, 6pm UK / 10am PDT / 1pm EST:

– Introduction to GL3 features

– Quick overview of most of the examples in the GL3 package

– Build a simple particle system from scratch (see the sketch after this list)

– Explorations with gravity/wind

– Exploration with target attraction
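As a taste of what “from scratch” means, the core of any particle system is a per-frame update loop like the one below. This CPU sketch in C++ mirrors the idea that the GL3 patches run on the GPU; the constants are arbitrary assumptions.

```cpp
// Minimal particle update: integrate velocity each frame, with gravity
// pulling down and wind pushing sideways.
#include <cstdio>
#include <vector>

struct Particle { float x, y, vx, vy; };

int main() {
    std::vector<Particle> ps(1000, Particle{0.f, 0.f, 0.f, 0.f});
    const float gravity = -9.8f, windX = 1.5f, dt = 1.f / 60.f;

    for (int frame = 0; frame < 300; ++frame) {   // five seconds at 60 fps
        for (auto& p : ps) {
            p.vx += windX * dt;   // wind accelerates particles horizontally
            p.vy += gravity * dt; // gravity accelerates them downward
            p.x += p.vx * dt;     // integrate velocity into position
            p.y += p.vy * dt;
        }
    }
    std::printf("a particle ends at (%.2f, %.2f)\n", ps[0].x, ps[0].y);
}
```

Target attraction works the same way: instead of constant wind, each frame you add a velocity nudge pointing from the particle towards the target.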

Session 2, 27th October, 6pm UK / 10am PDT / 1pm EST:

– Improve the particle system with a billboard rendering shader

– Creation of a “snow” or “falling leaves” style effect

– Starting to introduce interactivity in the system

– Using the camera input

– Connecting sound to your patches

Session 3, 3rd November, 6pm UK / 10am PDT / 1pm EST:

– Improve the system interactivity

– Particles emitting from object/person outline taken from camera

– Create a particle system using 3D models and the instancing technique

– Transforming an image or a video stream into particles

Session 4, 10th November, 6pm UK / 10am PDT / 1pm EST:

– Introduction to flocking behaviours and how to achieve them in GL3

– Create a 3D generative landscape and modify it using the techniques from previous sessions

– Apply post-processing effects


About the workshop leader:

Federico Foderaro is an audiovisual composer, teacher and designer of interactive multimedia installations, and author of the YouTube channel Amazing Max Stuff.
He graduated cum laude in Electroacoustic Musical Composition from the Licinio Refice Conservatory in Frosinone, and has lived and worked in Berlin since 2016.

His main interest is the creation of audiovisual works and fragments, where the technical research is deeply linked with the artistic output.
The main tool used in his production is the software Max/MSP from Cycling74, which allows for real-time programming and execution of both audio and video, and represents a perfect mix between problem-solving and artistic expression.

Besides his artistic work, Federico teaches the software Max/MSP, both online and in workshops at different venues. The creation of commercial audiovisual interactive installations is also a big part of his working life, and has over the years led to satisfying collaborations and professional achievements.

Artist workshop with Ned Rush: Live Sample Mangling in Max 8 – On demand

Max is Ned’s go-to environment for realising concepts for sound design and performance that are not available in other programs.

In this 2-hour workshop you will learn ways to sample and loop incoming audio from the outside world. You will create a fresh sonic palette by mutating the sound, using a variety of techniques aimed at performance and improvisation, whilst also discussing and solving problems related to improvisation set-ups and how to meet those needs.

You will explore a variety of ways to interact with sampled sound to find which method suits you best so you can realise your vision with a unique performance sampler.

Requirements

– Max 8

– Basic knowledge of Max

About the workshop leader

Ned Rush aka Duncan Wilson is a musician, producer and performer. He is probably best known for his YouTube channel, which features a rich and vast quantity of videos including tutorials, software development, visual art, sound design, internet comedy, and of course music.

Arcologies: a workshop for Monome norns & grid / On-demand

For Monome norns and grid, arcologies is a 21st-century instrument for musical composition and discovery. Built by Tyler Etters as a “2020 pandemic sanity project” and released in September, it has already attracted a passionate following.

Through a series of “breakout-room” team challenges you will learn how to build and sculpt evolving sound compositions with arcologies.

We’ll cover signal flow, melodies, chords, and evolving systems with modulation, Euclidean rhythms, and Turing machines.
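Euclidean rhythms simply spread k pulses as evenly as possible across n steps. One common shortcut for generating them (a sketch for illustration, not arcologies’ own code) looks like this:

```cpp
// Print the Euclidean rhythm E(k, n): k pulses spread evenly over n steps.
#include <cstdio>

// A pulse falls on step i whenever i*k mod n is less than k, which spaces
// the k pulses as evenly as whole-number steps allow.
bool euclid(int i, int k, int n) { return (i * k) % n < k; }

int main() {
    const int k = 3, n = 8; // E(3, 8) is the Cuban tresillo
    for (int i = 0; i < n; ++i)
        std::printf("%c", euclid(i, k, n) ? 'x' : '.');
    std::printf("\n"); // prints: x..x..x.
}
```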

Topics

  • Electronic music composition techniques
  • Generative music
  • monome norns
  • monome grid

Requirements

About the workshop leader

Tyler Etters is a polymath-artist currently residing in Los Angeles. His uniquely 21st century practice encompasses a range of mediums including music, film, analog photography, and software design. He is Vice President at Highland and received his BFA in Graphic Design from Columbia College Chicago.

Links

https://tyleretters.github.io/arcologies-docs/

https://nor.the-rn.info
