Fundamentals of sound design with Pigments 3

Date & Time: Thursday 7th October – 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

Length: 2-hour live workshop via Zoom

Level: Beginner to intermediate

Arturia’s Pigments 3 virtual instrument is a highly versatile tool used in many professional studios, and it also offers many opportunities for deep learning and creation at a beginner level.

This workshop is grounded in theory but prioritizes practice, play and investigation. Students will build new presets from scratch and learn how to manipulate existing sounds, working with the Analog, Wavetable, Harmonic, and Sample/Granular sound engines in Pigments 3. We will also work with controlling synthesis settings using ADSR envelopes, macros, EQs and more.
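As a taste of what envelope control means in practice, here is a minimal Python sketch (not Pigments code; the function and parameter names are illustrative) of how a linear ADSR envelope determines level over time:

```python
def adsr(t, attack=0.05, decay=0.1, sustain=0.7, release=0.3, note_off=1.0):
    """Linear ADSR envelope level (0..1) at time t, in seconds.
    Illustrative sketch only -- real synth envelopes are usually exponential."""
    if t < attack:                                   # rise from 0 to 1
        return t / attack
    if t < attack + decay:                           # fall from 1 to sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_off:                                 # hold at sustain while note is held
        return sustain
    if t < note_off + release:                       # fall from sustain to 0 after note-off
        return sustain * (1.0 - (t - note_off) / release)
    return 0.0
```

Mapping this envelope to, say, filter cutoff rather than amplitude is exactly the kind of routing the session explores inside Pigments.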

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Build custom presets for sound design and music
  • Explore Pigments 3 creatively and independently
  • Use Pigments 3’s built-in tutorial and troubleshooting tools

Session Study Topics

  • Intro – in app resources – demos
  • Virtual Synthesis / Digital Signal Processing
  • Granular Synthesis
  • Dynamic uses of macro control
  • Creative uses of EQ

Requirements:

About the workshop leader

Chloe Alexandra Thompson is a Cree, Canadian interdisciplinary artist and touring sound designer whose artistic works and workshops have been featured in galleries and performance spaces domestically and internationally. Using audio programming software, Thompson creates unique sonic environments and interactive performance tools.

She is presently based in Brooklyn, NY, USA.

Livestream: Nestup – A Language for Musical Rhythms

Date & Time: Monday 10th May 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

In this livestreamed interview, we will speak with Sam Tarakajian and Alex Van Gils, who have built Nestup, a fantastic live-coding environment that runs inside an Ableton Live device.

The programs we use to make music have a lot of implicit decisions baked into them, especially in their graphical interfaces. Nestup began as a thought experiment, trying to see if embedding a text editor inside Live could open up new creative possibilities. We think the answer is that yes, text can work well alongside a piano roll and a traditional musical score, as a concise and expressive way to define complex rhythms.

With Nestup, you can define rhythmic units of any size, any sort of rhythmic subdivision, and any scaling factor. These language features open your rhythm programming up to musical ideas such as metric modulation, nested tuplets, complex polyrhythms, and more. Rhythms that would be prohibitively difficult to program in a DAW, such as those found in Armenian folk musics or “new complexity” compositions, can therefore be rendered in MIDI.
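Nestup’s actual syntax is not shown here, but the arithmetic behind nested tuplets is easy to sketch. This Python fragment (illustrative only) subdivides a quarter note into a triplet, then splits the triplet’s middle note into a quintuplet, and checks that the nested durations still sum to the original unit:

```python
from fractions import Fraction

def subdivide(duration, parts):
    """Split a duration into `parts` equal pieces -- one level of tupleting."""
    return [duration / parts] * parts

# A quarter note split into a triplet, whose middle note is itself
# split into a quintuplet: a "nested tuplet".
quarter = Fraction(1, 4)
triplet = subdivide(quarter, 3)                      # three 1/12-note durations
nested = triplet[:1] + subdivide(triplet[1], 5) + triplet[2:]

print(nested)            # seven exact durations (1/12 and 1/60 notes)
assert sum(nested) == quarter
```

Using exact fractions rather than floats is what keeps arbitrarily deep nesting from drifting out of time, which is the same property a rhythm language needs.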

Overview of speakers

Sam is a Brooklyn-based developer and creative coder who works for Cycling ‘74 and develops independent projects at Cutelab NYC. Alex is a composer, performer, and generative video artist based in Brooklyn.

Sam and Alex have been making art with music and code together for over 10 years, beginning with a composition for double bass and Nintendo Wiimote as undergraduates, and continuing through electroacoustic compositions, live AR performance art, installation art, Max for Live devices, and now Nestup, the domain-specific language for musical rhythms.

Where to watch?

YouTube –


Exploring gesture control with Gliss & Glover – On demand

Level: Beginner with some music software experience

Computers these days give us the power to create almost any sound imaginable, which raises a question: how do we physically interact with our computers? In this workshop you will explore what it’s like to control sounds and effects with your movement by learning about Glover – a mapping application for converting movement to MIDI or OSC.
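The core idea of such a mapping can be sketched in a few lines of Python (a hypothetical example, not Glover’s actual implementation): a continuous sensor reading is clamped and rescaled into the 0–127 range of a MIDI controller value:

```python
def to_midi_cc(value, lo=-1.0, hi=1.0):
    """Map a continuous sensor reading in [lo, hi] to a MIDI CC value 0-127.
    Illustrative sketch: readings outside the range are clamped first."""
    clamped = max(lo, min(hi, value))
    return round((clamped - lo) / (hi - lo) * 127)
```

The interesting design work – which the workshop digs into – is choosing which movements feed which parameters, since a mapping that is mathematically trivial can still feel awkward or expressive depending on the gesture behind it.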

Session Study Topics

  • Mapping movement to MIDI in Glover

  • Training postures in Gliss

  • Mapping from Glover to a DAW

  • Considering what “good” movements are to control sound 

Requirements

  • Download the free Gliss app from the Apple App Store (iOS only)

  • Download free trial of Glover via mimugloves.com/glover

  • A DAW or a piece of hardware that can receive MIDI or OSC – Ableton Live is recommended, as Chagall will be best placed to support you with it, but others work too

  • A microphone routed into your computer if you’d like to experiment with manipulating your voice with Gliss

More info: https://mimugloves.com/gliss/

About the workshop leader

Chagall is an Amsterdam-based singer, producer and performer known for her use of the MiMU Gloves to control music & reactive visuals. With performances at South by Southwest, Ableton Loop, TEDx and many more, Chagall is one of the most experienced users of the technology. She is also the UX designer for MiMU’s Glover & Gliss.

Melody Generation in Max – On demand

Level: Intermediate

The importance of melody in traditional musical composition is difficult to overstate. Often one of the first components the ear latches onto, a good melody is something of an art form to write. Producing basic algorithmically generated melodies in Max/MSP is quite easy, but to produce something more ‘musical’ we must refine the generation process.

In this workshop you will learn some ways of generating more complex melodies in Max. This will involve implementing occasional phrase repeats to balance predictability and surprise, locking in some of the more important rhythmic elements and incorporating planned octave jumps alongside more restricted pitch-based travel.
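As a rough illustration of these ideas (an assumed sketch in Python, not the workshop’s actual Max patch), a generator might walk stepwise through a scale, insert occasional octave jumps, and sometimes repeat the previous phrase:

```python
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, as MIDI note numbers

def phrase(length=4):
    """A short phrase built from stepwise motion with occasional octave jumps."""
    idx, notes = random.randrange(len(SCALE)), []
    for _ in range(length):
        idx = max(0, min(len(SCALE) - 1, idx + random.choice([-1, 1])))
        jump = 12 if random.random() < 0.1 else 0    # planned octave jump
        notes.append(SCALE[idx] + jump)
    return notes

def melody(n_phrases=4, repeat_chance=0.3):
    """Occasionally repeat the last phrase, balancing predictability and surprise."""
    phrases = [phrase()]
    for _ in range(n_phrases - 1):
        if random.random() < repeat_chance:
            phrases.append(phrases[-1])              # phrase repeat
        else:
            phrases.append(phrase())
    return [note for p in phrases for note in p]
```

Tuning `repeat_chance` and the jump probability is precisely the kind of refinement the workshop explores, expressed there with Max objects rather than Python.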

By the end of the workshop you will have constructed a melody generation patch that can be set to play along with your compositions, with a greater understanding of some of the ways in which we can sculpt melody in Max.

Topics

    • Max/MSP
    • Algorithmic Composition
    • Melody

Requirements

  • You should be comfortable with the general workflow and data formatting in Max.

  • Knowledge of MIDI format and routing to DAWs (Ableton, Logic etc) would be a plus, although Max instruments will be provided.

  • You should have some basic knowledge of music theory: chords, scales, modes etc.

About the workshop leader 

Samuel Pearce-Davies is a composer, performer, music programmer and Max hacker living in Cornwall, UK.

With a classical music background, it was his introduction to Max/MSP during undergraduate studies at Falmouth University that sparked Sam’s passion for music programming and algorithmic composition.

Going on to complete a Research Masters in computer music, Sam is now studying a PhD at Plymouth University in music-focused AI.

An Introduction to Markov Chains: Machine Learning in Max/MSP

Difficulty level: Beginner

Overview

Markov chains are mathematical models that have existed in various forms since the 19th century and have been used for statistical modelling in many real-world contexts, from economics to cruise control in cars. Composers have also found musical uses for Markov chains, although the mathematical knowledge seemingly needed to implement them often appears daunting.

In this workshop we will demystify the Markov chain and use the popular ml.star library in Max/MSP to implement Markov chains for musical composition. This will involve preparing and playing MIDI files into the system (as a form of machine learning) and capturing the subsequent output as new MIDI files. By the end of the session you will know how to incorporate Markov chains into your future compositions at various levels.
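The underlying idea is simpler than it sounds. A minimal first-order Markov chain over MIDI pitches can be sketched in Python (illustrative only; the workshop itself uses the ml.star objects in Max):

```python
import random
from collections import defaultdict

def train(sequences):
    """Count pitch-to-pitch transitions from example melodies (the 'learning' step)."""
    table = defaultdict(list)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            table[a].append(b)
    return table

def generate(table, start, length=8):
    """Walk the chain: each next note is drawn from the notes that
    followed the current one in the training material."""
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:                  # dead end: no observed follower
            break
        out.append(random.choice(choices))
    return out
```

Storing followers as a list (with duplicates) means common transitions are automatically drawn more often, so the output inherits the statistical flavour of the training melodies.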

Topics

  • Max
  • Markov Chains
  • Machine Learning
  • Algorithmic Composition

Requirements 

  • You should have a basic understanding of the Max workflow and different data types.
  • Knowledge of MIDI format and routing to DAWs (Ableton, Logic etc) would be a plus, although Max instruments will be provided.
  • No prior knowledge of advanced mathematical or machine learning concepts is necessary; the focus will be on musical application.

About the workshop leader

Samuel Pearce-Davies is a composer, performer, music programmer and Max hacker living in Cornwall, UK.

With a classical music background, it was his introduction to Max/MSP during undergraduate studies at Falmouth University that sparked Sam’s passion for music programming and algorithmic composition.

Going on to complete a Research Masters in computer music, Sam is now studying a PhD at Plymouth University in music-focused AI.

Getting Started with Max For Live – On demand

Difficulty level: Beginner

In this series of workshops you will explore the Max For Live (M4L) ecosystem and its devices, empowering you to utilise them in your own music.

Following these workshops you’ll be able to build your own devices in the Max For Live environment!

Ableton Live Suite is a powerful and creative DAW.

Max For Live extends the vast range of creative opportunities that Live offers, allowing you to add third party devices or to create your own unique devices.

Session 1 Learning Outcomes

By the end of this session you will be able to:

  • Navigate the M4L landscape

  • Explore pre-built M4L devices that come with Live Suite

  • Locate and utilise M4L tutorials that come with Live Suite

  • Identify third party M4L content

Session 2 Learning Outcomes

By the end of this session you will be able to:

  • Create objects and route patch cables

  • Construct user interfaces in M4L

  • Build MIDI step sequencers in M4L

  • Explore further possibilities within Max For Live

Requirements

  • A computer and internet connection

  • A good working knowledge of computer systems

  • A basic awareness of Ableton Live

  • Access to a copy of Ableton Live Suite, which includes Max For Live (trial or full license)

About the workshop leader

Phelan Kane is a Berlin & London based music producer, engineer, artist, developer and educator.

For over twenty years he has been active in both the music industry and the contemporary music education sector, with a focus on electronic music and alternative bands. His specialism is sound design and production techniques such as synthesis and sampling, alongside audio processing and plug-in development.

His credits include collaborations with Placebo, Radiohead, Fad Gadget, Depeche Mode, Moby, Snow Patrol, Mute, Sony BMG, Universal, EMI and Warner Bros.

He holds an MA in Audio Technology from the London College of Music, University of West London, an MSc in Sound & Music Computing at the Center for Digital Music at Queen Mary, University of London and in 2008 became one of the world’s first wave of Ableton Certified Trainers.

He is a member of the UK’s Music Producers Guild, holds a PG Cert in Learning & Teaching, is an Affiliate of the Institute for Learning, a Fellow of the Higher Education Academy and until recently was Chairman of the London Committee for the British Section of the Audio Engineering Society.

He currently runs the electronic music record label Meta Junction Recordings and the audio software development company Meta Function, which specializes in Max for Live devices, releasing the M4L synth Wave Junction in partnership with Sonicstate.

Algorithmic Composition in Max: Bringing Order to Chaos

Learn to construct music-generating algorithms in Max, to compose semi-autonomously or supplement your compositional practice.

Level: Intermediate 

Composing with randomness

For centuries, musicians have incorporated chance-based elements into their compositions, first through coin flips and dice rolls and more recently through computer software. Today, building music-oriented algorithmic systems is easier than ever with Max.

What you will learn

In this workshop you will learn a variety of algorithmic processes and useful tools to construct your own systems: including drunken walks, list manipulation and step-sequencer pattern generation. Primarily focusing on MIDI-controlled instruments, you will gain an understanding of how chance can be factored into numerous aspects of composition, from melody and harmony to overall piece structure and instrumentation.
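Two of the processes named above can be sketched in a few lines of Python (illustrative, not the workshop’s Max patches): a drunken walk over MIDI pitches, and a density-controlled step-sequencer pattern:

```python
import random

def drunken_walk(start=60, steps=8, max_step=2, rng=None):
    """Random walk over MIDI pitches: each note moves a small step up or down."""
    rng = rng or random.Random()
    notes = [start]
    for _ in range(steps - 1):
        notes.append(notes[-1] + rng.randint(-max_step, max_step))
    return notes

def step_pattern(steps=16, density=0.4, seed=None):
    """On/off step-sequencer pattern; `density` sets how busy the pattern is."""
    rng = random.Random(seed)
    return [1 if rng.random() < density else 0 for _ in range(steps)]
```

In Max the same ideas map naturally onto objects like `drunk` and `random`; constraining the raw output (to a scale, a register, a metre) is where "order" is brought to the "chaos".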

By the end of the workshop you will have built a system for algorithmically generating a short multi-instrumental composition which you will be able to go on to improve and expand upon to fit your own preferences.

Requirements

  • You should be comfortable with the general workflow and data formatting in Max.
  • Knowledge of MIDI format and routing to DAWs (Ableton, Logic etc) would be a plus, although Max instruments will be provided.
  • You should have some basic knowledge of music theory: chords, scales, modes etc.

About the workshop leader

Samuel Pearce-Davies is a composer, performer, music programmer and Max hacker living in Cornwall, UK.

With a classical music background, it was his introduction to Max during undergraduate studies at Falmouth University that sparked Sam’s passion for music programming and algorithmic composition.

Going on to complete a Research Masters in computer music, Sam is now studying a PhD at Plymouth University in music-focused AI.

Website

YouTube channel

Build a WebAssembly synthesiser with iPlug2

Learn to use iPlug2 C++ audio plugin framework to create a synthesiser that runs on the web.

iPlug2 is a new C++ framework that allows you to build cross-platform audio plug-ins using minimal code. One of the exciting features of iPlug2 is that it lets you turn your plug-in into a web page that anyone can use without a DAW (see for example https://virtualcz.io). In this workshop participants will learn how to build a web-based synthesiser using cloud-based tools, and publish it to a GitHub Pages website. We will look at some basic DSP in order to customise the sound of the synthesiser, and we will also customise the user interface. The same project builds native audio plug-ins, although in the workshop we will focus on the web version.
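iPlug2 itself is C++, but the kind of basic DSP the workshop touches on can be sketched in Python for illustration: a naive sawtooth oscillator that ramps and wraps a phase accumulator once per sample, much like a plug-in’s per-sample process loop (names and structure here are assumptions, not iPlug2 API):

```python
def render_saw(freq=220.0, sr=44100, n=64):
    """Naive (aliasing) sawtooth: advance a 0..1 phase each sample and wrap.
    Mirrors the per-sample loop a plug-in's process block would run."""
    phase, inc, out = 0.0, freq / sr, []
    for _ in range(n):
        out.append(2.0 * phase - 1.0)    # map phase 0..1 to output -1..1
        phase += inc
        if phase >= 1.0:                 # wrap at the end of each cycle
            phase -= 1.0
    return out
```

Swapping this waveform shape, or band-limiting it, is the sort of customisation the workshop means by changing the sound of the synthesiser.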

Note from Oli: Even though the workshop might use lots of unfamiliar technologies, iPlug2 is designed to be simple to use and already solves many of the more confusing aspects of cross-platform programming for you. Don’t worry if the technology sounds scary – everyone should be able to build a custom synthesiser using the example projects and workflow.

Requirements

Useful links


About the workshop leader

Oli Larkin is an audio software developer and music technologist with over 15 years of experience developing plug-ins and plug-in frameworks. He has released his own software products and has collaborated with companies such as Roli, Arturia, Focusrite and Ableton. For many years he worked in academia, supporting audio research and sound art projects with his programming skills. Nowadays Oli is working as a freelancer, as well as focusing on his open source projects such as iPlug2.

Getting started with MSP

In this series of 23 videos you will explore fundamental sound generation and synthesis techniques and concepts when working with Max, empowering you to begin to build your own synthesis patches and devices that you can deploy in your own music and multimedia projects.

Through a series of guided exercises you will engage in the pragmatic creation of a basic synthesis device that features a wealth of sound manipulation options. This series of workshops aims to provide intermediate Max users with suitable skills to deploy audio DSP and synthesis skills within the Max environment.

Requirements

  • A computer and internet connection
  • A good working knowledge of computer systems
  • Intermediate skills working with Max (i.e. ability to construct basic patches, familiarity with Max workflows, understanding of signal flow, use of messages and lists, creation of objects and adaptation of their properties etc).
  • Some familiarity with music creation applications such as a DAW
  • Access to a copy of Max 8 (i.e. trial or full license)

Session 1 Learning Outcomes

By the end of this session a successful student will be able to:

  • Identify key elements of the MSP domain
  • Create MSP objects and route patch cables
  • Compare and contrast possibilities offered by objects within the MSP environment
  • Locate and utilise the Max help & Reference system

Session 2 Learning Outcomes

By the end of this session a successful student will be able to:

  • Construct MIDI signal routing
  • Deploy MSP oscillators & filter objects
  • Build envelope generators for synthesis devices
  • Route and sum signal flow

Session 3 Learning Outcomes

By the end of this session a successful student will be able to:

  • Build multi-function LFOs
  • Configure modulation routing within synthesis devices
  • Utilise bpatchers within patches
  • Successfully apply data management techniques

Session 4 Learning Outcomes

By the end of this session a successful student will be able to:

  • Construct and deploy GUI designs
  • Utilise presets within Max / MSP patches
  • Transform MSP patches into M4L or standalone devices
  • Explore further possibilities within MSP

About the workshop leader

Phelan Kane is a Berlin & London based music producer, engineer, artist, developer and educator. For over twenty years he has been active in both the music industry and the contemporary music education sector, with a focus on electronic music and alternative bands. His specialism is sound design and production techniques such as synthesis and sampling, alongside audio processing and plug-in development. His credits include collaborations with Placebo, Radiohead, Fad Gadget, Depeche Mode, Moby, Snow Patrol, Mute, Sony BMG, Universal, EMI and Warner Bros. He holds an MA in Audio Technology from the London College of Music, University of West London, an MSc in Sound & Music Computing at the Center for Digital Music at Queen Mary, University of London and in 2008 became one of the world’s first wave of Ableton Certified Trainers. He is a member of the UK’s Music Producers Guild, holds a PG Cert in Learning & Teaching, is an Affiliate of the Institute for Learning, and a Fellow of the Higher Education Academy.
