Getting started with Max – September Series

Dates & Times: 

Session 1: Wednesday 15th September at 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

Session 2: Wednesday 22nd September at 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

Session 3: Wednesday 29th September at 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

Session 4: Wednesday 6th October at 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

Level: Beginners curious about programming

Get started with interactive audio and MIDI, and discover the possibilities of the Max environment. In this series of workshops, you will learn how to manipulate audio, MIDI and virtual instruments, and how to program your own interactive canvas.

Connect Max’s building blocks to create unexpected results, and use them in your music productions. Through a series of guided exercises you will build a basic MIDI sequencer device with a wealth of musical manipulation options.

Learn from guided examples and live interactions with teachers and other participants.

This series of online workshops aims to enable you to work with Max confidently on your own.

Sessions overview 

Session 1 – Understand the Max environment

Session 2 – Connect building blocks together and work with data

Session 3 – Master the user interface

Session 4 – Work with your MIDI instruments

Requirements

    • A computer and internet connection
    • A good working knowledge of computer systems
    • Access to a copy of Max 8

About the workshop leader 

Phelan Kane is a Berlin- and London-based music producer, engineer, artist, developer and educator. For over twenty years he has been active in both the music industry and the contemporary music education sector, with a focus on electronic music and alternative bands.

He specialises in sound design and production techniques such as synthesis and sampling, alongside audio processing and plug-in development.

He currently runs the electronic music record label Meta Junction Recordings and the audio software development company Meta Function, which specialises in Max for Live devices and released the M4L synth Wave Junction in partnership with Sonicstate.

Max meetup – August 14th

Date & Time: Saturday 14th August – 4pm UK / 5pm Berlin / 8am LA / 11am NYC

Meetup length: 2 hours

Level: Open to all levels

Meetups are a great way to meet and be inspired by the Max community.

What to expect? 

The meetup runs via Zoom and will be approximately 2 hours in length.

This session will feature presentations from expert practitioners.

Following these presentations, breakout rooms are created where you can: 

  • Talk to the presenters and ask questions

  • Join a room on topics of your choice

  • Show other participants your projects, ask for help, or help others out

  • Meet peers in the chill-out breakout room

The list of presenters will be updated and announced before the meetup. 

Requirements

  • A computer and internet connection
  • A Zoom account

Berlin Code of Conduct

We ask all participants to read and follow the Berlin Code of Conduct and contribute to creating a welcoming environment for everyone.

Supported by Cycling ‘74

Build Max for Live devices using Ableton Live’s API / On-demand

Level: Intermediate

In this series of workshops you will explore concepts and techniques associated with Ableton Live’s API and the Live Object Model (LOM). The LOM provides detailed control of Live via M4L and can underpin unique and novel M4L devices. These workshops aim to expand your knowledge and use of the Live API and the LOM within the M4L development environment, helping you enhance your practice and gain fine-grained control of Live.

Series Learning Outcomes

By the end of this series a successful student will be able to:

  • Identify the LOM structure, LOM paths and LOM Object ids

  • Utilise API Object types, Classes, Children, Properties and Functions

  • Deploy datatypes, debugging, notifications and JavaScript with the Live API

  • Observe and control Live parameters via the API and M4L

Session 1: The Live Object Model Pt.1 

  • Live Objects (hierarchy, properties, functions)

  • Object Paths

  • Root Objects

  • Max Objects (live.path, live.object)

Session 2: The Live Object Model Pt.2 

  • Max Objects (live.remote~, live.observer)

  • Controlling Ableton Live parameters

  • Observing Ableton Live

Session 3: Creating a Max for Live device with the Live API 

  • Work with Control Surfaces

  • Route MIDI / audio

  • Practical examples of API use

Session 4: JavaScript

  • The LiveAPI Object in JS

  • Summary of course
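To give a flavour of Session 4, the `js` object exposes a LiveAPI class that resolves LOM paths such as `live_set tracks 0 devices 0`. The sketch below is illustrative: `devicePath` is a hypothetical helper, not part of the API, and the commented lines show how it would be used inside Max:

```javascript
// Hypothetical helper: build a LOM path string from 0-based indices.
function devicePath(trackIndex, deviceIndex) {
    return "live_set tracks " + trackIndex + " devices " + deviceIndex;
}

// Inside a Max [js] object the path resolves to a Live object:
// var api = new LiveAPI(devicePath(0, 0));
// post(api.get("name"), "\n"); // posts the device's name to the Max console
```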

Requirements

  • A computer and internet connection

  • Access to a copy of Live Suite (preferably Live 11 Suite), trial or full licence

About the workshop leader

Mark Towers is an Ableton Certified Trainer and a lecturer in music technology at Leicester College. He specialises in Max for Live, as well as working with Isotonik Studios to create unique and creative devices for music production and performance such as the Arcade Series.

Max meetup – July 24th

Date & Time: Saturday 24th July – 4pm UK / 5pm Berlin / 8am LA / 11am NYC

Level: Open to all levels

Hosted by Melody Loveless.

Meetups are a great way to meet and be inspired by the Max community.

What to expect? 

The meetup runs via Zoom and will be approximately 2 hours in length.

This session will feature presentations from 3 expert practitioners: 

Viola Yip

Daniel McKemie

Virginia de las Pozas: Axine M

Following these presentations, breakout rooms are created where you can:

  • Talk to the presenters and ask questions

  • Join a room on topics of your choice

  • Show other participants your projects, ask for help, or help others out

  • Meet peers in the chill-out breakout room

Requirements: 

  • A computer and internet connection
  • A Zoom account

Berlin Code of Conduct

We ask all participants to read and follow the Berlin Code of Conduct and contribute to creating a welcoming environment for everyone.

Supported by Cycling ‘74

TouchDesigner meetup – LIVE Session / 31st July

Date & Time: Saturday 31st July 4pm UK / 5pm Berlin / 8am LA / 11am NYC

Level: Open to all levels

Meetups are a great way to meet and be inspired by the TouchDesigner community.

What to expect? 

The meetup runs via Zoom, the main session will be 2-hours in length with an additional hour open to the community for collaboration and sharing in breakout rooms.

This session focuses on Shaders and will feature presentations from TouchDesigner experts: 

Josef Luis Pelz – Real-time cloth simulation in TD with GLSL

We’ll have a look at a versatile real-time cloth simulation. Besides showing some example results, I’ll briefly explain how the system works and how I developed it step by step. It involves pixel, vertex and compute shaders.

Josef is a creative coder, generative art enthusiast and mathematician living and working in Berlin. With a background in mathematics and computer science, he links his passion for creative problem solving with aesthetics.

For more info check out: instagram.com/josefluispelz/ https://twitter.com/JosefPelz https://josefluispelz.com/

Louise Lessél – Shader conversions from TouchDesigner to Raspberry Pi

Louise will present an asset she created that lets you quickly convert Shadertoy shaders for use in your TouchDesigner projects, and go one step further and use the shader in your Raspberry Pi projects via Pi3D, driving either screens or HUB75 LED matrices. The asset is available in the TouchDesigner community assets.

Louise Lessél is a Danish New Media artist and Creative Technologist based in New York. She creates digital projections and interactive light installations based on scientific facts and data input, often exploring the limits of the human perceptual system or raising ecological awareness.

For more info check out: Instagram: @louiselessel

Torin Blankensmith & Peter Whidden – Interactive 3D Shaders with Shader Park and TouchDesigner

Torin and Peter will be showcasing their new plugin, which allows you to use Shader Park within TouchDesigner to quickly script interactive shaders. Explore 3D shader programming through a JavaScript interface without the complexity of GLSL. Shader Park is an open-source project for creating real-time graphics and procedural animations. Follow along through multiple examples using a live code editor. Expand upon the examples and bring them into TouchDesigner to create your own interactive graphics. Browse the Shader Park community’s gallery, where you can fork other people’s creations or feature your own.

Torin is a freelance creative technologist, teacher, and real-time graphics artist focusing on mixed reality installations and interactive experiences. He currently works at Studio Elsewhere, creating restorative immersive environments in collaboration with neuroscientists, focused on patients’ and medical workers’ well-being.

Peter is a creative software engineer whose work spans physics, astronomy, machine learning, and computer graphics. He currently works at the NY Times R&D lab, focusing on emerging computer vision and graphics techniques.

For more info check out: Instagram: @blankensmithing, @peterwhidden / Twitter: @tblankensmith

 

Following these presentations, breakout rooms are created where you can: 

  • Talk to the presenters and ask questions
  • Join a room on topics of your choice
  • Show other participants your projects, ask for help, or help others out
  • Collaborate with others 
  • Meet peers in the chill-out breakout room

 

Requirements 

  • A computer and internet connection

  • A Zoom account

 

Berlin Code of Conduct

We ask all participants to read and follow the Berlin Code of Conduct and contribute to creating a welcoming environment for everyone.

Supported by

 

Splatterbox – On-demand

Level: Intermediate

Max/MSP allows us to go far beyond the stompbox looper. In this workshop, discover how simple looping combinations can add up to a rich and layered soundscape. The beauty then lies in the retrospective operations we can apply to the loops with this patch, for a very personal and unique outcome.

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Build a custom loop station

  • Explore post processing and loop manipulations for unique soundscapes and performances

  • Build loops and samples in real time

Session Study Topics

  • [groove~] attributes to create contrasting loops and to avoid audio pops

  • [waveform~] and [jsui] objects to control and manipulate samples

  • [timer] to control recording durations and targets

Requirements

  • A computer and internet connection

  • A web cam and mic

  • A Zoom account

  • Access to a copy of Max 8 (i.e. trial or full license)

About the workshop leader 

James Wilkie holds a BMus in Film Scoring from Berklee College of Music and an MMus in Sonic Arts from Goldsmiths. This grounding continues to offer James a strong combination of concept and craft.

Storytelling lies at the heart of James’s work: he explores space and the imagination in sound through ambient A/V compositions, text, and live performances examining the impact of technology on the imagination and community.

Ample samples – Introduction to SuperCollider for monome norns – On-demand

Level: Beginner (many fundamentals of SuperCollider will be covered)

SuperCollider is an amazing open-source audio synthesis and composition tool.

In this workshop we will focus on sampling and learn how to use SuperCollider to make a fully featured sample player + looper which can be used for triggering drum kits, sequencing partitions, or a chaotic breakbeat style system. We will also learn the basics of applying effects, and finally how to integrate the resulting SuperCollider code into a new script for monome norns.
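The paragraph above can be made concrete with a minimal SuperCollider sample player, sketched here using the example sound file that ships with SuperCollider (the workshop’s player adds looping, triggering and effects on top of this idea):

```supercollider
// Load a sample into a buffer (a11wlk01.wav ships with SuperCollider).
b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");

// A one-shot sample player: BufRateScale keeps pitch correct across
// sample rates, and doneAction: 2 frees the synth when playback ends.
SynthDef(\player, { |buf, rate = 1, amp = 0.5|
    var sig = PlayBuf.ar(1, buf, rate * BufRateScale.kr(buf), doneAction: 2);
    Out.ar(0, (sig ! 2) * amp);
}).add;

// Trigger the sample, an octave down:
Synth(\player, [\buf, b, \rate, 0.5]);
```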

Summary

  • Learn how to use SuperCollider

  • Learn about sample playback in SuperCollider

  • Become familiar with sample playback and effect elements of SuperCollider

  • Integrate a playback engine into monome norns

By the end of this workshop, you will be able to:

  • Learn how to use SuperCollider starting from the basics

  • Understand SuperCollider UGens for sample playback and effects

  • Write SuperCollider code that can apply effects to sounds

  • Play SuperCollider samples in a monome norns script

Session Study Topics

  • Understanding UGens for sample playback

  • Develop SuperCollider code for effective sample playback

  • Use UGens to apply effects to samples

  • norns integration with MIDI and samples

We have a number of sponsorship places; if the registration fee is a barrier to you joining the workshop, please contact laura@stagingmhs.local.

 

Requirements

About the workshop leader

Zack Scholl is a Seattle, Washington-based tinkerer who releases music and norns scripts as “infinite digits”.

He has been programming for 12 years as part of his job developing instrumentation and conducting experiments to understand biophysical properties of human proteins.

Creative Riff Composition with MIDI – On-demand

Level: Beginner

A riff is by nature repetitive: you hear it again and again, it gets reinforced, and the rest of the song is built around it, as if the riff were the song’s skeleton.

This workshop aims to provide you with the necessary abilities to begin composing riffs and arranging a composition around such an important musical element.

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Apply critical listening skills to riffs & recurring motifs.

  • Extrapolate core musical qualities of a riff.

  • Construct a riff within a selected musical genre.

  • Apply arrangement techniques around a riff within a track.

Session Study Topics

  • MIDI programming

  • Rhythmic subdivision and polymeter

  • Micro fills and macro fills

  • Layering and subtractive arrangement techniques

Requirements

  • A computer and internet connection

  • A web cam and mic

  • A Zoom account

  • Access to a copy of Live Suite or Standard (i.e. trial or full license)

About the workshop leader: 

Simone Tanda is a musician, producer, multi-media artist, tech consultant, and educator.

Based between London & Berlin, he currently creates music for his own project, as well as for multidisciplinary artists, film, and commercials.

Getting started with Interactive Machine Learning for openFrameworks – On-demand

Level: Intermediate – C++ required

Using openFrameworks, ofxRapidLib and ofxMaxim, participants will learn how to integrate machine learning into generative applications. You will learn about the interactive machine learning workflow and how to implement classification, regression and gestural recognition algorithms.

You will explore a static classification approach that employs the k-Nearest Neighbour (KNN) algorithm to categorise data into discrete classes. This will be followed by an exploration of static regression problems, using multilayer perceptron neural networks to perform feed-forward, non-linear regression on a continuous data source. You will also explore an approach to temporal classification using dynamic time warping, which allows you to analyse and process gestural input.
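To make the classification step concrete, here is a toy k-nearest-neighbour classifier in plain C++. This is a sketch of the algorithm only; ofxRapidLib’s actual interface (collections of labelled training examples fed to a classification object) looks different:

```cpp
// Toy k-nearest-neighbour classifier: find the k training examples
// closest to a query point and return the majority label among them.
#include <algorithm>
#include <cmath>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Example {
    std::vector<double> input;
    std::string label;
};

std::string classify(const std::vector<Example>& train,
                     const std::vector<double>& point, int k) {
    // Sort training examples by Euclidean distance to the query point.
    std::vector<std::pair<double, std::string>> dists;
    for (const auto& ex : train) {
        double d = 0.0;
        for (size_t i = 0; i < point.size(); ++i)
            d += (ex.input[i] - point[i]) * (ex.input[i] - point[i]);
        dists.push_back({std::sqrt(d), ex.label});
    }
    std::sort(dists.begin(), dists.end());
    // Majority vote among the k nearest neighbours.
    std::map<std::string, int> votes;
    for (int i = 0; i < k && i < static_cast<int>(dists.size()); ++i)
        ++votes[dists[i].second];
    return std::max_element(votes.begin(), votes.end(),
        [](const auto& a, const auto& b) { return a.second < b.second; })->first;
}
```

With a handful of labelled two-dimensional examples, `classify(train, {1, 1}, 3)` returns the majority label among the three nearest neighbours; the course swaps this idea for ofxRapidLib’s implementation and real sensor data.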

This knowledge will allow you to build your own complex interactive artworks.

By the end of this series the participant will be able to:

Overall:

  • Set up an openFrameworks project for machine learning

  • Describe the interactive machine learning workflow

  • Identify the appropriate contexts in which to implement different algorithms

  • Build interactive applications based on classification, regression and gestural recognition algorithms

Session 1:

  • Set up an openFrameworks project for classification

  • Collect and label data

  • Use the data to control audio output

  • Observe output and evaluate model

Session 2:

  • Set up an openFrameworks project for regression

  • Collect data and train a neural network

  • Use the neural network output to control audio parameters

  • Adjust inputs to refine the output behaviour

Session 3:

  • Set up an openFrameworks project for series classification

  • Design gestures as control data

  • Use classification of gestures to control audio output

  • Refine gestural input to attain desired output

Session 4:

  • Explore methods for increasing complexity

  • Integrate visuals for multimodal output

  • Build mapping layers

  • Use models in parallel and series

Session Study Topics

Session 1:

  • Supervised Static Classification

  • Data Collection and Labelling

  • Classification Implementation

  • Model Evaluation

Session 2:

  • Supervised Static Regression

  • Data Collection and Training

  • Regression Implementation

  • Model Evaluation

Session 3:

  • Supervised Series Classification

  • Gestural Recognition

  • Dynamic Time Warp Implementation

  • Model Evaluation

Session 4:

  • Data Sources

  • Multimodal Integration

  • Mapping Techniques

  • Model Systems

Requirements

  • A computer with internet connection

  • Installed versions of the following software:

    • openFrameworks

    • ofxRapidLib

    • ofxMaxim

  • Preferred IDE (e.g. Xcode / Visual Studio)

About the workshop leader 

Bryan Dunphy is an audiovisual composer, musician and researcher interested in using machine learning to create audiovisual art. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He is close to completion of his PhD in Arts and Computational Technology at Goldsmiths, University of London.

Getting started with MIDI 2.0 development – On-demand

If you’re looking to book multiple tickets, please contact us for an invoice at info@stagingmhs.local

Level: Intermediate. Some experience with C++ coding required; experience with JUCE recommended.

To make the most of this on-demand workshop, participants should have experience building and debugging applications using Xcode (macOS) and Visual Studio (Windows).

Who is this course for:

Developers wanting to learn how MIDI 2.0 works under the hood, and how to get started writing software for it right away.

Overview of what participants will learn:

This course will provide developers with knowledge and code for starting MIDI 2.0 development. First, the concepts of MIDI 2.0 are explained. Then the participants will co-develop a first implementation of a MIDI-CI parser for robust device discovery, and for querying and offering Profiles; a stub workspace will be provided. Exercises will let the participants practise the newly learned concepts. Last but not least, the course also covers automated testing as a tool to verify the implementation.

Part 1: Overview of MIDI 2, concepts

  • MIDI-CI, Profiles, protocol negotiation, PE, UMP
  • Concepts
  • Tools
  • MIDI-CI Message Layout

Part 2: Workspace setup, Basic MIDI 2.0 Discovery

  • Workspace setup
  • Starting with a unit test
  • Implementing a MIDI 2.0 message parser
  • Implement MIDI 2.0 discovery

Part 3: Advanced MIDI 2.0 discovery and tests

  • Making the parser more robust
  • MUID collision handling
  • Multi-port and MIDI Thru issues
  • Unit tests + implementation

Part 4: Implementing Profiles; outlook on PE and UMP

  • Use Cases
  • Sending and receiving Profile messages
  • Implementation and tests
  • Quick introduction to PE and to UMP
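As a taste of the message-layout work in Parts 2 and 3: MIDI-CI identifies devices with a MUID, a randomly chosen 28-bit ID carried in SysEx as four 7-bit data bytes, least significant bits first, with 0x0FFFFFFF reserved as the broadcast MUID. A minimal sketch, assuming that byte layout:

```cpp
#include <array>
#include <cstdint>

using Muid = std::uint32_t;

// 0x0FFFFFFF is reserved as the broadcast MUID in MIDI-CI.
constexpr Muid kBroadcastMuid = 0x0FFFFFFF;

// Encode a 28-bit MUID into four SysEx-safe data bytes (LSB first).
std::array<std::uint8_t, 4> encodeMuid(Muid muid) {
    return { static_cast<std::uint8_t>(muid & 0x7F),
             static_cast<std::uint8_t>((muid >> 7) & 0x7F),
             static_cast<std::uint8_t>((muid >> 14) & 0x7F),
             static_cast<std::uint8_t>((muid >> 21) & 0x7F) };
}

// Decode four 7-bit data bytes back into a 28-bit MUID.
Muid decodeMuid(const std::array<std::uint8_t, 4>& b) {
    return b[0] | (b[1] << 7) | (b[2] << 14) | (static_cast<Muid>(b[3]) << 21);
}
```

The MUID collision handling covered in Part 3 builds on exactly this representation.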

 

At the end of the course series, the participants will:

  • Know the core concepts of MIDI 2.0
  • Understand the MIDI 2.0 discovery protocol
  • Be able to build products with MIDI 2.0 discovery
  • Be able to build products using MIDI 2.0 Profiles
  • Use an initial set of MIDI 2.0 unit tests

Requirements

  • A computer and internet connection

  • Xcode (macOS) / Visual Studio (Windows)

  • JUCE workspace

About the course leaders

Brett Porter is Lead Software Engineer at Artiphon, member of the MIDI Association Executive Board, and chair of the MIDI 2 Prototyping and Testing Working Group. He is based in the New York City area.

Florian Bomers runs his own company Bome Software, creating MIDI tools and hardware. He has been an active MIDI 2.0 working group member since its inception. He serves on the Technical Standards Board of the MIDI Association and chairs the MIDI 2.0 Transports Working Group. He is based in Munich, Germany.
