Getting started with Interactive Machine Learning for openFrameworks – On-demand

Level: Intermediate – C++ required

Using openFrameworks, ofxRapidLib and ofxMaxim, participants will learn how to integrate machine learning into generative applications. You will learn about the interactive machine learning workflow and how to implement classification, regression and gesture recognition algorithms.

You will explore a static classification approach that employs the k-nearest neighbour (KNN) algorithm to categorise data into discrete classes. This will be followed by an exploration of static regression problems, using multilayer perceptron neural networks to perform feed-forward, non-linear regression on a continuous data source. You will also explore an approach to temporal classification using dynamic time warping, which allows you to analyse and process gestural input.
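
To see the shape of this workflow in code, here is a minimal sketch of the static classification step. It assumes RapidLib's documented classes (ofxRapidLib wraps the same API); the feature values and class labels are invented for illustration.

    #include "rapidLib.h"
    #include <iostream>
    #include <vector>

    // Minimal static-classification sketch: collect labelled examples,
    // train, then run the model on new input. RapidLib's classification
    // class defaults to k-nearest neighbour.
    int main() {
        std::vector<rapidLib::trainingExample> trainingSet;

        rapidLib::trainingExample exampleA;
        exampleA.input  = { 0.1, 0.2 };   // e.g. normalised mouse x/y
        exampleA.output = { 0.0 };        // class 0
        trainingSet.push_back(exampleA);

        rapidLib::trainingExample exampleB;
        exampleB.input  = { 0.8, 0.9 };
        exampleB.output = { 1.0 };        // class 1
        trainingSet.push_back(exampleB);

        rapidLib::classification knn;
        knn.train(trainingSet);

        std::vector<double> predicted = knn.run({ 0.7, 0.8 });
        std::cout << "class " << predicted[0] << std::endl;  // expect 1
    }

The same collect/train/run loop recurs in every session; only the model and the shape of the data change.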

This knowledge will allow you to build your own complex interactive artworks.

By the end of this series, participants will be able to:

Overall:

  • Set up an openFrameworks project for machine learning

  • Describe the interactive machine learning workflow

  • Identify the appropriate contexts in which to implement different algorithms

  • Build interactive applications based on classification, regression and gesture recognition algorithms

Session 1:

  • Set up an openFrameworks project for classification

  • Collect and label data

  • Use the data to control audio output

  • Observe output and evaluate model

Session 2:

  • Set up an openFrameworks project for regression

  • Collect data and train a neural network

  • Use the neural network output to control audio parameters (see the code sketch after this list)

  • Adjust inputs to refine the output behaviour
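
As a rough illustration of Session 2's flow, the sketch below records input/frequency pairs, trains RapidLib's multilayer perceptron, and lets the model output drive an ofxMaxim oscillator. Sound-stream setup is omitted, and the key bindings and mappings are invented rather than taken from the workshop.

    #include "ofMain.h"
    #include "ofxMaxim.h"
    #include "rapidLib.h"

    class ofApp : public ofBaseApp {
    public:
        rapidLib::regression mlp;
        std::vector<rapidLib::trainingExample> examples;
        bool trained = false;
        maxiOsc osc;
        double freq = 440.0;   // however the target value is set while recording

        void keyPressed(int key) override {
            if (key == 'r') {                         // record one example
                rapidLib::trainingExample ex;
                ex.input  = { mouseX / (double)ofGetWidth(),
                              mouseY / (double)ofGetHeight() };
                ex.output = { freq };
                examples.push_back(ex);
            } else if (key == 't' && !examples.empty()) {
                trained = mlp.train(examples);        // fit the neural network
            }
        }

        void update() override {
            if (trained) {                            // continuous, non-linear mapping
                freq = mlp.run({ mouseX / (double)ofGetWidth(),
                                 mouseY / (double)ofGetHeight() })[0];
            }
        }

        void audioOut(ofSoundBuffer& buffer) override {
            for (size_t i = 0; i < buffer.getNumFrames(); ++i) {
                float s = (float)osc.sinewave(freq);  // model output as pitch
                buffer[i * 2]     = s;                // assumes a stereo buffer
                buffer[i * 2 + 1] = s;
            }
        }
    };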

Session 3:

  • Set up an openFrameworks project for series classification

  • Design gestures as control data

  • Use classification of gestures to control audio output (see the code sketch after this list)

  • Refine gestural input to attain desired output
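
A minimal sketch of Session 3's temporal classification, assuming RapidLib's trainingSeries and seriesClassification types, where run() returns the label of the closest training series under dynamic time warping. The two recorded "gestures" here are synthetic stand-ins for real input.

    #include "rapidLib.h"
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<rapidLib::trainingSeries> gestures;

        rapidLib::trainingSeries swipe;               // left-to-right motion
        swipe.label = "swipe";
        for (int i = 0; i <= 10; ++i) swipe.input.push_back({ i * 0.1, 0.5 });
        gestures.push_back(swipe);

        rapidLib::trainingSeries rise;                // bottom-to-top motion
        rise.label = "rise";
        for (int i = 0; i <= 10; ++i) rise.input.push_back({ 0.5, 1.0 - i * 0.1 });
        gestures.push_back(rise);

        rapidLib::seriesClassification dtw;
        dtw.train(gestures);

        std::vector<std::vector<double>> incoming;    // a new, unlabelled gesture
        for (int i = 0; i <= 5; ++i) incoming.push_back({ i * 0.2, 0.45 });

        std::cout << dtw.run(incoming) << std::endl;  // expect "swipe"
    }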

Session 4:

  • Explore methods for increasing complexity

  • Integrate visuals for multimodal output

  • Build mapping layers

  • Use models in parallel and series (see the code sketch after this list)
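
One hedged illustration of models in series, reusing the RapidLib classes from the earlier sketches: a regression acts as a mapping layer and a classifier consumes its output. The single training example per model is a placeholder; a real application would collect many.

    #include "rapidLib.h"
    #include <iostream>
    #include <vector>

    int main() {
        rapidLib::regression mappingLayer;      // raw input -> synth parameters
        rapidLib::classification presetPicker;  // synth parameters -> preset class

        rapidLib::trainingExample regEx;
        regEx.input  = { 0.2, 0.7 };            // e.g. normalised sensor values
        regEx.output = { 440.0, 0.5 };          // e.g. frequency and filter amount
        mappingLayer.train({ regEx });

        rapidLib::trainingExample classEx;
        classEx.input  = { 440.0, 0.5 };
        classEx.output = { 1.0 };               // preset label
        presetPicker.train({ classEx });

        // Series: stage one's output becomes stage two's input.
        std::vector<double> params = mappingLayer.run({ 0.3, 0.6 });
        std::vector<double> preset = presetPicker.run(params);
        std::cout << "preset " << preset[0] << std::endl;
    }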

Session Study Topics

Session 1:

  • Supervised Static Classification

  • Data Collection and Labelling

  • Classification Implementation

  • Model Evaluation

Session 2:

  • Supervised Static Regression

  • Data Collection and Training

  • Regression Implementation

  • Model Evaluation

Session 3:

  • Supervised Series Classification

  • Gestural Recognition

  • Dynamic Time Warp Implementation

  • Model Evaluation

Session 4:

  • Data Sources

  • Multimodal Integration

  • Mapping Techniques

  • Model Systems

Requirements

  • A computer with internet connection

  • Installed versions of the following software:

    • openFrameworks

    • ofxRapidLib

    • ofxMaxim

  • Preferred IDE (e.g. Xcode / Visual Studio)

About the workshop leader 

Bryan Dunphy is an audiovisual composer, musician and researcher interested in using machine learning to create audiovisual art. His work explores the interaction of abstract visual shapes, textures and synthesised sounds, and investigates strategies for creating, mapping and controlling audiovisual material in real time. He is nearing completion of his PhD in Arts and Computational Technology at Goldsmiths, University of London.

An introduction to Flora for monome norns – On-demand

Level: Some experience of norns required

Flora is an L-systems sequencer and bandpass-filtered sawtooth engine for monome norns. In this workshop you will learn how L-system algorithms are used to produce musical sequences while exploring the script’s UI and features.
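
For anyone new to L-systems, here is a tiny illustrative example in C++ (Flora itself is a norns script written in Lua): an axiom is rewritten by production rules each generation, and the resulting symbol string is read as a note sequence. The rules and the note mapping are invented, not Flora's.

    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        std::string axiom = "A";
        std::map<char, std::string> rules = { { 'A', "AB" }, { 'B', "A" } };

        for (int generation = 0; generation < 5; ++generation) {
            std::string next;
            for (char c : axiom) {
                auto it = rules.find(c);
                next += (it != rules.end()) ? it->second : std::string(1, c);
            }
            axiom = next;                    // A -> AB -> ABA -> ABAAB -> ...
        }

        for (char c : axiom)                 // read symbols as scale degrees
            std::cout << (c == 'A' ? 0 : 2) << ' ';
        std::cout << std::endl;
    }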

Flora on Vimeo

By the end of this workshop, you will be able to:

  • Navigate the Flora UI and parameters menus to build and perform your own compositions

  • Create dynamically shaped, multi-nodal envelopes to modulate Flora’s bandpass-filtered sawtooth engine

  • Build generative polyrhythms and delays into your compositions

  • Use crow and/or midi-enabled controllers and synthesizers to play Flora

Session study topics:

  • Sequencing with L-system algorithms

  • Physical modelling synthesis with bandpass filters

  • Generating multi-nodal envelopes

  • Norns integration with midi and/or crow


Requirements

  • A computer and internet connection

  • A norns device with Flora installed

  • Optional: A midi-enabled controller and/or synthesizer


We have a number of sponsorship places available. If the registration fee is a barrier to you joining the workshop, please contact laura@stagingmhs.local.


About the workshop leader 

Jonathan Snyder is a Portland, Oregon-based sound explorer and educator.

Previously, he worked for 22 years as a design technologist, IT manager, and educator at Columbia University’s Media Center for Art History, at Method, and at Adobe.

TouchDesigner meetup 17th April – Audio visualisation

Date & Time: Saturday 17th April 5pm – 7pm UK / 6pm – 8pm Berlin

Level: Open to all levels

Join the online meetup for expert talks on audio visualisation. Meet and be inspired by the TouchDesigner community.

The meetup runs via Zoom. The main session features short presentations from TouchDesigner users. Breakout rooms are created on the spot on specific topics, and you can request a new topic at any time.

The theme for this session is Audio visualisation, hosted by Bileam Tschepe with presentations from the community.

In the breakout rooms, you can share your screen to show other participants something you’re working on, ask for help, or help someone else.

Presenters:

Name: Ian MacLachlan
Title: Terraforming with MIDI
Bio: An experimental audio/visual artist from the Detroit area with an interest in creating interactive systems for spatial transformation.
Name: Jean-François Renaud
Title: Generating MIDI messages to synchronize sound and visual effect in TouchDesigner
Description: Instead of using audio analysis to drive the rendering, this talk focuses on building small generative machines from the basic properties of notes (pitch, velocity) and looks at different ways to manage triggering. The goal is still to merge what you hear with what you see, and to bring both to life.
Bio: Interactive media professor at École des médias, UQAM, Montréal
Vimeo: https://vimeo.com/morpholux
Name: Bileam Tschepe
Title: algorhythm – a first look into my software
Description: I’ve been working on a tool for audiovisual live performances, and I’d like to share its current state and see if people are interested in collaborating and working with me.
Bio: Berlin based artist and educator who creates audio-reactive, interactive and organic digital artworks, systems and installations in TouchDesigner, collaborating with and teaching people worldwide.
YouTube: Bileam Tschepe

Requirements

  • A Zoom account
  • A computer and internet connection

Berlin Code of Conduct

We ask all participants to read and follow the Berlin Code of Conduct and contribute to creating a welcoming environment for everyone.


Understanding Indian rhythm through simple algorithms – On-demand

Level: All Max users

South Indian Carnatic music is home to a huge array of fascinating rhythms, many of them composed from simple algorithms. Rooted in maths and aesthetics, Carnatic music has many facets that can be applied to computer music. This workshop introduces the tradition and gives you the opportunity to observe, create, and hack various Max patches that demonstrate some of these ideas.
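
The workshop builds these ideas as Max patches, but the underlying arithmetic is easy to sketch in a few lines of C++. The example below fills one cycle of an 8-beat tala at a nadai (subdivision) of 4 with the five-syllable solkattu group "ta di gi na thom", padding the remainder with rests; the specific numbers are illustrative.

    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        const int beats = 8, nadai = 4;        // 8-beat cycle, 4 pulses per beat
        const int pulses = beats * nadai;      // 32 pulses in the cycle
        const std::vector<std::string> group = { "ta", "di", "gi", "na", "thom" };

        int written = 0;
        while (pulses - written >= (int)group.size()) {
            for (const auto& s : group) std::cout << s << ' ';
            written += group.size();           // place whole groups of five
        }
        while (written++ < pulses) std::cout << ". ";   // rests fill the remainder
        std::cout << std::endl;
    }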

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Recite a simple rhythmic konnakol phrase

  • Conceive simple rhythmic algorithms

  • Translate these concepts into simple Max patches

  • Understand South Indian rhythmic concepts & terminology such as Tala, Jhati, and Nadai

Session Study Topics

  • Learning a konnakol phrase

  • Understanding Tala cycles

  • Understanding Jhati and Nadai

  • Translating rhythmic algorithms into code

Requirements

  • A computer and internet connection

  • A webcam and mic

  • A Zoom account

  • Access to a copy of Max 8 (trial or full license)

About the workshop leader

Dom Aversano is a composer and percussionist based between Valencia and London, with a particular interest in combining ideas from the South Indian classical and Western music traditions. He has performed internationally as a percussionist, and has produced award-winning installation work exhibited in Canada, Italy, Greece, Australia, and the UK.

For a decade Dom has studied South Indian Carnatic music in London and Chennai. He has studied with mridangam virtuoso Sri Balachandar, resident percussionist of The Bhavan music centre in London, as well as, for shorter periods, with Somashekar Jois and M N Hariharan.