Getting started with Interactive Machine Learning for openFrameworks – On-demand

Level: Intermediate – C++ required

Using openFrameworks, ofxRapidLib and ofxMaxim, you will learn how to integrate machine learning into generative applications. You will learn about the interactive machine learning workflow and how to implement classification, regression and gestural recognition algorithms.

You will explore a static classification approach that employs the k-Nearest Neighbour (KNN) algorithm to categorise data into discrete classes. This will be followed by an exploration of static regression problems, using multilayer perceptron neural networks to perform feed-forward, non-linear regression on a continuous data source. You will also explore an approach to temporal classification using dynamic time warping, which allows you to analyse and process gestural input.
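
To give a flavour of that workflow, here is a minimal, hypothetical sketch of the train-then-run cycle for the KNN case, assuming the RapidLib API exposed by the ofxRapidLib addon (class, namespace and method names may vary slightly between addon versions):

    #include "rapidLib.h"   // provided by the ofxRapidLib addon
    #include <vector>

    rapidLib::classification knn;                      // k-nearest neighbour by default
    std::vector<rapidLib::trainingExample> trainingSet;

    // training phase: pair example inputs (e.g. normalised mouse x/y) with class labels
    rapidLib::trainingExample example;
    example.input  = { 0.2, 0.7 };
    example.output = { 1.0 };                          // class 1
    trainingSet.push_back(example);

    example.input  = { 0.8, 0.1 };
    example.output = { 2.0 };                          // class 2
    trainingSet.push_back(example);

    knn.train(trainingSet);

    // run phase: feed live input and read back the predicted class,
    // which can then select a sound or preset
    std::vector<double> predicted = knn.run({ 0.25, 0.65 });   // predicted[0] ≈ 1.0

The same collect, train and run loop recurs throughout the series, with only the model type changing.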

This knowledge will allow you to build your own complex interactive artworks.

By the end of this series the participant will be able to:

Overall:

  • Set up an openFrameworks project for machine learning

  • Describe the interactive machine learning workflow

  • Identify the appropriate contexts in which to implement different algorithms

  • Build interactive applications based on classification, regression and gestural recognition algorithms

Session 1:

  • Set up an openFrameworks project for classification

  • Collect and label data

  • Use the data to control audio output

  • Observe output and evaluate model

Session 2:

  • Set up an openFrameworks project for regression

  • Collect data and train a neural network

  • Use the neural network output to control audio parameters (see the sketch after this list)

  • Adjust inputs to refine the output behaviour
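
As a rough illustration of this session, the sketch below assumes RapidLib's regression class (a multilayer perceptron under the hood) and shows how two training examples let the network interpolate audio parameters from live input; exact names and signatures may differ between ofxRapidLib versions:

    #include "rapidLib.h"   // provided by the ofxRapidLib addon
    #include <vector>

    rapidLib::regression mlp;                          // multilayer perceptron
    std::vector<rapidLib::trainingExample> trainingSet;

    // associate an input position with the synth settings wanted at that position
    rapidLib::trainingExample quiet;
    quiet.input  = { 0.1, 0.2 };                       // e.g. normalised controller x/y
    quiet.output = { 110.0, 0.2 };                     // e.g. frequency (Hz), gain
    trainingSet.push_back(quiet);

    rapidLib::trainingExample loud;
    loud.input  = { 0.9, 0.8 };
    loud.output = { 880.0, 0.9 };
    trainingSet.push_back(loud);

    mlp.train(trainingSet);

    // in update() or the audio callback: continuous, non-linear interpolation
    std::vector<double> params = mlp.run({ 0.5, 0.5 });
    // params[0] -> oscillator frequency, params[1] -> gain, fed to ofxMaxim

Because the perceptron interpolates smoothly between the examples, small adjustments to the training points are usually enough to refine the output behaviour.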

Session 3:

  • Set up an openFrameworks project for series classification

  • Design gestures as control data

  • Use classification of gestures to control audio output (see the sketch after this list)

  • Refine gestural input to attain desired output
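
The gesture-recognition step could look roughly like this hypothetical sketch, which assumes RapidLib's seriesClassification class (dynamic time warping over ordered input frames); the trainingSeries structure and run() signature should be checked against your ofxRapidLib version:

    #include "rapidLib.h"   // provided by the ofxRapidLib addon
    #include <string>
    #include <vector>

    rapidLib::seriesClassification dtw;                // dynamic time warping classifier

    // a training series is an ordered list of input frames plus a text label
    rapidLib::trainingSeries circleGesture;
    circleGesture.label = "circle";
    circleGesture.input.push_back({ 0.10, 0.50 });     // e.g. x/y frames recorded over time
    circleGesture.input.push_back({ 0.30, 0.80 });
    circleGesture.input.push_back({ 0.60, 0.50 });

    std::vector<rapidLib::trainingSeries> gestureSet = { circleGesture };
    dtw.train(gestureSet);

    // at run time, buffer the live gesture and ask which template it is closest to
    std::vector<std::vector<double>> liveGesture = { { 0.12, 0.52 }, { 0.32, 0.78 }, { 0.58, 0.52 } };
    std::string match = dtw.run(liveGesture);          // e.g. "circle", used to drive audio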

Session 4:

  • Explore methods for increasing complexity

  • Integrate visuals for multimodal output

  • Build mapping layers

  • Use models in parallel and series (sketched below)
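
Purely as an illustration of the parallel and series arrangements, the sketch below chains hypothetical, already-trained RapidLib models (variable names are invented and training is omitted for brevity):

    #include "rapidLib.h"   // provided by the ofxRapidLib addon
    #include <vector>

    // hypothetical models sharing one input stream (training omitted)
    rapidLib::regression audioModel, visualModel;
    rapidLib::classification sceneClassifier;
    std::vector<rapidLib::regression> sceneModels(3);  // one regression per scene

    void mapInput(const std::vector<double>& input)
    {
        // in parallel: two models read the same input, driving audio and visuals
        std::vector<double> audioParams  = audioModel.run(input);
        std::vector<double> visualParams = visualModel.run(input);

        // in series: a classifier picks a scene (assuming labels 0, 1, 2),
        // and that scene selects which regression model to run next
        int scene = static_cast<int>(sceneClassifier.run(input)[0]);
        std::vector<double> sceneParams = sceneModels[scene].run(input);
    }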

Session Study Topics

Session 1:

  • Supervised Static Classification

  • Data Collection and Labelling

  • Classification Implementation

  • Model Evaluation

Session 2:

  • Supervised Static Regression

  • Data Collection and Training

  • Regression Implementation

  • Model Evaluation

Session 3:

  • Supervised Series Classification

  • Gestural Recognition

  • Dynamic Time Warping Implementation

  • Model Evaluation

Session 4:

  • Data Sources

  • Multimodal Integration

  • Mapping Techniques

  • Model Systems

Requirements

  • A computer with internet connection

  • Installed versions of the following software:

    • openFrameworks

    • ofxRapidLib

    • ofxMaxim

  • Preferred IDE (e.g. Xcode / Visual Studio)

About the workshop leader 

Bryan Dunphy is an audiovisual composer, musician and researcher interested in using machine learning to create audiovisual art. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He has recently completed his PhD in Arts and Computational Technology at Goldsmiths, University of London.

Immersive AV Composition – On-demand / 2 Sessions

Level: Advanced

These workshops will introduce you to the ImmersAV toolkit. The toolkit brings together Csound and OpenGL shaders to provide a native C++ environment where you can create abstract audiovisual art. You will learn how to generate material and map parameters using ImmersAV’s Studio() class. You will also learn how to render your work on a SteamVR-compatible headset using OpenVR. You will then make your fully immersive creations interactive using machine learning, integrated through the RapidLib library.
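
For orientation, here is a minimal sketch of hosting the Csound API from a plain C++ program, independent of ImmersAV’s own Studio() wrapper; it compiles a tiny orchestra, starts the engine and drives it one control block at a time (assuming Csound 6’s C++ interface):

    #include <csound.hpp>   // Csound's C++ API
    #include <string>

    int main()
    {
        Csound csound;
        csound.SetOption("-odac");                     // audio to the default output device

        const std::string orc =
            "sr = 48000\n"
            "ksmps = 64\n"
            "nchnls = 2\n"
            "0dbfs = 1\n"
            "instr 1\n"
            "  aSig poscil 0.3, p4\n"
            "  outs aSig, aSig\n"
            "endin\n";

        csound.CompileOrc(orc.c_str());
        csound.ReadScore("i 1 0 4 440\n");             // play instrument 1 for 4 s at 440 Hz
        csound.Start();

        // the host application owns the audio loop, one control block per call
        while (csound.PerformKsmps() == 0) {}

        csound.Stop();
        return 0;
    }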

Session Learning Outcomes

By the end of these sessions a successful student will be able to:

  • Set up and use the ImmersAV toolkit

  • Discuss techniques for rendering material on VR headsets

  • Implement the Csound API within a C++ application

  • Create mixed raymarched and raster-based graphics

  • Create an interactive visual scene using a single fragment shader

  • Generate the Mandelbulb fractal (see the distance-estimator sketch after this list)

  • Generate procedural audio using Csound

  • Map controller position and rotation to audiovisual parameters using machine learning
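
For reference, the Mandelbulb is typically raymarched with the widely used power-8 distance estimator. The sketch below writes it in C++ using the glm maths library purely for readability; in the workshop the equivalent code would live inside the single fragment shader:

    #include <glm/glm.hpp>   // header-only vector maths
    #include <cmath>

    // Standard power-8 Mandelbulb distance estimator: iterate z -> z^8 + pos in
    // spherical coordinates, tracking a running derivative for the distance bound.
    float mandelbulbDE(const glm::vec3& pos)
    {
        glm::vec3 z = pos;
        float dr = 1.0f;
        float r  = 0.0f;
        const float power = 8.0f;

        for (int i = 0; i < 16; ++i) {
            r = glm::length(z);
            if (r > 2.0f) break;

            // convert to spherical coordinates
            float theta = std::acos(z.z / r);
            float phi   = std::atan2(z.y, z.x);
            dr = std::pow(r, power - 1.0f) * power * dr + 1.0f;

            // raise the point to the 8th power and convert back
            float zr = std::pow(r, power);
            theta *= power;
            phi   *= power;
            z = zr * glm::vec3(std::sin(theta) * std::cos(phi),
                               std::sin(theta) * std::sin(phi),
                               std::cos(theta));
            z += pos;
        }
        return 0.5f * std::log(r) * r / dr;   // conservative distance to the surface
    }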

Session Study Topics

  • Native C++ development for VR

  • VR rendering techniques

  • Csound API integration

  • Real-time graphics rendering techniques

  • GLSL shaders

  • 3D fractals

  • Audio synthesis

  • Machine learning

Requirements

  • A computer and internet connection

  • A webcam and microphone

  • A Zoom account

  • Cloned copy of the ImmersAV toolkit plus dependencies

  • VR headset capable of connecting to SteamVR

About the workshop leader 

Bryan Dunphy is an audiovisual composer, musician and researcher interested in generative approaches to creating audiovisual art in performance and immersive contexts. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He has recently completed his PhD in Arts and Computational Technology at Goldsmiths, University of London.

Visual Music Performance with Machine Learning – On-demand

Level: Intermediate

In this workshop you will use openFrameworks to build a real-time audiovisual instrument. You will generate dynamic abstract visuals within openFrameworks and procedural audio using the ofxMaxim addon. You will then learn how to control the audiovisual material by mapping controller input to audio and visual parameters using the ofxRapidLib addon.
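
As a flavour of the audio side, here is a minimal two-operator FM voice assuming ofxMaxim’s maxiOsc API; in the workshop, parameters like these would typically come from the neural network’s output rather than being fixed:

    #include "ofxMaxim.h"   // Maximilian DSP addon

    // two-operator FM: a modulator oscillator deviates the carrier frequency
    maxiOsc carrier, modulator;

    double carrierFreq = 220.0;   // Hz
    double modFreq     = 110.0;   // Hz
    double modIndex    = 80.0;    // depth of frequency deviation, in Hz

    // called once per sample from the openFrameworks audio callback
    double nextSample()
    {
        double deviation = modulator.sinewave(modFreq) * modIndex;
        return carrier.sinewave(carrierFreq + deviation) * 0.5;
    }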

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Create generative visual art in openFrameworks

  • Create procedural audio in openFrameworks using ofxMaxim

  • Discuss interactive machine learning techniques

  • Use a neural network to control audiovisual parameters simultaneously in real time

Session Study Topics

  • 3D primitives and Perlin noise

  • FM synthesis

  • Regression analysis using multilayer perceptron neural networks

  • Real-time controller integration

Requirements

  • A computer and internet connection

  • A webcam and microphone

  • A Zoom account

  • Installed version of openFrameworks

  • Downloaded addons: ofxMaxim and ofxRapidLib

  • Access to MIDI/OSC controller (optional – mouse/trackpad will also suffice)

About the workshop leader 

Bryan Dunphy is an audiovisual composer, musician and researcher interested in generative approaches to creating audiovisual art. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He has recently completed his PhD in Arts and Computational Technology at Goldsmiths, University of London.
