Create procedural audio in openFrameworks using ofxMaxim
Discuss interactive machine learning techniques
Use a neural network to control audiovisual parameters simultaneously in real time
Requirements
A computer and internet connection
A webcam and microphone
A Zoom account
An installed version of openFrameworks
The ofxMaxim and ofxRapidLib addons downloaded
Access to a MIDI/OSC controller (optional - a mouse/trackpad will also suffice)
Course content
---- Course Overview
---- Requirements
---- Pre-course preparation
---- Work sheet with exercises
---- Part 1 - Sphere setup
---- Part 2 - Phong lighting
---- Part 3 - Camera + Normal matrix
---- Part 4 - Vertex displacement
---- Part 5 - ofxMaxim setup
---- Part 6 - Simple FM synth (see the first sketch after this list)
---- Part 7 - Machine Learning - Data collection
---- Part 8 - Machine Learning - Train + Run model
---- Part 9 - OSC controller (see the second sketch after this list)
---- Finished Project on GitHub
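Parts 5 and 6 build a two-operator FM synth inside the openFrameworks audio callback. As a rough illustration of where that ends up, here is a minimal sketch using ofxMaxim's maxiOsc; the oscillator names and parameter values are placeholders, not the workshop's exact code.

```cpp
// ofApp.h - minimal FM synth sketch with ofxMaxim (names and values are illustrative)
#pragma once
#include "ofMain.h"
#include "ofxMaxim.h"

class ofApp : public ofBaseApp {
public:
    void setup();
    void audioOut(ofSoundBuffer &buffer);

    ofSoundStream soundStream;
    maxiOsc carrier, modulator;   // two sine oscillators: carrier + modulator
    double carrierFreq = 220.0;   // base pitch in Hz
    double modFreq     = 110.0;   // modulation rate in Hz
    double modIndex    = 200.0;   // modulation depth in Hz
};

// ofApp.cpp
void ofApp::setup() {
    int sampleRate = 44100, bufferSize = 512;
    maxiSettings::setup(sampleRate, 2, bufferSize);   // keep Maximilian in sync with the stream

    ofSoundStreamSettings settings;
    settings.setOutListener(this);
    settings.sampleRate = sampleRate;
    settings.numOutputChannels = 2;
    settings.bufferSize = bufferSize;
    soundStream.setup(settings);
}

void ofApp::audioOut(ofSoundBuffer &buffer) {
    for (size_t i = 0; i < buffer.getNumFrames(); ++i) {
        // classic two-operator FM: the modulator wobbles the carrier's frequency
        double mod    = modulator.sinewave(modFreq) * modIndex;
        double sample = carrier.sinewave(carrierFreq + mod);
        buffer[i * buffer.getNumChannels()]     = sample; // left
        buffer[i * buffer.getNumChannels() + 1] = sample; // right
    }
}
```

Part 9 replaces the mouse with an external controller. A minimal sketch of receiving control values with openFrameworks' bundled ofxOsc addon might look like the following; the /controller/xy address pattern and port 9000 are assumptions, so match them to whatever your controller actually sends.

```cpp
// Minimal OSC input sketch (the /controller/xy address and port 9000 are placeholders)
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    void setup() { receiver.setup(9000); }   // listen on UDP port 9000

    void update() {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            if (m.getAddress() == "/controller/xy") {
                controlX = m.getArgAsFloat(0);   // e.g. normalised 0..1 values
                controlY = m.getArgAsFloat(1);
            }
        }
    }

    ofxOscReceiver receiver;
    float controlX = 0, controlY = 0;   // fed to the model / synth elsewhere
};
```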
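 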
Who is this course for
In this workshop you will use openFrameworks to build a real-time audiovisual instrument. You will generate dynamic abstract visuals in openFrameworks and procedural audio with the ofxMaxim addon. You will then learn how to control the audiovisual material by mapping controller input to audio and visual parameters using the ofxRapidLib addon.
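To give a sense of how that mapping works, the sketch below pairs a 2-D control input with a set of synth and visual parameters, trains a regression model (a small neural network) on those examples, and then runs the model on live input. It follows the pattern of the typical ofxRapidLib examples; the class names regression and trainingExample come from that addon but may vary slightly between versions, and the specific parameters (carrier frequency, modulation index, vertex displacement amount) are purely illustrative.

```cpp
#include "ofMain.h"
#include "ofxRapidLib.h"

class ofApp : public ofBaseApp {
public:
    // model + training data (class names follow the ofxRapidLib examples;
    // they may differ slightly between addon versions)
    regression nn;                          // neural-network regression model
    std::vector<trainingExample> examples;
    bool trained = false;

    // illustrative audiovisual parameters driven by the model
    double carrierFreq = 220, modIndex = 100, displaceAmount = 0.1;

    // record one example pairing the current input with the current parameters
    void recordExample(double x, double y) {
        trainingExample ex;
        ex.input  = { x, y };
        ex.output = { carrierFreq, modIndex, displaceAmount };
        examples.push_back(ex);
    }

    void keyPressed(int key) {
        if (key == 't' && !examples.empty()) {
            trained = nn.train(examples);   // train on everything recorded so far
        }
    }

    void update() {
        if (trained) {
            // run the model on the live input and push outputs to the synth/visuals
            std::vector<double> in  = { (double)mouseX / ofGetWidth(),
                                        (double)mouseY / ofGetHeight() };
            std::vector<double> out = nn.run(in);
            carrierFreq    = out[0];
            modIndex       = out[1];
            displaceAmount = out[2];
        }
    }
};
```

The same loop works with any input source: swap the mouse coordinates for the OSC or MIDI controller values and the rest of the code is unchanged.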
Useful links
About the workshop leader
Bryan Dunphy completed his PhD at Goldsmiths, University of London in 2021. He specialises in immersive audiovisual performance, and most of his work uses machine learning.