Discuss techniques for rendering material on VR headsets
Implement the Csound API within a C++ application (see the sketch after this list)
Create mixed raymarched and raster-based graphics
Create an interactive visual scene using a single fragment shader
Generate the Mandelbulb fractal
Generate procedural audio using Csound
Map controller position and rotation to audiovisual parameters using machine learning
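To give a flavour of the Csound outcome above, here is a minimal, hedged sketch of hosting the Csound 6 C++ API in a standalone program. It is not ImmersAV code: the inline .csd and the "freq" channel name are illustrative, and ImmersAV's own Studio() class wraps this plumbing for you.

```cpp
// Minimal sketch: hosting Csound inside a C++ application.
// Assumes the Csound 6 C++ API (csound.hpp); the .csd text and the
// "freq" channel are illustrative, not taken from ImmersAV.
#include <csound.hpp>

int main()
{
    Csound csound;
    csound.SetOption("-odac");   // render in realtime to the default audio device

    // A real project would load a .csd file; here the orchestra is inline.
    const char* csd = R"(
<CsoundSynthesizer>
<CsInstruments>
sr = 48000
ksmps = 64
nchnls = 2
0dbfs = 1
instr 1
  kFreq chnget "freq"           ; control value pushed in from the C++ host
  aSig  oscili  0.2, kFreq
        outs    aSig, aSig
endin
</CsInstruments>
<CsScore>
i 1 0 10
</CsScore>
</CsoundSynthesizer>
)";

    if (csound.CompileCsdText(csd) != 0 || csound.Start() != 0)
        return 1;

    // Render one control block at a time, updating the channel as we go.
    while (csound.PerformKsmps() == 0)
        csound.SetChannel("freq", 440.0);

    return 0;
}
```

Control channels like this are the usual route for host-side values (controller data, machine learning output) to reach a running Csound instance.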
Requirements
A computer and internet connection
A webcam and mic
A Zoom account
Cloned copy of the ImmersAV toolkit plus dependencies
VR headset capable of connecting to SteamVR
Course content
---- Course Overview
---- Requirements
---- Installation of ImmersAV
---- Session 1 Worksheet
---- Session 2 Worksheet
---- Reading Material
---- Session 2 Files
---- Part 1 - Project Setup
---- Part 2 - Audio - Environmental Noise
---- Part 3 - Audio - Granular Patch
---- Part 4 - Visuals - Infinite Plane
---- Part 5 - Visuals - Colour the Scene
---- Part 6 - Visuals - Mandelbulb
---- Part 7 - Studio - Sound Source Placement
---- Part 8 - VR Rendering
---- Part 1 - Setup
---- Part 2 - Parameter Preparation
---- Part 3 - Parameter Randomisation
---- Part 4 - Neural Network Input
---- Part 5 - Machine Learning Test
---- Part 6 - Controller Bindings
Who is this course for
These workshops will introduce you to the ImmersAV toolkit, which brings together Csound and OpenGL shaders in a native C++ environment for creating abstract audiovisual art. You will learn how to generate material and map parameters using ImmersAV’s Studio() class, and how to render your work on a SteamVR-compatible headset using OpenVR. You will then make your fully immersive creations interactive using machine learning, integrated through the rapidLib library.
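The machine learning mapping follows a regression pattern: record pairs of controller pose and audiovisual parameters as training examples, train a model, then run it every frame. Below is a hedged sketch of that pattern using RapidLib's C++ regression class on its own, outside the toolkit; the header name and rapidLib namespace vary between RapidLib versions, and the pose values and target parameters here are invented for illustration.

```cpp
// Hedged sketch: regression-based mapping with RapidLib.
// Header/namespace may differ between RapidLib versions; the input pose
// and output parameter choices are purely illustrative.
#include <vector>
#include "rapidLib.h"

int main()
{
    rapidLib::regression mapper;                 // small neural-network regressor
    std::vector<rapidLib::trainingExample> examples;

    // Each example pairs a controller pose (x, y, z, yaw) with the
    // audiovisual parameters it should produce at that pose.
    rapidLib::trainingExample ex;
    ex.input  = { 0.0, 1.2, -0.5, 0.0 };
    ex.output = { 220.0, 0.3 };                  // e.g. grain pitch, fractal power
    examples.push_back(ex);

    ex.input  = { 0.4, 1.6, 0.2, 1.5 };
    ex.output = { 880.0, 0.9 };
    examples.push_back(ex);

    mapper.train(examples);                      // fit the model to the examples

    // Per frame: feed the live pose in, read interpolated parameters out,
    // then forward them to Csound channels and shader uniforms.
    std::vector<double> params = mapper.run({ 0.2, 1.4, -0.1, 0.7 });
    (void)params;
    return 0;
}
```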
Useful links
About the workshop leader
Bryan Dunphy completed his PhD at Goldsmiths, University of London in 2021. He specialises in immersive audiovisual performances and works, most of which use machine learning.