Dates
Session 1: Tuesday 9th February 6pm GMT
Session 2: Tuesday 16th February 6pm GMT
Level: Advanced
These workshops will introduce you to the ImmersAV toolkit. The toolkit brings together Csound and OpenGL shaders in a native C++ environment for creating abstract audiovisual art. You will learn how to generate material and map parameters using ImmersAV's Studio() class. You will also learn how to render your work on a SteamVR-compatible headset using OpenVR. You will then make your fully immersive creations interactive using machine learning, via the integrated rapidLib library.
Session Learning Outcomes
By the end of this session, a successful student will be able to:
Set up and use the ImmersAV toolkit
Discuss techniques for rendering material on VR headsets
Implement the Csound API within a C++ application
Create mixed raymarched and raster-based graphics
Create an interactive visual scene using a single fragment shader
Generate the mandelbulb fractal
Generate procedural audio using Csound
Map controller position and rotation to audiovisual parameters using machine learning
Session Study Topics
Native C++ development for VR
VR rendering techniques
Csound API integration
Real-time graphics rendering techniques
GLSL shaders
3D fractals
Audio synthesis
Machine learning
Requirements
A computer and internet connection
A webcam and microphone
A Zoom account
A cloned copy of the ImmersAV toolkit plus its dependencies
A VR headset capable of connecting to SteamVR
About the workshop leader
Bryan Dunphy is an audiovisual composer, musician and researcher interested in generative approaches to creating audiovisual art in performance and immersive contexts. His work explores the interaction of abstract visual shapes, textures and synthesised sounds, and strategies for creating, mapping and controlling audiovisual material in real time. He recently completed his PhD in Arts and Computational Technology at Goldsmiths, University of London.