Immersive AV Composition – On demand / 2 Sessions

Level: Advanced

These workshops introduce the ImmersAV toolkit, which brings together Csound and OpenGL shaders in a native C++ environment for creating abstract audiovisual art. You will learn how to generate material and map parameters using ImmersAV’s Studio() class, and how to render your work on a SteamVR-compatible headset using OpenVR. Your fully immersive creations will then become interactive through machine learning, integrated via the rapidLib library.

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Set up and use the ImmersAV toolkit

  • Discuss techniques for rendering material on VR headsets

  • Implement the Csound API within a C++ application

  • Create mixed raymarched and raster-based graphics

  • Create an interactive visual scene using a single fragment shader

  • Generate the Mandelbulb fractal

  • Generate procedural audio using Csound

  • Map controller position and rotation to audiovisual parameters using machine learning

Session Study Topics

  • Native C++ development for VR

  • VR rendering techniques

  • Csound API integration

  • Real-time graphics rendering techniques

  • GLSL shaders

  • 3D fractals

  • Audio synthesis

  • Machine learning

Requirements

  • A computer and internet connection

  • A webcam and microphone

  • A Zoom account

  • Cloned copy of the ImmersAV toolkit plus dependencies

  • VR headset capable of connecting to SteamVR

About the workshop leader 

Bryan Dunphy is an audiovisual composer, musician and researcher interested in generative approaches to creating audiovisual art in performance and immersive contexts. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He has recently completed his PhD in Arts and Computational Technology at Goldsmiths, University of London.

Interface design in Max with JS/JSUI

In this workshop we’ll look at scripting techniques for changing the appearance and behaviour of Max patchers, and at using JavaScript graphics to build new types of on-screen displays and controls.

Max contains an embedded JavaScript engine that can be used to control aspects of Max from a textual language, providing more power and versatility than the default click-and-drag graphical interface that Max programmers are used to. The JavaScript engine also includes an embedded graphics system, allowing entirely new and innovative interface elements to be created and embedded into the familiar Max world.

Topics:

– Max
– JavaScript
– Patchers and scripting
– Graphics libraries

Requirements:
– Difficulty level: intermediate
– A good working knowledge of Max is expected
– Some familiarity with textual programming languages and graphics programming would be useful but is not required.

About the workshop leader:

Nick Rothwell is a composer, performer, software architect, coder and visual artist. He has built media performance systems for projects with Ballett Frankfurt and Vienna Volksoper, composed sound scores for Aydın Teker (Istanbul) and Shobana Jeyasingh Dance, live coded in Mexico and in Berlin with sitar player Shama Rahman, written software for Studio Wayne McGregor and the Pina Bausch Foundation, and developed algorithmic visuals for large-scale outdoor installations in Poland, Estonia, Cambridge Music Festival and Lumiere (London / Durham). He also teaches at Ravensbourne University London and writes for Sound On Sound magazine.