Create Music Visuals with TouchDesigner / Going Further – On-demand
Level: Beginner
Overview
TouchDesigner is a tool gaining popularity among musicians who want to create visuals for their music and experiment with audio-visual art. The goal of this course is to show how you can get started with TouchDesigner without risking discouragement. You will learn the basic concepts underpinning the creation and rendering of 3D geometries, what the different parameters of PBR materials mean, and how particle systems work. You will use these concepts to create shiny, noisy, audio-reactive visualizations that can be plugged into your live performance straight away. The course will equip you with tools and maps that will make your further TouchDesigner journey more productive and fun. This series is a continuation of the workshop Create Music Visuals with TouchDesigner – Fundamentals.
Session Learning Outcomes
Session 1
- Change basic geometries
- Understand the difference between CPU and GPU calculations
- Make different audio frequencies trigger different events
- Create an audio-reactive visualization using instanced geometries
Session 2
- Import 3D assets
- Perform transformations on imported 3D geometries (increase and decrease polygon count, translate)
- Understand and use PBR maps
- Set up PBR shading for 3D assets
Session 3
- Be confident with the main properties of a particle and a particle system
- Create a basic particle system
- Configure the emission properties of a particle system
- Use force-bearing objects to influence particle behavior
- Confidently navigate the TD learning resources and know where to get help
Session Study Topics
Session 1
- SOP generators and filters
- Operators that use the CPU and GPU
- Geometry instancing
- Connecting a Trigger CHOP to different frequencies of an audio spectrum
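To illustrate the last topic: the kind of band-triggering logic that a Trigger CHOP performs on an audio spectrum can be sketched outside TouchDesigner in plain Python. In TD itself the wiring is an Audio Spectrum CHOP feeding Trigger CHOPs; the band names, thresholds, and events below are invented for the example.

```python
# Illustrative sketch only: decide which events fire based on per-band
# magnitudes of an audio spectrum. Band names and thresholds are made up.

BAND_THRESHOLDS = {
    "bass": 0.6,   # e.g. kick drum -> pulse geometry scale
    "mid": 0.4,    # e.g. snare -> switch camera
    "high": 0.3,   # e.g. hi-hats -> emit particles
}

def triggered_events(band_levels):
    """Return the names of bands whose level crossed their threshold."""
    return [band for band, level in band_levels.items()
            if level >= BAND_THRESHOLDS.get(band, 1.0)]

# One frame of (fake) analysis data:
frame = {"bass": 0.8, "mid": 0.2, "high": 0.35}
print(triggered_events(frame))  # -> ['bass', 'high']
```

In the course this separation of frequency bands is done visually with operators rather than code, but the underlying idea is the same thresholding shown here.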
Session 2
- FBX and OBJ formats
- Polyreduce and Subdivide SOPs, Delete SOP, Transform SOP
- The theory of Physically-Based Rendering and Shading
- PBR maps
- PBR setup in TouchDesigner
Session 3
- Particle SOP inputs (emitter, attractor, force)
- Particle SOP parameters
- Differences between CPU and GPU particle systems
- Navigating TouchDesigner Resources
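The core particle properties covered above (emission, aging, forces) can be sketched as a minimal CPU particle loop in plain Python. In TD itself the Particle SOP handles all of this internally; the field names, force values, and time step here are illustrative only.

```python
# Illustrative sketch of a CPU particle system: emit, apply a force,
# integrate, age, and cull expired particles. Not TouchDesigner API.
import random

def emit(n, life=2.0):
    """Birth n particles at the origin with a small random upward velocity."""
    return [{"pos": (0.0, 0.0, 0.0),
             "vel": (random.uniform(-1, 1),
                     random.uniform(2, 4),
                     random.uniform(-1, 1)),
             "age": 0.0, "life": life} for _ in range(n)]

def step_particles(particles, force=(0.0, -9.8, 0.0), dt=1 / 60):
    """Advance every particle one frame; return only the still-living ones."""
    alive = []
    for p in particles:
        p["vel"] = tuple(v + f * dt for v, f in zip(p["vel"], force))
        p["pos"] = tuple(x + v * dt for x, v in zip(p["pos"], p["vel"]))
        p["age"] += dt
        if p["age"] < p["life"]:
            alive.append(p)
    return alive

swarm = emit(100)
swarm = step_particles(swarm)  # one frame of gravity and motion
```

The same concepts map directly onto Particle SOP parameters: the emitter corresponds to `emit()`, force-bearing inputs to the `force` argument, and particle life to the culling test.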
Requirements
- A computer with an internet connection
- A webcam and mic
- A Zoom account
- A three-button mouse or Apple Trackpad, appropriately configured
- TouchDesigner installed (the free version suffices: https://derivative.ca/download)
- If you're on a Mac, please check that TouchDesigner can run on your system (i.e. it meets the basic GPU requirements, such as Intel HD4000 or better)
About the workshop leader
Dancing Pixels (Masha Rozhnova) is a London-based artist who creates audio-visual performances and videos for musicians. For the past three years she has been using TouchDesigner as her main tool for content creation, show control, and as an engine to enable interaction with the audience. She has performed at Live Performers Meeting in Rome, and at New River Studios and Crux events in London.
Create Music Visuals with TouchDesigner / Fundamentals – On-demand
Note: Add this workshop to your cart along with the full workshop series to get £10 off at checkout
Level: Beginner
Overview
TouchDesigner is a tool gaining popularity among musicians who want to create visuals for their music and experiment with audio-visual art. This workshop is an introduction to the series “Create Music Visuals with TouchDesigner – Going Further”. You will be guided through the TouchDesigner environment, learn how to create 3D scenes and control visual parameters with sound, discuss audio-visual art with your peers and, of course, make visuals! Try it and see if you want to learn more in the coming weeks with Going Further.
Session Learning Outcomes
- Discuss some names in the history of audio-visual art and the ideas that inspired the field
- Understand the fundamentals of 3D rendering in TD
- Create basic geometric shapes, then light and texture them
- Create an audio-reactive visual progression that switches between camera views
Computer animation in the 21st century – the 3D space.
- Ideas and key figures in the history of audio-visual art
- 3D rendering setup
- Simple 2D compositing
- Triggering actions and controlling parameters with CHOPs
Requirements
- A computer with an internet connection
- A three-button mouse or Apple Trackpad, appropriately configured
- TouchDesigner installed (the free version suffices: https://derivative.ca/download)
- If you're on a Mac, please check that TouchDesigner can run on your system (i.e. it meets the basic GPU requirements, such as Intel HD4000 or better)
About the workshop leader
Dancing Pixels (Masha Rozhnova) is a London-based artist who creates audio-visual performances and videos for musicians. For the past three years she has been using TouchDesigner as her main tool for content creation, show control, and as an engine to enable interaction with the audience. She has performed at Live Performers Meeting in Rome, and at New River Studios and Crux events in London.
Creative MIDI FX in Ableton Live – On-demand
Level: Beginner
Overview
Ableton Live offers a vast playground for creating musical compositions and productions. Live's native MIDI FX provide a range of tools that allow the composer and producer to generate ideas in a myriad of ways, and Max for Live complements these tools and expands the musical possibilities. In this workshop you will creatively explore and deploy a range of MIDI FX in a musical setting. The aim is to equip you with the skills to utilise the creative possibilities of MIDI FX in the Ableton Live environment.
Session Learning Outcomes
By the end of this session a successful student will be able to:
- Identify and deploy MIDI FX
- Explore native and M4L MIDI FX in Live
- Render the output of MIDI FX into MIDI clips for further manipulation
- Apply MIDI FX to create novel musical and sonic elements
Session Study Topics
- Using MIDI FX
- Native and M4L MIDI FX
- Rendering MIDI FX outputs
- Creatively using MIDI FX
Requirements
- A computer and internet connection
- Access to a copy of Live Suite with M4L (i.e. trial or full license)
About the workshop leader
Mel is a London-based music producer, vocalist and educator.
She spends most of her time teaching people how to make music with Ableton Live and Push. When she’s not doing any of the above, she makes educational content and helps music teachers and schools integrate technology into their classrooms. She is particularly interested in training and supporting female and non-binary people to succeed in the music world.
Immersive AV Composition – On-demand / 2 Sessions
Level: Advanced
These workshops will introduce you to the ImmersAV toolkit. The toolkit brings together Csound and OpenGL shaders to provide a native C++ environment where you can create abstract audiovisual art. You will learn how to generate material and map parameters using ImmersAV’s Studio() class. You will also learn how to render your work on a SteamVR-compatible headset using OpenVR. Your fully immersive creations will then become interactive using integrated machine learning through the rapidLib library.
Session Learning Outcomes
By the end of this session a successful student will be able to:
- Set up and use the ImmersAV toolkit
- Discuss techniques for rendering material on VR headsets
- Implement the Csound API within a C++ application
- Create mixed raymarched and raster-based graphics
- Create an interactive visual scene using a single fragment shader
- Generate the mandelbulb fractal
- Generate procedural audio using Csound
- Map controller position and rotation to audiovisual parameters using machine learning
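One of the outcomes above, generating the mandelbulb fractal, is typically done inside a GLSL fragment shader via raymarching. As an illustration only, here is the widely used power-8 distance-estimator formula sketched in plain Python for readability; the function name and defaults are my own, not part of the ImmersAV API.

```python
# Illustrative sketch: mandelbulb distance estimator (spherical-coordinate
# power-8 formula commonly used in raymarching shaders), in Python.
import math

def mandelbulb_de(px, py, pz, power=8, iterations=10, bailout=2.0):
    """Estimate the distance from point (px, py, pz) to the mandelbulb."""
    zx, zy, zz = px, py, pz
    dr = 1.0   # running derivative, used to scale the distance estimate
    r = 0.0
    for _ in range(iterations):
        r = math.sqrt(zx * zx + zy * zy + zz * zz)
        if r > bailout:
            break
        # Convert to spherical coordinates, raise to `power`, convert back.
        theta = math.acos(zz / r)
        phi = math.atan2(zy, zx)
        dr = (r ** (power - 1)) * power * dr + 1.0
        zr = r ** power
        theta *= power
        phi *= power
        zx = zr * math.sin(theta) * math.cos(phi) + px
        zy = zr * math.sin(theta) * math.sin(phi) + py
        zz = zr * math.cos(theta) + pz
    # max() guards the degenerate origin case (log(0)).
    return 0.5 * math.log(max(r, 1e-9)) * r / dr
```

A raymarcher repeatedly steps a ray forward by this estimated distance until it lands on the fractal surface; in the workshop the equivalent logic lives in a single fragment shader.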
Session Study Topics
- Native C++ development for VR
- VR rendering techniques
- Csound API integration
- Real-time graphics rendering techniques
- GLSL shaders
- 3D fractals
- Audio synthesis
- Machine learning
Requirements
- A computer and internet connection
- A webcam and mic
- A Zoom account
- A cloned copy of the ImmersAV toolkit plus dependencies
- A VR headset capable of connecting to SteamVR
About the workshop leader
Bryan Dunphy is an audiovisual composer, musician and researcher interested in generative approaches to creating audiovisual art in performance and immersive contexts. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He has recently completed his PhD in Arts and Computational Technology at Goldsmiths, University of London.