TouchDesigner meetup – LIVE Session / 31st July
Date & Time: Saturday 31st July 4pm UK / 5pm Berlin / 8am LA / 11am NYC
Level: Open to all levels
Meetups are a great way to meet and be inspired by the TouchDesigner community.
What to expect?
The meetup runs via Zoom. The main session will be two hours long, with an additional hour open to the community for collaboration and sharing in breakout rooms.
This session focuses on Shaders and will feature presentations from TouchDesigner experts:
Josef Luis Pelz – Real-time cloth simulation in TD with GLSL
We’ll have a look at a versatile real-time cloth simulation. Besides showing some example results, I’ll try to briefly explain how the system works and how I developed it step by step. It involves pixel, vertex and compute shaders.
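To give a flavour of the underlying idea, here is a minimal CPU sketch (our own illustration under common assumptions, not Josef’s actual code) of the damped Verlet mass-spring update that a cloth compute shader would typically run for every particle in parallel:

```cpp
// Minimal CPU sketch of a Verlet mass-spring cloth step. Illustrative only:
// the names, constants and structure here are assumptions, not Josef's system.
#include <cmath>
#include <utility>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float length(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

struct Particle { Vec3 pos, prev; bool pinned = false; };

// One step over a w*h grid of cloth particles: damped Verlet integration,
// then one relaxation pass over the structural springs between neighbours.
void step(std::vector<Particle>& cloth, int w, int h, float dt, float rest) {
    const Vec3 gravity{0.0f, -9.81f, 0.0f};
    for (auto& p : cloth) {
        if (p.pinned) continue;
        Vec3 vel = p.pos - p.prev;                        // implicit velocity
        p.prev = p.pos;
        p.pos = p.pos + vel * 0.99f + gravity * (dt * dt); // damped Verlet
    }
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            // relax the spring to the right of and below each particle
            for (auto [dx, dy] : {std::pair{1, 0}, std::pair{0, 1}}) {
                int nx = x + dx, ny = y + dy;
                if (nx >= w || ny >= h) continue;
                Particle& a = cloth[y * w + x];
                Particle& b = cloth[ny * w + nx];
                Vec3 d = b.pos - a.pos;
                float len = length(d);
                if (len < 1e-6f) continue;
                Vec3 corr = d * (0.5f * (len - rest) / len);
                if (!a.pinned) a.pos = a.pos + corr;
                if (!b.pinned) b.pos = b.pos - corr;
            }
}
```

In a GPU version the particle positions would typically live in textures, a compute shader would perform these update and relaxation passes in parallel, and vertex and pixel shaders would render the resulting mesh.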
Josef is a creative coder, generative art enthusiast and mathematician living and working in Berlin. With a background in mathematics and computer science, he links his passion for creative problem solving with his love of aesthetics.
For more info check out: instagram.com/josefluispelz/, https://twitter.com/JosefPelz, https://josefluispelz.com/
Louise Lessél – Shader conversions from TouchDesigner to Raspberry Pi
Louise will present an asset she created that allows you to quickly convert Shadertoy shaders for use in your TouchDesigner projects, and to go one step further and use the shader in your Raspberry Pi projects via Pi3D, to drive either screens or HUB75 LED matrices. The asset is available in the TouchDesigner community assets.
Louise Lessél is a Danish New Media artist and Creative Technologist based in New York. She creates digital projections and interactive light installations based on scientific facts and data input, often exploring the limits of the human perceptual system or raising ecological awareness.
For more info check out: Instagram: @louiselessel
Torin Blankensmith & Peter Whidden – Interactive 3D Shaders with Shader Park and TouchDesigner
Torin and Peter will be showcasing their new plugin, which allows you to use Shader Park within TouchDesigner to quickly script interactive shaders. Explore 3D shader programming through a JavaScript interface without the complexity of GLSL. Shader Park is an open-source project for creating real-time graphics and procedural animations. Follow along through multiple examples using a live code editor. Expand upon the examples and bring them into TouchDesigner to create your own interactive graphics. Browse the Shader Park community’s gallery, where you can fork other people’s creations or feature your own.
Torin is a freelance creative technologist, teacher, and real-time graphics artist focusing on mixed reality installations and interactive experiences. He currently works at Studio Elsewhere, creating restorative immersive environments in collaboration with neuroscientists, focused on patients’ and medical workers’ well-being.
Peter is a creative software engineer whose work spans physics, astronomy, machine learning, and computer graphics. He currently works at the NY Times R&D lab focused on emerging computer vision and graphics techniques.
For more info check out: Instagram: @blankensmithing, @peterwhidden / Twitter: @tblankensmith
Following these presentations, breakout rooms are created where you can:
- Talk to the presenters and ask questions
- Join a room on topics of your choice
- Show other participants your projects, ask for help, or help others out
- Collaborate with others
- Meet peers in the chill-out breakout room
Requirements
- A computer and internet connection
- A Zoom account
Berlin Code of Conduct
We ask all participants to read and follow the Berlin Code of Conduct and contribute to creating a welcoming environment for everyone.
Natural Machines with Dan Tepfer – LIVESTREAM
Date & Time: Thursday 17th June 2021 6pm UK / 7pm Berlin / 10am LA / 1pm NYC
In this live stream we’ll talk with Dan Tepfer and hear more about his project Natural Machines.
In an age of unprecedented technological advancement, Dan Tepfer is changing the definition of what a musical instrument can be. Featured in an NPR documentary viewed by 1.5 million people, Dan Tepfer shows his pioneering skill in this concert by programming a Yamaha Disklavier to respond in real time to the music he improvises at the piano while another computer program turns the music into stunning animated visual art. Called “fascinating and ingenious” by Rolling Stone, the Natural Machines performance lives at a deeply unique intersection of mechanical and organic processes, making it “more than a solo piano album… a multimedia piece of contemporary art so well made in its process and components and expressed by such a thoughtful, talented, evocative pianist… that it becomes a complete experience” (NextBop).
Music Hackspace YouTube
Overview of speaker
Dan Tepfer is a French-American jazz pianist and composer.
One of his generation’s extraordinary talents, Dan Tepfer has earned an international reputation as a pianist-composer of wide-ranging ambition, individuality, and drive—one “who refuses to set himself limits” (France’s Télérama). The New York City-based Tepfer, born in 1982 in Paris to American parents, has performed around the world with some of the leading lights in jazz and classical music, and released ten albums of his own.
Tepfer earned global acclaim for his 2011 release Goldberg Variations / Variations, a disc that sees him performing J.S. Bach’s masterpiece as well as improvising upon it—to “elegant, thoughtful and thrilling” effect (New York magazine). Tepfer’s newest album, Natural Machines, stands as one of his most ingeniously forward-minded yet, finding him exploring in real time the intersection between science and art, coding and improvisation, digital algorithms and the rhythms of the heart. The New York Times has called him “a deeply rational improviser drawn to the unknown.”
Tepfer’s honors include first prizes at the 2006 Montreux Jazz Festival Solo Piano Competition, the 2006 East Coast Jazz Festival Competition, and the 2007 American Pianists Association Jazz Piano Competition, as well as fellowships from the American Academy of Arts and Letters (2014), the MacDowell Colony (2016), and the Fondation BNP-Paribas (2018).
Introduction to beat detection and audio-reactive visuals in TouchDesigner – On demand
Level: Beginner
TouchDesigner is a powerful tool for creating live performances, installations, real-time visuals and complex digital systems. In this workshop you’ll learn the basic functioning of three node types, how to use them to analyse audio and drive graphics with the resulting data, and how to organise and navigate your TouchDesigner network.
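In TouchDesigner itself this analysis is built from nodes (CHOPs) rather than written by hand, but to make the frequency-analysis idea behind beat detection concrete, here is a hedged standalone sketch of spectral-flux onset detection; all names, window sizes and thresholds are illustrative:

```cpp
// Standalone sketch of spectral-flux onset detection (conceptual only; in
// TouchDesigner you would wire up CHOPs instead of writing this by hand).
#include <algorithm>
#include <cstddef>
#include <vector>

// Given successive FFT magnitude frames, mark frames whose positive spectral
// change spikes above a local moving average as onsets (beat candidates).
std::vector<bool> detectOnsets(const std::vector<std::vector<float>>& frames,
                               float sensitivity = 1.5f) {
    std::vector<float> flux(frames.size(), 0.0f);
    for (std::size_t i = 1; i < frames.size(); ++i)
        for (std::size_t k = 0; k < frames[i].size(); ++k)
            flux[i] += std::max(0.0f, frames[i][k] - frames[i - 1][k]);

    std::vector<bool> onsets(frames.size(), false);
    const std::size_t win = 8;                // moving-average window (frames)
    for (std::size_t i = win; i < flux.size(); ++i) {
        float mean = 0.0f;
        for (std::size_t j = i - win; j < i; ++j) mean += flux[j];
        onsets[i] = flux[i] > sensitivity * (mean / win);
    }
    return onsets;
}
```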
Session Learning Outcomes
By the end of this session a successful student will be able to:
- Input audio into TouchDesigner
- Extract relevant data from input sources
- Use data to manipulate graphics
- Create simple generative visuals
- Navigate the TouchDesigner network
Session Study Topics
- Audio input sources
- Beat detection (frequency analysis, timeslicing, etc.)
- Creation and manipulation of generative visuals
- Network organisation
Requirements
- A computer with internet connection
- A web cam and mic
- A three-button mouse, or an Apple trackpad configured appropriately
- TouchDesigner (the free version suffices: https://derivative.ca/download)
- If you’re on a Mac, please check that TouchDesigner can run on your system (i.e. it meets basic GPU requirements such as Intel HD4000 or better)
About the workshop leader
Bileam Tschepe aka elekktronaut is a Berlin-based artist and educator who creates audio-reactive, interactive and organic digital artworks, systems and installations in TouchDesigner, collaborating with and teaching people worldwide.
Immersive AV Composition – On demand / 2 Sessions
Level: Advanced
These workshops will introduce you to the ImmersAV toolkit. The toolkit brings together Csound and OpenGL shaders to provide a native C++ environment where you can create abstract audiovisual art. You will learn how to generate material and map parameters using ImmersAV’s Studio() class. You will also learn how to render your work on a SteamVR compatible headset using OpenVR. Your fully immersive creations will then become interactive using integrated machine learning through the rapidLib library.
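For orientation, embedding Csound in a C++ host generally follows the pattern below. This is a minimal sketch using the public Csound C++ API, not ImmersAV’s actual Studio() internals; the orchestra string and the "amp" channel name are our own illustrative choices:

```cpp
// Minimal sketch of hosting Csound from C++ -- the general pattern a class
// like ImmersAV's Studio() builds on (actual internals may differ).
#include <csound/csound.hpp>

int main() {
    Csound csound;
    csound.SetOption("-odac");          // render to the default audio device
    csound.CompileOrc(
        "sr = 48000\nksmps = 64\nnchnls = 2\n0dbfs = 1\n"
        "instr 1\n"
        "  kamp chnget \"amp\"\n"       // control value supplied by the host
        "  aout poscil kamp, 220\n"
        "  outs aout, aout\n"
        "endin\n");
    csound.ReadScore("i1 0 10\n");      // play instrument 1 for ten seconds
    csound.Start();
    while (csound.PerformKsmps() == 0) {
        // Per-block control: e.g. map a VR controller axis to "amp" here.
        csound.SetChannel("amp", 0.5);
    }
    return 0;
}
```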
Session Learning Outcomes
By the end of this session a successful student will be able to:
- Set up and use the ImmersAV toolkit
- Discuss techniques for rendering material on VR headsets
- Implement the Csound API within a C++ application
- Create mixed raymarched and raster-based graphics
- Create an interactive visual scene using a single fragment shader
- Generate the Mandelbulb fractal (see the sketch after this list)
- Generate procedural audio using Csound
- Map controller position and rotation to audiovisual parameters using machine learning
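For the Mandelbulb outcome above, the widely used power-8 distance estimator for raymarching looks roughly like this. It is a hedged sketch of the standard formulation; the workshop’s actual shader code may differ in detail:

```cpp
// Sketch of the standard power-8 Mandelbulb distance estimator (illustrative).
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Distance estimate from point p to the Mandelbulb surface: iterate
// z -> z^8 + p in spherical coordinates while tracking the derivative dr.
float mandelbulbDE(Vec3 p, int maxIter = 12, float power = 8.0f) {
    Vec3 z = p;
    float dr = 1.0f, r = 0.0f;
    for (int i = 0; i < maxIter; ++i) {
        r = std::sqrt(z.x * z.x + z.y * z.y + z.z * z.z);
        if (r > 2.0f || r < 1e-6f) break;   // escaped, or degenerate at origin
        float theta = std::acos(z.z / r) * power;  // spherical coordinates,
        float phi = std::atan2(z.y, z.x) * power;  // angles scaled by the power
        dr = std::pow(r, power - 1.0f) * power * dr + 1.0f;
        float zr = std::pow(r, power);             // radius raised to the power
        z = {zr * std::sin(theta) * std::cos(phi) + p.x,
             zr * std::sin(theta) * std::sin(phi) + p.y,
             zr * std::cos(theta) + p.z};
    }
    return 0.5f * std::log(std::max(r, 1e-6f)) * r / dr;
}
```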
Session Study Topics
- Native C++ development for VR
- VR rendering techniques
- Csound API integration
- Real-time graphics rendering techniques
- GLSL shaders
- 3D fractals
- Audio synthesis
- Machine learning
Requirements
- A computer and internet connection
- A web cam and mic
- A Zoom account
- Cloned copy of the ImmersAV toolkit plus dependencies
- VR headset capable of connecting to SteamVR
About the workshop leader
Bryan Dunphy is an audiovisual composer, musician and researcher interested in generative approaches to creating audiovisual art in performance and immersive contexts. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He has recently completed his PhD in Arts and Computational Technology at Goldsmiths, University of London.
Visual Music Performance with Machine Learning – On demand
Level: Intermediate
In this workshop you will use openFrameworks to build a real-time audiovisual instrument. You will generate dynamic abstract visuals within openFrameworks and procedural audio using the ofxMaxim addon. You will then learn how to control the audiovisual material by mapping controller input to audio and visual parameters using the ofxRapidLib addon.
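As a flavour of what this looks like in code, here is a minimal, hedged sketch of two-operator FM synthesis with ofxMaxim inside an openFrameworks app. Parameter names like modRatio and modIndex are illustrative; they are the kind of values the workshop maps to controller input via ofxRapidLib:

```cpp
// Minimal sketch of two-operator FM synthesis with ofxMaxim in openFrameworks.
// Illustrative only -- not the workshop's exact code.
#include "ofMain.h"
#include "ofxMaxim.h"

class ofApp : public ofBaseApp {
public:
    maxiOsc carrier, modulator;
    float carrierFreq = 220.0f, modRatio = 2.0f, modIndex = 100.0f;

    void setup() override {
        maxiSettings::setup(44100, 2, 512);   // keep ofxMaxim in sync
        ofSoundStreamSettings settings;
        settings.setOutListener(this);
        settings.sampleRate = 44100;
        settings.numOutputChannels = 2;
        settings.bufferSize = 512;
        ofSoundStreamSetup(settings);
    }

    void audioOut(ofSoundBuffer& buffer) override {
        for (std::size_t i = 0; i < buffer.getNumFrames(); ++i) {
            // the modulator deviates the carrier's frequency every sample
            double mod = modulator.sinewave(carrierFreq * modRatio) * modIndex;
            double sample = carrier.sinewave(carrierFreq + mod);
            buffer[i * buffer.getNumChannels()] = sample;      // left
            buffer[i * buffer.getNumChannels() + 1] = sample;  // right
        }
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}
```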
Session Learning Outcomes
By the end of this session a successful student will be able to:
- Create generative visual art in openFrameworks
- Create procedural audio in openFrameworks using ofxMaxim
- Discuss interactive machine learning techniques
- Use a neural network to control audiovisual parameters simultaneously in real time
Session Study Topics
- 3D primitives and Perlin noise
- FM synthesis
- Regression analysis using multilayer perceptron neural networks
- Real-time controller integration
Requirements
- A computer and internet connection
- A web cam and mic
- A Zoom account
- Installed version of openFrameworks
- Downloaded addons ofxMaxim, ofxRapidLib
- Access to MIDI/OSC controller (optional – mouse/trackpad will also suffice)
About the workshop leader
Bryan Dunphy is an audiovisual composer, musician and researcher interested in generative approaches to creating audiovisual art. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He has recently completed his PhD in Arts and Computational Technology at Goldsmiths, University of London.
Video Synthesis with Vsynth for Max – LIVE Session
Dates: Thursdays 4th / 11th / 18th / 25th February 2021 6pm GMT
Level: Intermediate +
Overview
In this series of 4 workshops, we’ll look at how to interconnect the 80 different modules that come with Vsynth, exploring video techniques and practices that can create aesthetics associated with the history of the electronic image, as well as complex patterns founded in some basic functions of nature.
Vsynth is a high-level package of modules for Max/Jitter that together make a modular video synthesizer. Its simplicity makes it the perfect tool for introducing yourself to video synthesis and image processing. Since it can be connected to other parts of Max, and to other software and hardware, it can also become a really powerful and adaptable video tool for any kind of job.
Here’s what you’ll learn in each workshop:
Workshop 1:
- Video oscillators, mixers, colorizers.
Learn the fundamentals of digital video synthesis by diving into the different video oscillators, noise generators, mixers, colorizers and keyers. By the end of this session, students will be able to build simple custom video-synth patches with presets.
Workshop 2:
- Modulations (phase, frequency, pulse, hue, among others).
In this workshop we will focus on the concept of modulation so that students can add another level of complexity to their patches. We’ll see the differences between modulating the parameters of an image with simple LFOs and modulating them with other images. Some of the modulations we’ll cover are phase, frequency, pulse width, brightness and hue; a conceptual sketch follows below.
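To make the idea concrete outside Vsynth, here is an illustrative sketch (our own, not Vsynth code) of phase-modulating a video oscillator: every pixel of a striped sine pattern has its phase offset by a second, slower oscillator, i.e. an LFO in image form:

```cpp
// Illustrative sketch of phase modulation for a video oscillator.
// Returns one grayscale frame as 0..1 luminance values.
#include <cmath>
#include <vector>

std::vector<float> renderFrame(int w, int h, float t) {
    const float PI = 3.14159265f;
    const float freq = 12.0f;      // carrier: vertical stripes across the image
    const float modDepth = 3.0f;   // how far the modulator bends the phase
    std::vector<float> pixels(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float u = float(x) / w, v = float(y) / h;
            // slow modulator varying down the image and over time
            float modulator = std::sin(2.0f * PI * (v * 2.0f + t * 0.2f));
            float phase = 2.0f * PI * freq * u + modDepth * modulator;
            pixels[y * w + x] = 0.5f + 0.5f * std::sin(phase);
        }
    return pixels;
}
```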
Workshop 3:
- Filters/convolutions and video feedback techniques.
This third workshop is divided in two. In the first half, we’ll dig into what low and high frequencies actually mean in the image world. We’ll then use low-pass and high-pass filters/convolutions in different scenarios to see how they affect different images.
In the second half, we’ll go through a range of techniques that use the process of video feedback, from simple “trails” effects to more complex reaction-diffusion-like patterns! Both ideas are sketched below.
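As a rough illustration of both halves (our own sketch, not Vsynth internals): low-pass and high-pass filtering are just convolutions with different kernels, and video feedback amounts to blending a processed copy of the previous frame back into the current one:

```cpp
// Illustrative sketch of low-pass vs high-pass image filtering as 3x3
// convolutions over a grayscale image with 0..1 pixel values.
#include <algorithm>
#include <vector>

// Box-blur kernel: averages neighbours, keeping low spatial frequencies.
const float lowPass[9]  = { 1/9.f, 1/9.f, 1/9.f,
                            1/9.f, 1/9.f, 1/9.f,
                            1/9.f, 1/9.f, 1/9.f };
// Laplacian kernel: responds to rapid change, keeping high spatial frequencies.
const float highPass[9] = { 0, -1,  0,
                           -1,  4, -1,
                            0, -1,  0 };

std::vector<float> convolve3x3(const std::vector<float>& img,
                               int w, int h, const float k[9]) {
    std::vector<float> out(img.size(), 0.0f);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            float acc = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    acc += k[(dy + 1) * 3 + (dx + 1)]
                         * img[(y + dy) * w + (x + dx)];
            out[y * w + x] = std::clamp(acc, 0.0f, 1.0f);
        }
    return out;
}

// Video feedback, one line per frame: blend a processed copy of the previous
// frame back into the current input; "trails" emerge from repeating this.
// frame = mix(input, convolve3x3(prevFrame, w, h, lowPass), feedbackAmount);
```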
Workshop 4:
- Working with scenes and external controllers (audio, MIDI, Arduino).
In this final workshop we’ll see how to bundle several Vsynth patches/scenes with presets into a single file for live situations. We’ll also export a patch as a Max for Live device and go in depth into “external control” in order to successfully control Vsynth parameters with audio, MIDI or even an Arduino.
Requirements
- Intermediate knowledge of Max and Jitter
- The latest version of Max 8 installed
- Basic knowledge of audio synthesis and/or computer graphics would be useful
About the workshop leader
Kevin Kripper (Buenos Aires, 1991) is a visual artist and indie software developer. He’s worked on projects that link art, technology, education and toolmaking, which have been exhibited and awarded in different art and science festivals. Since 2012 he’s been dedicated to creating digital tools that extend the creative possibilities of visual artists and musicians from all over the world.
Jitter in Ableton – On demand
Level: Intermediate
Cycling 74’s Jitter offers a vast playground of programming opportunities to create your own visual devices. In this workshop you will build your own visual device that uses audio signals to manipulate imagery. This workshop aims to provide you with the skills you need to begin exploring the Jitter environment.
Session Learning Outcomes
By the end of this session a successful student will be able to:
- Discover the basics of the Jitter framework
- Explore options for analysing audio signals to gather data to control visuals
- Deploy objects suitable for making visuals
- Apply processes from data acquired from audio signals to control visuals
- Apply UI elements and save into a Max for Live device
Session Study Topics
- The Jitter landscape
- Audio analysis
- Using visual objects
- Controlling visual objects
- UI & M4L devices
Requirements
- A computer and internet connection
- A good working knowledge of computer systems
- A basic awareness of audio processing
- Good familiarity with MSP
- Access to a copy of Max 8 (i.e. trial or full license) or Live Suite (M4L)
About the workshop leader
Ned Rush aka Duncan Wilson is a musician, producer and performer. He is probably best known for his YouTube channel, which features a rich and vast collection of videos including tutorials, software development, visual art, sound design, internet comedy and, of course, music.
Interactive video with Jitter – LIVE Session
Dates:
Session 1 – Monday 11th January 6pm – 8pm GMT
Session 2 – Monday 18th January 6pm – 8pm GMT
Session 3 – Monday 25th January 6pm – 8pm GMT
Session 4 – Monday 1st February 6pm – 8pm GMT
Level: Beginner with a basic knowledge of Max
What you will learn
The workshop is aimed at anyone who would like to learn how to create interactive visuals with the software Max/MSP/Jitter from Cycling74.
You definitely don’t need to be a Max guru to take part in the workshop, although a basic knowledge of the program is required.
We will start from the very basics of working with videos and images in Max, learning how to import footage and live camera streams and how these can be processed in the software. We will then see how to unlock full control over visual manipulation and analysis using the GEN environment, which allows us to work on video and images at the pixel level.
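To illustrate what “working at the pixel level” means, here is a plain C++ analogue of a per-pixel operation. Inside Max the same logic would be expressed as a [jit.gen]/GenExpr patch operating on a Jitter matrix, so treat this only as a conceptual sketch:

```cpp
// Plain C++ analogue of a per-pixel operation (conceptual only; in Max this
// would be a [jit.gen]/GenExpr patch, not compiled C++).
#include <cstddef>
#include <cstdint>
#include <vector>

struct RGBA { std::uint8_t r, g, b, a; };

// Crossfade every pixel toward its negative. `amount` (0..1) would come from
// an external control -- for example an audio level, as in the workshop.
void invertMix(const std::vector<RGBA>& in, std::vector<RGBA>& out, float amount) {
    out.resize(in.size());
    auto mix8 = [amount](std::uint8_t c) {
        float v = c + amount * ((255.0f - c) - c);  // lerp(c, 255 - c, amount)
        return std::uint8_t(v + 0.5f);
    };
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = { mix8(in[i].r), mix8(in[i].g), mix8(in[i].b), in[i].a };
}
```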
We will finally proceed to introduce the OpenGL implementation inside Max, with which we can create 3D graphics and visually satisfying post-processing effects.
A major focus of the workshop will be to include interaction in our patches.
We will use audio streams to control and modify the parameters of our visuals, as well as other interactive inputs like the camera video stream.
At the end of the workshop we will have a showreel of the works created by the participants during the four weeks.
Requirements
- A computer and internet connection
Topics
Session 1
- Workshop holder and participants introduction.
- Showreel of what can be achieved using Max/Jitter for visuals.
- Starting with video in Max: read and play a movie.
- Open the webcam video-stream inside Max.
- Introduction to the Jitter Matrix.
- Create images by filling pixels algorithmically.
- Create images using random and noise generators.
- Explanation of Perlin noise.
- Drive video-effects using an audio stream.
Session 2:
- Introduction to OpenGL in Max.
- Apply materials to 3D shapes.
- Explanation of light and color in GL in Max.
- Control 3D shape parameters using an audio stream.
- Introducing Textures.
- Introduction to [jit.gen].
- Reviewing simple trigonometry concepts.
- Explanation of vectors.
- Create large numbers of 3D shapes using [jit.gen] and [jit.gl.multiple].
Session 3:
- Animate arrays of 3D shapes using [jit.gen] and input streams.
- Create a simple particle system with [jit.gen].
- Procedurally create 2D/3D shapes using [jit.gen] and [jit.gl.mesh].
- Animate procedural shapes using an audio stream.
- Create procedural texture using [jit.gl.pix].
Session 4:
- Develop audio-reactive visuals using the concepts seen in the previous sessions.
- Capture the 3D scene using [jit.gl.node].
- Apply post-processing effects to the scene.
- Introduction to [jit.gl.pass].
- Performance considerations for visuals in Max/Jitter.
- Conclusion.
About the workshop leader
Federico Foderaro is an audiovisual composer, teacher and designer for interactive multimedia installations, author of the YouTube channel Amazing Max Stuff.
Learn to program amazing interactive particle systems with Jitter
In this workshop, you will learn to build incredible live visuals with particle systems, using Max and Jitter.
Cycling’74 has recently released GL3, which ties Jitter more closely to OpenGL and optimises use of the GPU. With this recent update, available in the Package Manager, you can build high-performance visuals without having to code them in C++. A conceptual sketch of a per-particle update appears below.
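Conceptually, the per-particle update that GL3 moves onto the GPU looks like the CPU sketch below. The forces mirror the Session 1 topics (gravity/wind, target attraction); every name and constant here is illustrative, not GL3’s actual API:

```cpp
// CPU sketch of a per-particle update of the kind GL3 performs on the GPU.
// Illustrative names and constants only.
#include <vector>

struct Particle { float x, y, z, vx, vy, vz, life; };

void update(std::vector<Particle>& ps, float dt, float windX,
            float tx, float ty, float tz, float attraction) {
    for (auto& p : ps) {
        p.vy += -9.8f * dt;                    // gravity pulls down
        p.vx += windX * dt;                    // wind pushes along x
        p.vx += (tx - p.x) * attraction * dt;  // target attraction:
        p.vy += (ty - p.y) * attraction * dt;  // accelerate toward a point
        p.vz += (tz - p.z) * attraction * dt;
        p.x += p.vx * dt;                      // integrate position
        p.y += p.vy * dt;
        p.z += p.vz * dt;
        p.life -= dt;  // expired particles would be respawned by the emitter
    }
}
```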
Requirements
- Latest version of Max 8 installed on Mac or Windows
- A good working knowledge of Max is expected
- Understanding of how the GEN environment works in Jitter
- Some familiarity with textual programming languages
- A knowledge of basic calculus is a bonus
- The GL3 package installed
- To install this package open the “Package Manager” from within Max, look for the GL3 package and click “install”.
What you will learn
Session 1, 20th October, 6pm UK / 10am PDT / 1pm EST:
– Introduction to GL3 features
– Quick overview of most of the examples in the GL3 package
– Build a simple particle system from scratch
– Explorations with gravity/wind
– Exploration with target attraction
Session 2, 27th October, 6pm UK / 10am PDT / 1pm EST:
– Improve the particle system with a billboard rendering shader
– Create a “snow” or “falling leaves” style effect
– Starting to introduce interactivity in the system
– Using the camera input
– Connecting sound to your patches
Session 3, 3rd November, 6pm UK / 10am PDT / 1pm EST:
– Improve the system’s interactivity
– Particles emitting from object/person outline taken from camera
– Create a particle system using 3D models and the instancing technique
– Transforming an image or a video stream into particles
Session 4, 10th November, 6pm UK / 10am PDT / 1pm EST:
– Introduction to flocking behaviours and how to achieve them in GL3
– Create a 3D generative landscape and modify it using the techniques from previous sessions
– Apply post-processing effects
About the workshop leader:
Federico Foderaro is an audiovisual composer, teacher and designer for interactive multimedia installations, author of the YouTube channel Amazing Max Stuff.
He graduated cum laude in Electroacoustic Musical Composition from the Licinio Refice Conservatory in Frosinone, and has lived and worked in Berlin since 2016.
His main interest is the creation of audiovisual works and fragments, where the technical research is deeply linked with the artistic output.
The main tool used in his production is the software Max/MSP from Cycling74, which allows for real-time programming and execution of both audio and video, and represents a perfect mix between problem-solving and artistic expression.
Besides his artistic work, Federico teaches the software Max/MSP, both online and in workshops at different venues. The creation of commercial audiovisual interactive installations is also a big part of his working life, and has over the years led to rewarding collaborations and professional achievements.
Video synthesis with Vsynth workshop
Level: Intermediate
In this series of four 2-hour workshops, Kevin Kripper, the author of Vsynth, explains how to interconnect the 80 modules that come with Vsynth, exploring video techniques and practices that can create aesthetics associated with the history of the electronic image, as well as complex patterns founded in some basic functions of nature.
Here’s what you’ll learn in each workshop:
Lesson 1: video oscillators, mixers, colorizers.
Lesson 2: modulations (PM, FM, PWM, hue, among others).
Lesson 3: filters/convolutions and video feedback techniques.
Lesson 4: working with presets, scenes, audio and MIDI.
Vsynth is a high-level package of modules for Max/Jitter that together make a modular video synthesizer. Its simplicity makes it the perfect tool for introducing yourself to video synthesis and image processing. Since it can be connected to other parts of Max, and to other software and hardware, it can also become a really powerful and adaptable video tool for any kind of job.
Requirements
- Basic knowledge of Max and Jitter
- Have Max 8 installed
- Familiarity with audio synthesis or computer graphics would be useful.
About the workshop leader
Kevin Kripper (Buenos Aires, 1991) is a visual artist and indie software developer. He’s worked on several projects that link art, technology, education and toolmaking, which have been exhibited at festivals such as +CODE, Innovar, the Wrong Biennale and MUTEK, among others. In 2016 he won first place at the Itaú Visual Arts Award with his work Deconstrucento. In addition, since 2012 he’s been dedicated to creating digital tools that extend the creative possibilities of visual artists and musicians from all over the world. During 2017 he participated in the Toolmaker residency at Signal Culture (Owego, NY), and in 2018 he received a mention in the Technology Applied to Art category of the ArCiTec Award for the development of Vsynth.
https://www.instagram.com/vsynth74/
https://cycling74.com/articles/an-interview-with-kevin-kripper