Max meetup 20th March – Americas Edition
Date & Time: Saturday 20th March, 3pm PDT / 6pm EDT
Level: Open to all levels
Overview
Join the Max meetup to share ideas and learn with other artists, coders and performers. Showcase your patches, pair with others to learn together, get help with a school assignment, or discover new things.
The meetup runs via Zoom. The main session features short presentations from Max users. Breakout rooms are created on the spot for specific topics, and you can request a new topic at any time.
In the breakout rooms, you can share your screen to show other participants something you’re working on, ask for help, or help someone else.
Ready to present your work?
Everyone is welcome to propose a presentation. Just fill in this short form and you’ll be added to the agenda on a first-come, first-served basis.
Presentations should take no more than 5 minutes, followed by 5 minutes of Q&A; we’ll have up to 5 presentations at each meetup.
The list of presenters will be announced before each event.
Requirements
- A computer and internet connection
Berlin Code of Conduct
We ask all participants to read and follow the Berlin Code of Conduct and contribute to creating a welcoming environment for everyone.
Immersive AV Composition – On demand / 2 Sessions
Level: Advanced
These workshops will introduce you to the ImmersAV toolkit. The toolkit brings together Csound and OpenGL shaders to provide a native C++ environment where you can create abstract audiovisual art. You will learn how to generate material and map parameters using ImmersAV’s Studio() class. You will also learn how to render your work on a SteamVR compatible headset using OpenVR. Your fully immersive creations will then become interactive using integrated machine learning through the rapidLib library.
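As a pointer to what implementing the Csound API within a C++ application looks like (one of the learning outcomes below), here is a minimal sketch using the Csound C API. The file name synth.csd and the cutoff channel are placeholder assumptions; ImmersAV’s Studio() class manages this kind of engine lifecycle for you.

```cpp
#include <csound/csound.h>

int main()
{
    CSOUND* csound = csoundCreate(nullptr);           // create an engine instance
    if (csoundCompileCsd(csound, "synth.csd") == 0)   // compile a (placeholder) .csd
    {
        csoundStart(csound);
        while (csoundPerformKsmps(csound) == 0)       // render one control block per call
        {
            // Drive a named control channel that the orchestra reads with chnget;
            // "cutoff" is an assumed channel name for illustration.
            csoundSetControlChannel(csound, "cutoff", 880.0);
        }
    }
    csoundDestroy(csound);                            // release the instance
    return 0;
}
```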
Session Learning Outcomes
By the end of this session a successful student will be able to:
- Set up and use the ImmersAV toolkit
- Discuss techniques for rendering material on VR headsets
- Implement the Csound API within a C++ application
- Create mixed raymarched and raster-based graphics
- Create an interactive visual scene using a single fragment shader
- Generate the mandelbulb fractal (see the distance-estimator sketch after this list)
- Generate procedural audio using Csound
- Map controller position and rotation to audiovisual parameters using machine learning
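Since the mandelbulb comes up in the outcomes above, here is the widely used power-8 distance estimator (following the White/Nylander construction) written out in plain C++ to show the maths. In an ImmersAV-style project this logic would normally live in a GLSL fragment shader and be sphere-traced per pixel.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double length(const Vec3& v)
{
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Conservative distance bound to the power-8 mandelbulb at point p.
double mandelbulbDE(const Vec3& p, int maxIterations = 16, double power = 8.0)
{
    Vec3 z = p;
    double dr = 1.0;           // running derivative, used for the distance bound
    double r  = length(z);
    for (int i = 0; i < maxIterations && r > 1e-9 && r < 2.0; ++i)
    {
        // Convert to spherical coordinates and raise to the given power.
        double theta = std::acos(z.z / r) * power;
        double phi   = std::atan2(z.y, z.x) * power;
        double zr    = std::pow(r, power);
        dr = std::pow(r, power - 1.0) * power * dr + 1.0;
        // z = z^power + p, converted back to Cartesian coordinates.
        z.x = zr * std::sin(theta) * std::cos(phi) + p.x;
        z.y = zr * std::sin(theta) * std::sin(phi) + p.y;
        z.z = zr * std::cos(theta) + p.z;
        r = length(z);
    }
    if (r < 1e-9) return 0.0;             // degenerate point at the origin
    return 0.5 * std::log(r) * r / dr;    // negative inside, positive outside
}
```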
Session Study Topics
- Native C++ development for VR
- VR rendering techniques
- Csound API integration
- Real-time graphics rendering techniques
- GLSL shaders
- 3D fractals
- Audio synthesis
- Machine learning
Requirements
- A computer and internet connection
- A webcam and mic
- A Zoom account
- A cloned copy of the ImmersAV toolkit plus its dependencies
- A VR headset capable of connecting to SteamVR
About the workshop leader
Bryan Dunphy is an audiovisual composer, musician and researcher interested in generative approaches to creating audiovisual art in performance and immersive contexts. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He has recently completed his PhD in Arts and Computational Technology at Goldsmiths, University of London.
Visual Music Performance with Machine Learning – On demand
Level: Intermediate
In this workshop you will use openFrameworks to build a real-time audiovisual instrument. You will generate dynamic abstract visuals within openFrameworks and procedural audio using the ofxMaxim addon. You will then learn how to control the audiovisual material by mapping controller input to audio and visual parameters using the ofxRapidLib addon.
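To give a flavour of the procedural audio side, here is a minimal sketch of two-operator FM synthesis with ofxMaxim inside an openFrameworks app. It assumes openFrameworks 0.10+ and the Maximilian oscillator API (maxiOsc::sinewave); the frequencies are arbitrary placeholder values.

```cpp
#include "ofMain.h"
#include "ofxMaxim.h"

// Sketch only: a two-operator FM voice rendered in the audio callback.
class ofApp : public ofBaseApp {
public:
    maxiOsc carrier, modulator;      // per-sample sine oscillators from ofxMaxim
    double carrierFreq = 220.0;      // base pitch in Hz (arbitrary)
    double modFreq     = 110.0;      // modulator rate in Hz (arbitrary)
    double modIndex    = 80.0;       // modulation depth in Hz (arbitrary)

    void setup() override {
        maxiSettings::setup(44100, 2, 512);   // keep Maximilian in sync with the stream
        ofSoundStreamSettings settings;
        settings.setOutListener(this);
        settings.sampleRate = 44100;
        settings.numOutputChannels = 2;
        settings.bufferSize = 512;
        ofSoundStreamSetup(settings);
    }

    // Fill the output buffer one frame at a time.
    void audioOut(ofSoundBuffer& buffer) override {
        for (size_t i = 0; i < buffer.getNumFrames(); ++i) {
            // Classic FM: the modulator wobbles the carrier's frequency.
            double s = carrier.sinewave(
                carrierFreq + modulator.sinewave(modFreq) * modIndex);
            buffer[i * buffer.getNumChannels()]     = s;   // left
            buffer[i * buffer.getNumChannels() + 1] = s;   // right
        }
    }
};
```

Mapping controller input then simply means updating carrierFreq, modFreq and modIndex from the mapping model’s output while the stream runs.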
Session Learning Outcomes
By the end of this session a successful student will be able to:
- Create generative visual art in openFrameworks
- Create procedural audio in openFrameworks using ofxMaxim
- Discuss interactive machine learning techniques
- Use a neural network to control audiovisual parameters simultaneously in real time
Session Study Topics
- 3D primitives and Perlin noise
- FM synthesis
- Regression analysis using multilayer perceptron neural networks (see the sketch after this list)
- Real-time controller integration
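To illustrate that regression step, the sketch below uses rapidLib’s regression class (a small multilayer perceptron under the hood) to map a 2-D controller position to two synthesis parameters. The training pairs are invented for illustration; in the workshop you would record them live. Note that recent versions of the library place these classes in a rapidLib:: namespace.

```cpp
#include "ofxRapidLib.h"
#include <vector>

// Map a controller position (x, y) to (frequency Hz, modulation index)
// using an interactively trained regression model.
std::vector<double> mapControllerToSound(double x, double y)
{
    static regression mlp;        // multilayer perceptron regression model
    static bool trained = false;

    if (!trained) {
        // Record a few input -> output pairs. The values here are invented;
        // in practice you capture them live while holding expressive poses.
        std::vector<trainingExample> examples;
        trainingExample a;
        a.input  = { 0.0, 0.0 };
        a.output = { 110.0, 10.0 };
        examples.push_back(a);
        trainingExample b;
        b.input  = { 1.0, 1.0 };
        b.output = { 880.0, 90.0 };
        examples.push_back(b);
        mlp.train(examples);      // fit the MLP to the recorded pairs
        trained = true;
    }
    // The trained model interpolates smoothly between the recorded poses.
    return mlp.run({ x, y });
}
```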
Requirements
- A computer and internet connection
- A webcam and mic
- A Zoom account
- An installed version of openFrameworks
- The ofxMaxim and ofxRapidLib addons
- Access to a MIDI/OSC controller (optional – a mouse/trackpad will also suffice)
About the workshop leader
Bryan Dunphy is an audiovisual composer, musician and researcher interested in generative approaches to creating audiovisual art. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He has recently completed his PhD in Arts and Computational Technology at Goldsmiths, University of London.
An Introduction to Markov Chains: Machine Learning in Max/MSP
Level: Beginner
Overview
Markov chains are mathematical models that have existed in various forms since the 19th century and have been used for statistical modelling in many real-world contexts, from economics to cruise control in cars. Composers have also found musical uses for Markov chains, although the mathematical knowledge seemingly required to implement them can appear daunting.
In this workshop we will demystify the Markov chain and use the popular ml.star library in Max/MSP to implement Markov chains for musical composition. This involves preparing and playing MIDI files into the system (as a form of machine learning) and capturing the resulting output as new MIDI files. By the end of the session you will know how to incorporate Markov chains into your future compositions at various levels.
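The ml.star objects handle all of this inside Max, but to take the mystery out of the model itself, here is a standalone C++ sketch of a first-order Markov chain over MIDI note numbers: “training” simply counts which note follows which, and generation walks that table with weighted random choices. The example melody is invented.

```cpp
#include <iostream>
#include <map>
#include <random>
#include <vector>

int main()
{
    // A short "MIDI file" reduced to note numbers (C major noodling).
    std::vector<int> melody = { 60, 62, 64, 62, 60, 64, 65, 64, 62, 60 };

    // Training: count how often each note follows each other note.
    std::map<int, std::map<int, int>> transitions;
    for (size_t i = 0; i + 1 < melody.size(); ++i)
        transitions[melody[i]][melody[i + 1]]++;

    // Generation: from the current note, pick the next one with probability
    // proportional to its observed transition count.
    std::mt19937 rng{ std::random_device{}() };
    int note = melody.front();
    for (int step = 0; step < 16; ++step)
    {
        std::cout << note << ' ';
        auto& successors = transitions[note];
        if (successors.empty()) break;          // dead end: no observed successor
        std::vector<int> notes, weights;
        for (auto& [n, count] : successors) {
            notes.push_back(n);
            weights.push_back(count);
        }
        std::discrete_distribution<int> pick(weights.begin(), weights.end());
        note = notes[pick(rng)];
    }
    std::cout << '\n';
}
```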
Topics
- Max
- Markov Chains
- Machine Learning
- Algorithmic Composition
Requirements
- You should have a basic understanding of the Max workflow and different data types.
- Knowledge of the MIDI format and routing to DAWs (Ableton, Logic, etc.) would be a plus, although Max instruments will be provided.
- No prior knowledge of advanced mathematical or machine learning concepts is necessary; the focus will be on musical application.
About the workshop leader
Samuel Pearce-Davies is a composer, performer, music programmer and Max hacker living in Cornwall, UK.
With a classical music background, it was his introduction to Max/MSP during undergraduate studies at Falmouth University that sparked Sam’s passion for music programming and algorithmic composition.
Going on to complete a Research Masters in computer music, Sam is now studying for a PhD in music-focused AI at Plymouth University.