An introduction to Flora for monome norns – On-demand
Level: Some experience of norns required
Flora is an L-systems sequencer and bandpass-filtered sawtooth engine for monome norns. In this workshop you will learn how L-system algorithms are used to produce musical sequences while exploring the script’s UI and features.
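Flora’s own rewrite rules aren’t documented here, but the core L-system idea the script builds on can be sketched in a few lines of Python. The rules and note mapping below are hypothetical, chosen only to show how parallel symbol rewriting yields a musical sequence:

```python
# A minimal L-system sketch (not Flora's actual implementation):
# every symbol is rewritten in parallel each generation, and the
# resulting string is read as a sequence of note events.

RULES = {"A": "AB", "B": "A"}  # hypothetical rewrite rules

def expand(axiom, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

# Map each symbol to a pitch to get a playable sequence.
NOTE_MAP = {"A": 60, "B": 64}  # hypothetical MIDI note numbers

sequence = [NOTE_MAP[ch] for ch in expand("A", 4)]
print(sequence)
```

Because every generation rewrites all symbols at once, short rule sets quickly produce long, self-similar sequences, which is what gives L-system sequencers their characteristic organic repetition.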
By the end of the first workshop, you will be able to:
- Navigate the Flora UI and parameters menus to build and perform your own compositions
- Create dynamically shaped, multi-nodal envelopes to modulate Flora’s bandpass-filtered sawtooth engine
- Build generative polyrhythms and delays into your compositions
- Use crow and/or MIDI-enabled controllers and synthesizers to play Flora
Session study topics:
- Sequencing with L-system algorithms
- Physical modeling synthesis with bandpass filters
- Generating multi-nodal envelopes
- Norns integration with MIDI and/or crow
Requirements
- A computer and internet connection
- A norns device with Flora installed
- Optional: a MIDI-enabled controller and/or synthesizer
We have a number of sponsorship places available, if the registration fee is a barrier to you joining the workshop please contact laura@stagingmhs.local.
About the workshop leader
Jonathan Snyder is a Portland, Oregon-based sound explorer and educator.
Previously, he worked for 22 years as a design technologist, IT manager, and educator at Columbia University’s Media Center for Art History, at Method, and at Adobe.
TouchDesigner meetup 15th May
Date & Time: Saturday 15th May 5pm UK / 6pm Berlin / 9am LA / 12pm NYC
Meetup length: 2 hours
Level: Open to all levels
Meetups are a great way to meet and be inspired by the TouchDesigner community.
What to expect?
The meetup runs via Zoom and is two hours long.
This session focuses on Typography and Graphic Design and features presentations from:
1. Jash: 36 Days Of Type
Jash is a Canadian/American photographer, videographer, and motion graphics artist. He recently challenged himself to take part in the annual 36 Days Of Type challenge, creating a generative composition in TouchDesigner every day for 36 days. https://justjash.com/
2. Caroline Reize: Design for New Media: An introduction to design and its application in new media
Caroline is a German-born media artist who makes experimental visuals based on minimalism. Her work focuses on drawing inner emotion to create an image. The shapes and colors that are obtained from this allow the viewer a new experience. To develop a sufficient multi-sensory experience, she utilizes various forms of media, collaborates with sound artists, and works on narrowing the gap between art and technology. http://carolinereize.com/
3. Hugues Kir: The “Frog Effect”
A benevolent approach to graphic design in TouchDesigner
https://derivative.ca/p/62998 @smooth_isfast
Following these presentations breakout rooms are created where you can:
- Talk to the presenters and ask questions
- Join a room on topics of your choice
- Show other participants your projects, ask for help, or help others out
- Meet peers in the chill-out breakout room
Requirements
- A computer and internet connection
- A Zoom account
Berlin Code of Conduct
We ask all participants to read and follow the Berlin Code of Conduct and contribute to creating a welcoming environment for everyone.
Supported by
Livestream: Nestup – A Language for Musical Rhythms
Date & Time: Monday 10th May 6pm UK / 7pm Berlin / 10am LA / 1pm NYC
In this livestreamed interview, we will speak with Sam Tarakajian and Alex Van Gils, who have built Nestup, a fantastic live-coding environment for rhythm that runs inside an Ableton Live device.
The programs we use to make music have a lot of implicit decisions baked into them, especially in their graphical interfaces. Nestup began as a thought experiment, trying to see if embedding a text editor inside Live could open up new creative possibilities. We think the answer is that yes, text can work well alongside a piano roll and a traditional musical score, as a concise and expressive way to define complex rhythms.
With Nestup, you define for yourself any size of rhythmic unit, any sort of rhythmic subdivision, and any scaling factor. These language features open your rhythm programming up to musical ideas such as metric modulation, nested tuplets, complex polyrhythms, and more. Rhythms that would be prohibitively difficult to program in a DAW, such as those found in Armenian folk music or “new complexity” compositions, can therefore be rendered in MIDI.
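Nestup’s actual syntax isn’t reproduced here, but the underlying idea of nested subdivision can be sketched in Python: treat a rhythm as a tree, where each level divides its parent’s duration equally among its children. This is a hypothetical model of the concept, not Nestup’s implementation:

```python
# Sketch of nested rhythmic subdivision (the concept behind nested
# tuplets, not Nestup's actual syntax or implementation).
# A rhythm is a tree: a leaf is one event; an internal node splits
# its parent's duration equally among its children.

def onsets(tree, start=0.0, dur=1.0):
    """Return (onset, duration) pairs for each leaf of the tree."""
    if not isinstance(tree, list):       # leaf: a single event
        return [(start, dur)]
    child = dur / len(tree)              # equal subdivision
    out = []
    for i, sub in enumerate(tree):
        out.extend(onsets(sub, start + i * child, child))
    return out

# A triplet whose middle slot is itself divided in two:
# three top-level slots of 1/3; the middle slot holds two 1/6 events.
print(onsets(["x", ["x", "x"], "x"]))
```

Because subdivisions nest recursively, arbitrarily deep tuplets fall out of the same few lines, which is the property that makes a textual rhythm language so compact compared with a piano roll.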
Overview of speakers
Sam is a Brooklyn based developer and creative coder. Sam works for Cycling ‘74 and develops independent projects at Cutelab NYC. Alex is a composer, performer, and generative video artist based in Brooklyn.
Sam and Alex have been making art with music and code together for over 10 years, beginning with a composition for double bass and Nintendo Wiimote while undergraduates and continuing to include electroacoustic compositions, live AR performance art, installation art, Max4Live devices, and now Nestup, the domain-specific language for musical rhythms.
Where to watch?
YouTube –
Creative Audio and MIDI in Ableton Live – On-demand
If you’d like to support the Music Hackspace to continue to build a program of free workshops, a voluntary contribution would be much appreciated.
Level: Intermediate
Ableton Live offers a vast playground of opportunities for musical composition and production. These include converting audio-based harmony, melody and rhythm to MIDI, alongside techniques such as slicing audio into sampling tools that can be triggered via MIDI. In this workshop you will creatively explore and deploy a range of Audio and MIDI manipulation tools in a musical setting. This workshop aims to provide you with the skills to exploit the creative possibilities of Audio and MIDI manipulation in the Ableton Live environment.
Session Learning Outcomes
By the end of this session a successful student will be able to:
- Convert Audio to MIDI
- Slice Audio to MIDI
- Manipulate Audio via MIDI slices
- Utilise Audio and MIDI to create novel musical and sonic elements
Session Study Topics
- Converting Audio to MIDI
- Slicing Audio to MIDI
- Manipulating slices within Simpler
- Creatively using Audio and MIDI
Requirements
- A computer and internet connection
- Access to a copy of Live Suite (e.g. a trial or full license)
About the workshop leader
Anna is a London-based producer, engineer, vocalist and educator.
Anna is currently working as a university lecturer in London, teaching music production, creating educational content and working on her next releases as ANNA DISCLAIM.
Discover the new features in Max for Live 11 – On demand
Level: Intermediate
Max for Live allows users to develop their own devices for use in composition, performance and beyond. The recent release of Live 11 Suite brings a myriad of new features and tools for musicians and programmers alike. In this workshop you will explore these new tools and features and learn to leverage them in your own musical works and patches.
By the end of this session a successful student will be able to:
- Explore new MPE possibilities
- Utilise the new devices
- Identify the new integrations and objects
- Understand and deploy the new features for developers
Session Study Topics
- MPE and Max for Live
- New Max for Live devices
- New integrations and objects in Max for Live with Live 11
- New features for developers of Max for Live devices with Live 11
Requirements
- A computer and internet connection
- Access to a copy of Live 11 Suite & Max for Live (e.g. a trial or full license)
About the workshop leader
Mark Towers is an Ableton Certified Trainer and a lecturer in music technology at Leicester College. He specialises in Max for Live, as well as working with Isotonik Studios to create unique and creative devices for music production and performance such as the Arcade Series.
Supported by
Max and Machine Learning with RunwayML – On-demand
Level: Intermediate
RunwayML is a platform that offers AI tools to artists without any coding experience. Max/MSP is a visual programming environment used in media art, and it can control RunwayML in a more efficient way. By the end of the workshop you will be able to train trendy machine learning models and generate videos by walking a latent space through Max and NodeJS.
Session Learning Outcomes
By the end of the course a successful student will be able to:
- Understand the RunwayML workflow
- Use Node4Max to control RunwayML and generate a video
- Explore trendy ML models
- Create a dataset
- Train an ML model
- Process videos with the VIZZIE library
Session 1
– Introduction to the course
– What’s machine learning, deep learning and neural networks?
– What’s RunwayML?
Session 2
– What’s a GAN and styleGAN?
– Latent space walk
– Image and video generation with RunwayML, Max and Node4Max (part 1)
Session 3
Session 4
– Processing images and videos with VIZZIE2 and Jitter
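The “latent space walk” from Session 2 can be sketched independently of RunwayML’s API: pick a few keyframe latent vectors and interpolate between them, rendering one frame per step. The vector dimension and step counts below are illustrative, not values RunwayML prescribes:

```python
# Sketch of a latent space walk (the general idea behind StyleGAN
# video generation, independent of RunwayML's API): interpolate
# between random latent vectors and render one frame per step.
import random

def lerp(a, b, t):
    """Linearly interpolate between two vectors at position t in [0, 1)."""
    return [x + (y - x) * t for x, y in zip(a, b)]

def latent_walk(dim, keyframes, steps_between):
    """Yield latent vectors tracing a path through random keyframes."""
    points = [[random.gauss(0, 1) for _ in range(dim)]
              for _ in range(keyframes)]
    for a, b in zip(points, points[1:]):
        for s in range(steps_between):
            yield lerp(a, b, s / steps_between)

frames = list(latent_walk(dim=512, keyframes=4, steps_between=30))
print(len(frames))  # 3 segments x 30 steps = 90 latent vectors
```

Each yielded vector would be sent to the generator (via Node4Max in the workshop’s setup) to produce one video frame; smoother walks just use more steps between keyframes.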
Session Study Topics
- Generate images and video through AI
- Request data from models and save images to your local drive
- Generate video from images
- Communication protocols (WebSockets and HTTPS requests)
- AI models used in visual art
- Video processing
- Model training
Requirements
- A computer and internet connection
- Access to a copy of Max 8 (either trial or full license)
- A code editor such as Visual Studio Code, Sublime or Atom
- A RunwayML account – https://app.runwayml.com/signup
- Upon setting up an account you will receive $10 of credit for free
- Approx. $50 of credit will be required to complete the course; this does not need to be purchased in advance
- A 20% RunwayML discount code will be provided to participants who sign up to the course
About the workshop leader
Marco Accardi is a trained musician, multimedia artist, developer and teacher based in Berlin.
He is the co-founder of Anecoica, a collective that organises events combining art, science and new technologies.
Android Audio Development Fundamentals – On-demand
Level: Intermediate
Android is the leading mobile operating system, with billions of active devices worldwide. In this workshop you will learn the fundamental principles needed to create high performance audio apps on the platform. From the basic setup to the creation of a sequencer based app, we will cover every aspect you need to build your own version of what a great Android audio application should be.
By the end of this series a successful student will be able to:
- Be familiar with the Android development environment
- Understand the logic behind a real-time audio processing app on the platform
- Create GUI controls to interact with the sound
- Implement a sequencer-based application
Study topics:
- Android Studio
- Native project structure (JNI, CMake)
- Oboe library usage
- Android Layout Editor
# Session 1: Hello world
- Setting up Android Studio
- Build hello world code
- Emulator
- USB debugging/apk deliverable
# Session 2: Basic tone generation
- Native project logic (JNI/CMake)
- Oboe setup
- Basic sine wave processing
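In the workshop this runs in C++ inside an Oboe audio callback, but the per-sample math of basic tone generation is language-independent: accumulate a phase increment and take its sine. A sketch in Python (the 48 kHz sample rate is an assumption, chosen as a common Android output rate):

```python
# Sketch of per-sample sine generation, the same math an Oboe
# audio callback performs in C++ (phase accumulation per sample).
import math

SAMPLE_RATE = 48000  # assumed; a common Android output rate

def render_sine(freq, num_frames, sample_rate=SAMPLE_RATE):
    """Fill a mono buffer of num_frames samples with a sine tone."""
    phase_inc = 2 * math.pi * freq / sample_rate
    return [math.sin(i * phase_inc) for i in range(num_frames)]

# One audio callback's worth of frames at 440 Hz.
buffer = render_sine(440.0, 192)
```

In a real callback the phase must persist between buffers (carry the last phase forward rather than restarting at zero), otherwise each buffer boundary produces an audible click.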
# Session 3: Parameters and controls
- Layout editor
- Bypass button
- Sine wave frequency/volume sliders
- Custom UI component (knob)
# Session 4: Sequencer app
- GUI: play button + 4 step on/off + 4 pitch sliders
- Audio engine: associated processing code
- Visual feedback from engine (C++ to Java calls)
- Sequencer playhead position feedback
Requirements
- A computer and internet connection
- A webcam and mic
- A Zoom account
- A basic familiarity with Java or C++ and audio processing
- An Android phone or tablet
- A USB cable to connect the phone/tablet to your computer
About the workshop leader
Baptiste Le Goff is a French software engineer focused on the design and implementation of electronic music instruments.
After 6 years working for Arturia – moving from software development to product management – he founded Meteaure Studios to build music making apps for Android and empower the next generation of mobile producers.
Supported by Android
Livestream: TidalCycles – growing a language for algorithmic pattern
Thursday 20th May 6pm UK / 7pm Berlin / 10am LA / 1pm NYC
In this livestreamed interview, Alex McLean retraces the history and intent that prompted him to develop TidalCycles alongside ‘Algorave’ live performance events, helping to establish live coding as an art discipline.
Alex started the TidalCycles project for exploring musical patterns in 2009; it is now a healthy free/open-source software project and among the best-known live coding environments for music.
TidalCycles represents musical patterns as a function of time, making them easy to make, combine and transform. It is generally partnered with the SuperDirt hybrid synthesiser/sampler, created by Julian Rohrhuber using SuperCollider.
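As a rough analogy (in Python rather than TidalCycles’ Haskell), a pattern can be modelled as a function that, given a time span measured in cycles, returns the events inside it; transformations like speeding up then become functions on functions. This is an illustrative model, not TidalCycles’ actual representation:

```python
# Sketch of the "pattern as a function of time" idea: a pattern
# maps a queried time span (in cycles) to the events active in it.
# (A Python analogy, not TidalCycles' Haskell implementation.)

def pure(value):
    """A pattern that repeats `value` once per cycle."""
    def query(begin, end):
        return [(float(cycle), float(cycle + 1), value)
                for cycle in range(int(begin), int(end))]
    return query

def fast(factor, pattern):
    """Speed a pattern up by querying a scaled time span."""
    def query(begin, end):
        return [(b / factor, e / factor, v)
                for b, e, v in pattern(begin * factor, end * factor)]
    return query

p = fast(2, pure("bd"))
print(p(0, 1))  # two "bd" events packed into one cycle
```

Because patterns are queried lazily over a span rather than stored as event lists, transformations compose freely and infinite patterns cost nothing until queried, which is what makes the representation easy to make, combine and transform.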
Culturally, TidalCycles is tightly linked to Algorave, a movement created by Alex McLean and Nick Collins in 2011, where musicians and VJs make algorithms to dance to.
Where to watch?
Facebook – https://www.facebook.com/musichackspace/
Overview of speaker
Alex McLean is a musician and researcher based in Sheffield, UK. As well as working on TidalCycles, he researches algorithmic patterns in ancient weaving as part of the PENELOPE project, based at the Deutsches Museum, Munich. He has organised hundreds of events in the digital arts, including the annual AlgoMech festival of Algorithmic and Mechanical Movement. Alex co-founded the international conferences on live coding and live interfaces, and co-edited the Oxford Handbook of Algorithmic Music. As a live coder he has performed worldwide, including at the Sonar, No Bounds, Ars Electronica, Bluedot and Glastonbury festivals.
Getting Started with Gen – On-demand
Level: Intermediate / Previous experience with MSP is required.
Build highly efficient signal processing operations in Max using Gen~. In this series of 4 workshops, you will learn the fundamentals of signal processing and develop skills to confidently code with Gen~ in Max. The course contains 24 custom-made example patches along with audio samples that you will build as exercises during the course and be able to use in your own projects.
Series Learning Outcomes
By the end of this series a successful student will be able to:
- Become familiar with the Gen~ environment
- Build various audio processing tools via Gen~ (e.g. delay FX, AM and FM tools)
- Construct basic Gen~ sampling and synthesis tools
- Apply a myriad of Gen~ operators
Series Study Topics
- The Gen~ environment
- Audio processing in Gen~
- Gen~ sampling and synthesis tools
- Gen~ operators and data management
Requirements
- A computer and internet connection
- Access to a copy of Max 8 (e.g. a trial or full license)
About the workshop leader
Phelan Kane is a Berlin- and London-based music producer, engineer, artist, developer and educator.
He is currently running the electronic music record label Meta Junction Recordings and the audio software development company Meta Function. He has released the Max for Live device synth Wave Junction in partnership with Sonicstate.
Granular Synthesis: Getting Started with Grainflow – On-demand
Level: Advanced
Grainflow is a package for Max/MSP that uses gen~ and the MC wrapper to allow users to control large numbers of sample-accurate grains. This workshop will teach participants how to use and control large numbers of grains using the Grainflow package and Max’s multichannel wrapper.
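Grainflow’s gen~ internals aren’t reproduced here, but the general granulation technique it implements can be sketched as follows: many short, windowed slices of a source buffer are summed back at new positions. All parameter values below are illustrative:

```python
# Sketch of granular playback (the general technique, not
# Grainflow's actual gen~ implementation): short Hann-windowed
# grains are read from a source buffer and summed at new positions.
import math
import random

def hann(n, size):
    """Hann window value for sample n of a grain `size` samples long."""
    return 0.5 * (1 - math.cos(2 * math.pi * n / size))

def granulate(buffer, grain_size, num_grains):
    """Sum randomly placed, windowed grains into an output buffer."""
    out = [0.0] * len(buffer)
    span = len(buffer) - grain_size
    for _ in range(num_grains):
        src = random.randrange(span)  # where the grain reads from
        dst = random.randrange(span)  # where the grain is written
        for n in range(grain_size):
            out[dst + n] += buffer[src + n] * hann(n, grain_size)
    return out

# Granulate one second of a 440 Hz sine at 48 kHz (values illustrative).
source = [math.sin(2 * math.pi * 440 * i / 48000) for i in range(48000)]
result = granulate(source, grain_size=2400, num_grains=50)
```

Decoupling the read position (src) from the write position (dst) is what enables the time-stretching covered later in the session: advance dst at normal speed while advancing src more slowly.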
By the end of the session students should be able to:
- Develop an understanding of granulation and granular synthesis
- Use Grainflow to build a granular file player
- Use Grainflow for live granulation
- Build a granular time-stretching tool
- Use the MC output of Grainflow to bus grains stochastically to different effects
Study Topics
- Introduction to Grainflow~ – parameters and controls
- Building a sound-file granulator using Grainflow~
- Building a live granulator using Grainflow~
- Building a real-time time-stretcher
- Building a system for stochastically bussing grains into several effects
Requirements
- A computer and internet connection
- Access to a copy of Max 8
- The Grainflow package
About the workshop leader
Christopher Poovey is a Dallas-based electroacoustic composer, media artist, and developer.
He is currently a PhD candidate at the University of North Texas with a research focus in interactive computer music and immersive installation. Chris has developed several software packages for Max as well as a number of Max for Live devices and VST instruments built using
