Build a MIDI 2.0 program using the Apple UMP API – Workshop 2 / December 6th

Date & Time: Monday 6th December 2021 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

This workshop builds on the first UMP Workshop and focuses on C++ development using the new Apple UMP API. An automatic 20% discount will be applied at checkout if this workshop is purchased at the same time as the first workshop.

2 hours

Difficulty level: Advanced

  • Inspect the new Apple UMP API
  • What can be done with the API, and where are its limitations?
  • Build a simple UMP program in C++

Overview

This workshop builds on Workshop 1 and will provide developers with knowledge and code for MIDI 2.0 Universal MIDI Packet (UMP) development using the Apple UMP API in C++. The Apple UMP API will be presented and explained. Then the participants will co-develop a simple implementation in C++ using the Apple UMP API. For that, a stub workspace will be provided. Exercises will let the participants practice the newly learned concepts. Xcode on macOS 11 is required to build the workshop code.
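For orientation before the session, here is a minimal sketch of the style of code the workshop works towards, assuming CoreMIDI's MIDIEventList API from macOS 11; the helper function and the port/destination variables are illustrative placeholders, not the workshop's stub code:

```cpp
// Minimal sketch: sending a MIDI 2.0 note-on via CoreMIDI's MIDIEventList API
// (macOS 11+). Error handling and endpoint discovery are omitted; outPort and
// dest are assumed to have been obtained elsewhere.
#include <CoreMIDI/CoreMIDI.h>

void sendNoteOn(MIDIPortRef outPort, MIDIEndpointRef dest)
{
    // One MIDI 2.0 channel-voice message = two 32-bit UMP words.
    // Word 0: message type 0x4, group 0, status 0x90 (note on, channel 0),
    //         note 60, attribute type 0. Word 1: 16-bit velocity (max) in the
    //         upper half, attribute data 0 in the lower half.
    UInt32 words[2] = { 0x40903C00, 0xFFFF0000 };

    MIDIEventList list;
    MIDIEventPacket* packet = MIDIEventListInit(&list, kMIDIProtocol_2_0);
    packet = MIDIEventListAdd(&list, sizeof(list), packet, 0 /*now*/, 2, words);
    MIDISendEventList(outPort, dest, &list);
}
```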

Learning outcomes

At the end of the workshop the participants will:

  • Be able to build MIDI 2.0 products that use UMP via the Apple UMP API

Study Topics

  • Looking at the Apple UMP API
  • Extending the code from Workshop 1 with Apple I/O
  • Presenting fragments of the code in the stub workspace
  • Testing and interoperability with MIDI 1.0

Level of experience required

  • Attendees who joined Workshop 1 <add link>
  • Some experience with C++ coding required
  • Attendees should be familiar with MIDI 1.0; they should have experience building and debugging applications using Xcode (macOS)

Any technical requirements for participants 

  • A computer and internet connection
  • A webcam and mic
  • A Zoom account
  • For development: Xcode on macOS 11

About the workshop leader 

Florian Bomers runs his own company Bome Software, creating MIDI tools and hardware. He has been an active MIDI 2.0 working group member since its inception. He serves on the Technical Standards Board of the MIDI Association and chairs the MIDI 2.0 Transports Working Group. He is based in Munich, Germany.

MIDI 2.0 – Introduction to the Universal MIDI Packet – Workshop 1 / November 29th

Date & Time: Monday 29th November 2021 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

This workshop is followed by two more workshops exploring specific implementations with the Apple UMP API and the cross-platform JUCE UMP API. An automatic 20% discount on Workshop 2 and/or Workshop 3 will be applied when purchased with this workshop.

2 hours

Difficulty level: Advanced

MIDI 2.0 is set to power the next generation of hardware and software with enhanced features for discovery, expression and faster communication. The Universal MIDI Packet (UMP) is a fundamental part of MIDI 2.0 that allows programs to negotiate and communicate with both MIDI 1.0 and MIDI 2.0 products.

In this workshop, a member of the MIDI Association Technical Standards Board who wrote the specifications will show you how to get started working with UMP and how to write a simple C++ program that utilises UMP.

Overview

This workshop will provide developers with knowledge and code for starting MIDI 2.0 Universal MIDI Packet (UMP) development in C++. The concepts of UMP will be explained. Then, the participants will co-develop a first simple implementation of a generic UMP parser in plain C++. For that, a stub workspace will be provided. Exercises will let the participants practice the newly learned concepts.
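As a flavour of what the co-developed parser handles, here is a small illustrative sketch (not the workshop's stub code) showing how a generic parser can derive a packet's size in 32-bit words from the Message Type nibble of its first word:

```cpp
// Illustrative sketch: determine how many 32-bit words a UMP packet occupies,
// based on the Message Type (MT) nibble in bits 31..28 of the first word.
#include <cstdint>

int umpPacketWords(uint32_t firstWord)
{
    uint8_t messageType = (firstWord >> 28) & 0x0F;
    switch (messageType) {
        case 0x0: // Utility messages
        case 0x1: // System real time / common
        case 0x2: // MIDI 1.0 channel voice
            return 1;
        case 0x3: // Data messages (SysEx7)
        case 0x4: // MIDI 2.0 channel voice
            return 2;
        case 0x5: // Data messages (128 bit)
            return 4;
        default:  // Reserved message types: not handled by this sketch
            return 1;
    }
}
```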

Who is this workshop for:

Developers wanting to learn how the new MIDI 2.0 packet format works under the hood, and how to get started writing software for it right away.

Learning outcomes

At the end of the workshop the participants will:

  • Understand the core concepts of UMP
  • Be able to build applications in C++ using UMP

Study Topics

  • UMP Basics
  • packet format
  • MIDI 1.0 in UMP
  • MIDI 2.0 in UMP
  • Translation
  • Protocol Negotiation in MIDI-CI
  • Inspecting the UMP C++ class in the stub workspace
  • A simple UMP parser in C++
  • Unit Testing the UMP class

Level of experience required: 

  • Some experience with C++ coding
  • A development environment set up and ready with Xcode (macOS) or Visual Studio (Windows)
  • Working knowledge of MIDI 1.0

Any technical requirements for participants 

  • A computer and internet connection
  • A webcam and mic
  • A Zoom account
  • Xcode (macOS) / Visual Studio (Windows)

About the workshop leader 

Florian Bomers runs his own company Bome Software, creating MIDI tools and hardware. He has been an active MIDI 2.0 working group member since its inception. He serves on the Technical Standards Board of the MIDI Association and chairs the MIDI 2.0 Transports Working Group. He is based in Munich, Germany.

Getting started with Interactive Machine Learning for openFrameworks – On-demand

Level: Intermediate – C++ required

Using openFrameworks, ofxRapidLib and ofxMaximilian, participants will learn how to integrate machine learning into generative applications. You will learn about the interactive machine learning workflow and how to implement classification, regression and gestural recognition algorithms.

You will explore a static classification approach that employs the k-Nearest Neighbour (KNN) algorithm to categorise data into discrete classes. This will be followed by an exploration of static regression problems that use multilayer perceptron neural networks to perform feed-forward, non-linear regression on a continuous data source. You will also explore an approach to temporal classification using dynamic time warping, which allows you to analyse and process gestural input.
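As a preview of that workflow, the sketch below shows how a regression model is trained and run with RapidLib, the library that ofxRapidLib wraps; the exact namespace and header names may differ between addon versions, and the input/output values are placeholders:

```cpp
// Minimal sketch of the interactive ML workflow with RapidLib (the library
// behind ofxRapidLib). Namespacing may differ between versions; older examples
// use the classes without the rapidLib:: prefix.
#include <vector>
#include "rapidLib.h"

int main()
{
    rapidLib::regression model;                      // MLP regression
    std::vector<rapidLib::trainingExample> examples;

    // Record labelled examples: e.g. control position in -> synth params out.
    rapidLib::trainingExample ex;
    ex.input  = { 0.2, 0.7 };
    ex.output = { 440.0, 0.5 };                      // e.g. frequency, amplitude
    examples.push_back(ex);

    ex.input  = { 0.8, 0.1 };
    ex.output = { 880.0, 0.9 };
    examples.push_back(ex);

    model.train(examples);

    // Run the trained model on new input to drive the audio engine.
    std::vector<double> out = model.run({ 0.5, 0.5 });
    return 0;
}
```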

This knowledge will allow you to build your own complex interactive artworks.

By the end of this series the participant will be able to:

Overall:

  • Set up an openFrameworks project for machine learning

  • Describe the interactive machine learning workflow

  • Identify the appropriate contexts in which to implement different algorithms

  • Build interactive applications based on classification, regression and gestural recognition algorithms

Session 1:

  • Set up an openFrameworks project for classification

  • Collect and label data

  • Use the data to control audio output

  • Observe output and evaluate model

Session 2:

  • Set up an openFrameworks project for regression

  • Collect data and train a neural network

  • Use the neural network output to control audio parameters

  • Adjust inputs to refine the output behaviour

Session 3:

  • Set up an openFrameworks project for series classification

  • Design gestures as control data

  • Use classification of gestures to control audio output

  • Refine gestural input to attain desired output

Session 4:

  • Explore methods for increasing complexity

  • Integrate visuals for multimodal output

  • Build mapping layers

  • Use models in parallel and series

Session Study Topics

Session 1:

  • Supervised Static Classification

  • Data Collection and Labelling

  • Classification Implementation

  • Model Evaluation

Session 2:

  • Supervised Static Regression

  • Data Collection and Training

  • Regression Implementation

  • Model Evaluation

Session 3:

  • Supervised Series Classification

  • Gestural Recognition

  • Dynamic Time Warp Implementation

  • Model Evaluation

Session 4:

  • Data Sources

  • Multimodal Integration

  • Mapping Techniques

  • Model Systems

Requirements

  • A computer with internet connection

  • Installed versions of the following software:

    • openFrameworks

    • ofxRapidLib

    • ofxMaxim

  • Preferred IDE (e.g. Xcode / Visual Studio)

About the workshop leader 

Bryan Dunphy is an audiovisual composer, musician and researcher interested in using machine learning to create audiovisual art. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He is close to completion of his PhD in Arts and Computational Technology at Goldsmiths, University of London.

Getting started with MIDI 2.0 development – On-demand

If you’re looking to book multiple tickets, please contact us for an invoice at info@stagingmhs.local

Level: Intermediate. Some experience with C++ coding required; experience with JUCE recommended.

To make the most of this on-demand workshop, participants should have experience building and debugging applications using Xcode (macOS) and Visual Studio (Windows).

Who is this course for:

Developers wanting to learn how MIDI 2.0 works under the hood, and how to get started writing software for it right away.

Overview of what participants will learn:

This course will provide developers with knowledge and code for starting MIDI 2.0 development. At first, the concepts of MIDI 2.0 are explained. Then, the participants will co-develop a first implementation of a MIDI-CI parser for robust device discovery, and for querying and offering profiles. For that, a stub workspace will be provided. Exercises will let the participants practice the newly learned concepts. Last, but not least, this course also includes automated testing as a tool to verify the implementation.
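To give a sense of what the parser deals with, here is a hedged sketch of how a MIDI-CI Discovery message can be assembled as a Universal System Exclusive message; the field order follows the MIDI-CI specification, while the identity and capability values shown are placeholders to be replaced with real ones:

```cpp
// Hedged sketch: assembling a MIDI-CI Discovery message (Universal SysEx,
// sub-ID#1 0x0D, sub-ID#2 0x70). Field order follows the MIDI-CI spec;
// the manufacturer/family/model/revision bytes here are placeholders.
#include <cstdint>
#include <vector>

std::vector<uint8_t> makeDiscovery(uint32_t sourceMuid)
{
    std::vector<uint8_t> msg = { 0xF0, 0x7E, 0x7F, 0x0D, 0x70, 0x01 };

    auto push28bit = [&msg](uint32_t value) {
        // 28-bit value packed into four 7-bit bytes, LSB first.
        for (int i = 0; i < 4; ++i)
            msg.push_back((value >> (7 * i)) & 0x7F);
    };

    push28bit(sourceMuid);      // source MUID (random 28-bit number)
    push28bit(0x0FFFFFFF);      // destination MUID: broadcast

    // Device identity: manufacturer (3), family (2), model (2), revision (4).
    for (int i = 0; i < 11; ++i) msg.push_back(0x00);   // placeholders

    msg.push_back(0x00);        // capability inclusion bits (placeholder)
    push28bit(512);             // receivable maximum SysEx message size

    msg.push_back(0xF7);
    return msg;
}
```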

Part 1: Overview of MIDI 2, concepts

  • MIDI-CI, Profiles, protocol negotiation, PE, UMP
  • Concepts
  • Tools
  • MIDI-CI Message Layout

Part 2: Workspace setup, Basic MIDI 2.0 Discovery

  • Workspace setup
  • Starting with a unit test
  • Implementing a MIDI 2.0 message parser
  • Implement MIDI 2.0 discovery

Part 3: Advanced MIDI 2.0 discovery and tests

  • Making the parser more robust
  • MUID collision handling
  • Multi-port and MIDI Thru issues
  • Unit tests and implementation

Part 4: Implementing Profiles; outlook on PE and UMP

  • Use Cases
  • Sending and receiving Profile messages
  • Implementation and tests
  • Quick introduction to PE and to UMP

 

At the end of the course series, the participants will:

  • Know the core concepts of MIDI 2.0
  • Understand the MIDI 2.0 discovery protocol
  • Be able to build products with MIDI 2.0 discovery
  • Be able to build products using MIDI 2.0 Profiles
  • Use an initial set of MIDI 2.0 unit tests

Requirements

A computer and internet connection

Xcode (macOS)/Visual Studio (Windows)

JUCE workspace

About the course leaders

Brett Porter is Lead Software Engineer at Artiphon, member of the MIDI Association Executive Board, and chair of the MIDI 2 Prototyping and Testing Working Group. He is based in the New York City area.

Florian Bomers runs his own company Bome Software, creating MIDI tools and hardware. He has been an active MIDI 2.0 working group member since its inception. He serves on the Technical Standards Board of the MIDI Association and chairs the MIDI 2.0 Transports Working Group. He is based in Munich, Germany.

Android Audio Development Fundamentals – On-demand

Level:  Intermediate

Android is the leading mobile operating system, with billions of active devices worldwide. In this workshop you will learn the fundamental principles needed to create high-performance audio apps on the platform. From the basic setup to the creation of a sequencer-based app, we will cover every aspect you need to build your own version of a great Android audio application.

By the end of this series a successful student will:

  • Be familiar with the Android development environment

  • Understand the logic behind a real-time audio processing app on the platform

  • Create GUI controls to interact with the sound

  • Implement a sequencer based application

Study topics: 

  • Android Studio

  • Native project structure (JNI, CMake)

  • Oboe library usage

  • Android Layout Editor

Session 1: Hello world

  • Setting up Android Studio
  • Build hello world code
  • Emulator
  • USB debugging/apk deliverable

Session 2: Basic tone generation

  • Native project logic (JNI/CMake)
  • Oboe setup
  • Basic sine wave processing (see the sketch after this list)
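As a preview of this session, here is a minimal sketch of a sine-wave data callback using the Oboe library; error handling is omitted and the exact builder calls may vary slightly between Oboe versions:

```cpp
// Minimal sketch of a sine-wave generator with Oboe (error handling omitted).
#include <cmath>
#include <memory>
#include <oboe/Oboe.h>

class SineCallback : public oboe::AudioStreamDataCallback {
public:
    oboe::DataCallbackResult onAudioReady(oboe::AudioStream* stream,
                                          void* audioData,
                                          int32_t numFrames) override {
        auto* out = static_cast<float*>(audioData);
        const float twoPi = 2.0f * float(M_PI);
        const float phaseInc = twoPi * 440.0f / stream->getSampleRate();
        for (int i = 0; i < numFrames; ++i) {
            out[i] = 0.2f * std::sin(mPhase);          // mono output
            mPhase += phaseInc;
            if (mPhase > twoPi) mPhase -= twoPi;
        }
        return oboe::DataCallbackResult::Continue;
    }
private:
    float mPhase = 0.0f;
};

void startEngine(SineCallback& callback) {
    oboe::AudioStreamBuilder builder;
    std::shared_ptr<oboe::AudioStream> stream;
    builder.setFormat(oboe::AudioFormat::Float)
           ->setChannelCount(1)
           ->setPerformanceMode(oboe::PerformanceMode::LowLatency)
           ->setDataCallback(&callback)
           ->openStream(stream);
    stream->requestStart();
}
```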

Session 3: Parameters and controls

  • Layout editor
  • Bypass button
  • Sine wave frequency/volume sliders
  • Custom UI component (knob)

Session 4: Sequencer app

  • GUI: play button + 4 step on/off + 4 pitch sliders
  • Audio engine: associated processing code
  • Visual feedback from engine (C++ to Java calls)
  • Sequencer playhead position feedback

Requirements

  • A computer and internet connection

  • A webcam and mic

  • A Zoom account

  • A basic familiarity with Java or C++ and audio processing

  • An Android phone or tablet

  • A USB cable to connect the phone/tablet to your computer

About the workshop leader

Baptiste Le Goff is a French software engineer focused on electronic music instrument design and implementation.

After 6 years working for Arturia – moving from software development to product management – he founded Meteaure Studios to build music making apps for Android and empower the next generation of mobile producers.

 

 

Supported by Android

 

Building phaser audio effects in Gen – LIVE Session

Date & Time: Tuesday 16th March 2021 6pm GMT / 7pm CET / 10am PST / 1pm EST

Level: Advanced

In this workshop, you will explore tools and techniques to create phaser audio effect devices in Gen via Max. Explore all-pass filters, feedback loops, signal routing and LFOs via a series of exercises. This workshop aims to enrich your musical output via the application of self-made audio effects and novel sound design techniques. Gen provides highly optimised audio processing that matches C++ performance and is the ideal technology for improving complex Max patches and reducing CPU load.
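The core building block of a phaser is the first-order all-pass filter; as a reference, here is the same idea written out in C++ (in the workshop it is patched with Gen objects rather than code):

```cpp
// Reference sketch of a first-order all-pass stage, the building block of a
// phaser. Several stages in series, a dry/wet mix and a little feedback
// produce the characteristic sweeping notches when g is modulated by an LFO.
struct AllpassStage {
    float x1 = 0.0f;   // previous input
    float y1 = 0.0f;   // previous output
    float g  = 0.5f;   // coefficient, typically swept by an LFO

    float process(float x) {
        // y[n] = -g * x[n] + x[n-1] + g * y[n-1]
        float y = -g * x + x1 + g * y1;
        x1 = x;
        y1 = y;
        return y;
    }
};

float phaserSample(AllpassStage* stages, int numStages,
                   float input, float feedback, float& fbSample) {
    float s = input + feedback * fbSample;   // feed a little output back in
    for (int i = 0; i < numStages; ++i)
        s = stages[i].process(s);
    fbSample = s;
    return 0.5f * (input + s);               // equal dry/wet mix creates notches
}
```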

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Identify key Gen objects for phaser audio effect devices

  • Build all-pass filter devices with feedback networks

  • Configure Gen parameters and properties

  • Add LFO networks for filter modulation

Session Study Topics

  • Gen objects

  • All-pass filters

  • Gen variables and parameters

  • LFO modulation sources

Requirements

  • A computer and internet connection

  • A web cam and mic

  • A Zoom account

  • Access to a copy of Max 8 (e.g. trial or full license)

About the workshop leader

Phelan Kane is a Berlin & London based music producer, engineer, artist, developer and educator.

He is currently running the electronic music record label Meta Junction Recordings and the audio software development company Meta Function. He has released the Max for Live device synth Wave Junction in partnership with Sonicstate.

Immersive AV Composition – On-demand / 2 Sessions

Level: Advanced

These workshops will introduce you to the ImmersAV toolkit. The toolkit brings together Csound and OpenGL shaders to provide a native C++ environment where you can create abstract audiovisual art. You will learn how to generate material and map parameters using ImmersAV’s Studio() class. You will also learn how to render your work on a SteamVR compatible headset using OpenVR. Your fully immersive creations will then become interactive using integrated machine learning through the rapidLib library.
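One of the outcomes below is implementing the Csound API within a C++ application; as a rough orientation, the sketch that follows shows that pattern on its own, independent of ImmersAV's Studio() class. The orchestra code is illustrative and the header path may vary with your Csound installation:

```cpp
// Hedged sketch: driving Csound from C++ via its API. In ImmersAV this kind
// of setup is wrapped by the toolkit's Studio() class.
#include <csound/csound.hpp>

int main()
{
    Csound csound;
    csound.SetOption("-odac");               // render to the audio device

    // A tiny orchestra: one sine oscillator instrument.
    csound.CompileOrc(
        "sr = 48000\n"
        "ksmps = 64\n"
        "nchnls = 2\n"
        "0dbfs = 1\n"
        "instr 1\n"
        "  aout oscili 0.2, 440\n"
        "  outs aout, aout\n"
        "endin\n");

    csound.ReadScore("i1 0 2\n");            // play instr 1 for 2 seconds
    csound.Start();

    // In a real-time app this loop is driven by the audio thread instead.
    while (csound.PerformKsmps() == 0) {}

    csound.Stop();
    return 0;
}
```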

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Set up and use the ImmersAV toolkit

  • Discuss techniques for rendering material on VR headsets

  • Implement the Csound API within a C++ application

  • Create mixed raymarched and raster based graphics

  • Create an interactive visual scene using a single fragment shader

  • Generate the Mandelbulb fractal

  • Generate procedural audio using Csound

  • Map controller position and rotation to audiovisual parameters using machine learning

Session Study Topics

  • Native C++ development for VR

  • VR rendering techniques

  • Csound API integration

  • Real-time graphics rendering techniques

  • GLSL shaders

  • 3D fractals

  • Audio synthesis

  • Machine learning

Requirements

  • A computer and internet connection

  • A web cam and mic

  • A Zoom account

  • Cloned copy of the ImmersAV toolkit plus dependencies

  • VR headset capable of connecting to SteamVR

About the workshop leader 

Bryan Dunphy is an audiovisual composer, musician and researcher interested in generative approaches to creating audiovisual art in performance and immersive contexts. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He has recently completed his PhD in Arts and Computational Technology at Goldsmiths, University of London.

Building Audio FX in Gen

Level: Advanced

Date: 19th November 2020, 6pm GMT

In this workshop, you will explore tools and techniques to create bespoke audio FX tools in Gen via Max. Explore delay effects, circular buffers, modulation delays, LFOs and multi-tap delays via a series of exercises. This workshop aims to enrich your musical output via the application of self-made audio FX and novel sound design techniques. Gen provides highly optimised audio processing that matches C++ performance and is the ideal technology for improving complex Max patches and freeing up CPU.
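The central structure in this session is the circular-buffer delay line; for reference, here is the same idea expressed in C++ (in the workshop it is built from Gen operators rather than code):

```cpp
// Reference sketch of a circular-buffer delay line with feedback, the
// structure behind delay, flanger and chorus effects.
#include <vector>

class DelayLine {
public:
    explicit DelayLine(int maxSamples) : buffer(maxSamples, 0.0f) {}

    // delaySamples may be modulated by an LFO for flanger/chorus effects.
    float process(float input, int delaySamples, float feedback) {
        int readPos = writePos - delaySamples;
        if (readPos < 0) readPos += (int)buffer.size();   // wrap around

        float delayed = buffer[readPos];
        buffer[writePos] = input + feedback * delayed;    // write with feedback

        if (++writePos >= (int)buffer.size()) writePos = 0;
        return delayed;
    }

private:
    std::vector<float> buffer;
    int writePos = 0;
};
```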

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Identify key Gen objects for audio FX devices

  • Build delay line audio FX devices with feedback networks

  • Configure Gen parameters and properties

  • Add LFO networks for use in Flanger and Chorus audio FX devices

Session Study Topics

  • Gen objects

  • Circular buffers

  • Gen variables and parameters

  • LFO modulation sources

Requirements

  • A computer and internet connection

  • A good working knowledge of computer systems

  • A basic awareness of audio processing

  • Good familiarity with MSP

  • Access to a copy of Max 8 (e.g. trial or full license)

About the workshop leader

Phelan Kane is a Berlin & London based music producer, engineer, artist, developer and educator. For over twenty years he has been active in both the music industry and the contemporary music education sector, with a focus on electronic music and alternative bands.

His specialism is sound design and production techniques such as synthesis and sampling, alongside audio processing and plug-in development. His credits include collaborations with Placebo, Radiohead, Fad Gadget, Depeche Mode, Moby, Snow Patrol, Mute, Sony BMG, Universal, EMI and Warner Bros. He holds an MA in Audio Technology from the London College of Music, University of West London, and an MSc in Sound & Music Computing from the Centre for Digital Music at Queen Mary University of London; in 2008 he became one of the world’s first wave of Ableton Certified Trainers.

He is a member of the UK’s Music Producers Guild, holds a PG Cert in Learning & Teaching, is an Affiliate of the Institute for Learning, a Fellow of the Higher Education Academy, and until recently was Chairman of the London Committee of the British Section of the Audio Engineering Society. He currently runs the electronic music record label Meta Junction Recordings and the audio software development company Meta Function, which specialises in Max for Live devices and released the M4L synth Wave Junction in partnership with Sonicstate.

Build a WebAssembly synthesiser with iPlug2

Learn to use the iPlug2 C++ audio plugin framework to create a synthesiser that runs on the web.

iPlug2 is a new C++ framework that allows you to build cross-platform audio plug-ins using minimal code. One of the exciting features of iPlug2 is that it lets you turn your plug-in into a web page that anyone can use without a DAW (see for example https://virtualcz.io). In this workshop participants will learn how to build a web-based synthesiser using cloud-based tools and publish it to a GitHub Pages website. We will look at some basic DSP in order to customise the sound of the synthesiser, and we will also customise the user interface. The same project builds native audio plug-ins, although in the workshop we will focus on the web version.
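To give a sense of where the synth's sound comes from, here is a hedged sketch of the kind of ProcessBlock() override an iPlug2 plug-in class provides; the class and member names are illustrative rather than taken from the workshop project:

```cpp
// Hedged sketch of the DSP side of an iPlug2 synth: the plug-in class
// overrides ProcessBlock() and writes samples into the output buffers.
// "MyWebSynth", mFreqHz and mPhase are illustrative names.
#include <cmath>
#include "IPlug_include_in_plug_hdr.h"

using namespace iplug;

class MyWebSynth final : public Plugin
{
public:
  MyWebSynth(const InstanceInfo& info);   // defined in the .cpp, as in the iPlug2 examples

  void ProcessBlock(sample** inputs, sample** outputs, int nFrames) override
  {
    const double phaseInc = 2.0 * M_PI * mFreqHz / GetSampleRate();

    for (int s = 0; s < nFrames; ++s)
    {
      const sample out = 0.25 * std::sin(mPhase);
      mPhase += phaseInc;

      for (int c = 0; c < NOutChansConnected(); ++c)   // e.g. stereo output
        outputs[c][s] = out;
    }
  }

private:
  double mPhase = 0.0;
  double mFreqHz = 220.0;
};
```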

Note from Oli: Even though the workshop might use lots of unfamiliar technologies, iPlug2 is designed to be simple to use and has many of the more confusing aspects of cross platform programming solved for you already. Don’t worry if the technology sounds scary, everyone should be able to build a custom synthesiser using the example projects and workflow.

Requirements

Useful links


About the workshop leader

Oli Larkin is an audio software developer and music technologist with over 15 years of experience developing plug-ins and plug-in frameworks. He has released his own software products and has collaborated with companies such as Roli, Arturia, Focusrite and Ableton. For many years he worked in academia, supporting audio research and sound art projects with his programming skills. Nowadays Oli is working as a freelancer, as well as focusing on his open source projects such as iPlug2.

Learn to program amazing interactive particle systems with Jitter

In this workshop, you will learn to build incredible live visuals with particle systems, using Max and Jitter.

Cycling ’74 has recently released GL3, which ties Jitter more closely to OpenGL and optimises use of the GPU. With this update, available in the Package Manager, you can build high-performance visuals without having to code them in C++.

Requirements

  • Latest version of Max 8 installed on Mac or Windows
  • A good working knowledge of Max is expected
  • Understanding of how the GEN environment works in Jitter
  • Some familiarity with textual programming languages
  • A knowledge of basic calculus is a bonus
  • The GL3 package installed
    • To install this package, open the “Package Manager” from within Max, search for the GL3 package and click “install”.

What you will learn

Session 1, 20th October, 6pm UK / 10am PDT / 1pm EST:

– Introduction to GL3 features

– Quick overview of most of the examples in the GL3 package

– Build a simple particle system from scratch

– Explorations with gravity/wind

– Exploration with target attraction

Session 2, 27th October, 6pm UK / 10am PDT / 1pm EST:

– Improve the particle system with a billboard rendering shader

– Creation of a “snow” or “falling leaves” style effect

– Starting to introduce interactivity in the system

– Using the camera input

– Connecting sound to your patches

Session 3, 3rd November, 6pm UK / 10am PDT / 1pm EST:

– Improve the system’s interactivity

– Particles emitting from object/person outline taken from camera

– Create a particle system using 3D models and the instancing technique

– Transforming an image or a video stream into particles

Session 4, 10th November, 6pm UK / 10am PDT / 1pm EST:

– Introduction to flocking behaviours and how to achieve them in GL3

– Create a 3D generative landscape and modify it using the techniques from previous sessions

– Apply post-processing effects


About the workshop leader:

Federico Foderaro is an audiovisual composer, teacher and designer of interactive multimedia installations, and the author of the YouTube channel Amazing Max Stuff.
He graduated cum laude in Electroacoustic Musical Composition from the Licinio Refice Conservatory in Frosinone, and has lived and worked in Berlin since 2016.

His main interest is the creation of audiovisual works and fragments, where the technical research is deeply linked with the artistic output.
The main tool used in his production is the software Max/MSP from Cycling ’74, which allows for real-time programming and execution of both audio and video, and represents a perfect mix between problem-solving and artistic expression.

Besides his artistic work, Federico teaches Max/MSP, both online and in workshops at different venues. The creation of commercial audiovisual interactive installations is also a big part of his working life, which has led over the years to rewarding collaborations and professional achievements.
