Notch meetup – Using Notch for live performances / October 30th 2021

Date & Time: Saturday 30th October 4pm UK / 5pm Berlin / 8am LA / 11am NYC

Meetup length: 2 hours

Level: Open to all levels

Meetups are a great way to meet and be inspired by the Notch community.

What to expect? 

The meetup runs via Zoom and will be approximately two hours long.

This session focuses on Notch for live performances and will feature presentations from expert practitioners.

  • Mikkel G. Martinsen & Lorenzo Venturini: Notch IMAG FX integration in live concert productions

www.roofvideodesign.com
www.mikkel.it
www.lorenzoventurini.com

 

  • More speakers to be announced this week!

Following the presentations, breakout rooms are created where you can:

  • Talk to the presenters and ask questions

  • Join a room on topics of your choice

  • Show other participants your projects, ask for help, or help others out

  • Meet peers in the chill-out breakout room

The list of presenters will be updated and announced before the meetup.

Requirements

  • A computer and internet connection
  • A Zoom account

 Berlin Code of Conduct

We ask all participants to read and follow the Berlin Code of Conduct and contribute to creating a welcoming environment for everyone.

Supported by Notch 

Interaction with Arduino & Max – Workshop series / On-demand

Pricing excludes the kit; components are to be purchased separately – see the kit list in the Requirements section

Level: Beginner-Intermediate

Want your Arduino to control audio, video, generative 3D visuals, or even Ableton Live? Combining Arduino with Max 8, a powerful visual programming environment, opens up many possibilities for interactive installations, generative art, multimedia performance, and more! You will learn basic electronics, introductory Arduino skills, and how to use sensors and inputs to control Max 8.

By the end of this workshop series you will be able to:

  • Create Arduino-based electronic prototypes

  • Control audio in Max 8 with sensors and your own custom hardware interfaces

  • Utilize Max to map software interactions to physical electronic systems

  • Apply interaction design concepts for developing installations and performances

Session Study Topics: 

  • Max 8 to control Digital and PWM output

  • Switches and Digital inputs mapped to states in Max 8

  • Analog sensors, smoothing data and creating meaningful interactions (see the sketch after this list)

  • Strengths, weaknesses and limitations of hardware and software
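
To give a flavour of the sensor-to-Max workflow, here is a minimal Arduino sketch, assuming a potentiometer or similar analog sensor wired to pin A0 (an assumption for illustration, not the workshop's kit list). It smooths the readings and prints them over serial, where Max's [serial] object can read them:

```cpp
// Illustrative only: analog sensor on A0, smoothed and sent over serial.
const int SENSOR_PIN = A0;
float smoothed = 0.0;

void setup() {
  Serial.begin(9600);               // Max's [serial] object reads at this baud rate
}

void loop() {
  // Exponential smoothing tames jittery analog readings
  smoothed += 0.1 * (analogRead(SENSOR_PIN) - smoothed);
  Serial.println((int)smoothed);    // one value per line, in the 0-1023 range
  delay(10);
}
```

On the Max side, a [serial] object polled by a [metro], plus a little parsing, turns these lines back into numbers you can map to any parameter.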

Requirements

About the workshop leader:

Kyle Duffield is a Toronto-based interactive and experience design professional who creates immersive interactive installations and brand activations. He is also known for his affiliation with the studio and former gallery Electric Perfume. As an educator and technical consultant, he has facilitated interactive media workshops and projects with institutions across Canada, in Shanghai, and online. Currently, Kyle is participating in Cycling '74's Max Certified Trainer Program and is focusing on creating unforgettable technological experiences.

Audio Reactive Shaders with Shader Park + TouchDesigner – On-demand

Level: Intermediate. Some previous experience with JavaScript or programming is recommended. No experience with TouchDesigner is needed.

Overview

Explore 3D shader programming through a JavaScript interface without the complexity of GLSL. Shader Park is an open-source project for creating real-time graphics and procedural animations. Follow along through multiple examples using a p5.js-style live-coding editor. Expand upon the examples and bring them into TouchDesigner to create your own audio-reactive music visualizers. Explore the Shader Park community gallery, where you can fork other people's creations or feature your own.

Who is the workshop for?

Artists interested in exploring real-time procedural 3D graphics and animations applied as music visualizations.

Developers: experience programming in JavaScript, a p5.js-style library, or a similar language is recommended. Bonus if you know shader programming.

By the end of this session a successful student will be able to:

  • Create raymarched 3D shaders with Shader Park

  • Get started with TouchDesigner and the Shader Park plugin

  • Apply basic audio analysis

  • Create your own audio reactive music visualizer

Session Study Topics:

  • TouchDesigner

  • Shader Park

  • Shaders

  • Raymarching (see the sketch below)

  • Audio analysis
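
For intuition about what raymarching means under the hood, here is a minimal CPU-side sketch of sphere tracing against a signed distance function, written in plain C++ purely for illustration. Shader Park generates comparable logic on the GPU for you, so none of this is its actual API:

```cpp
// Sphere tracing a signed distance function (SDF) on the CPU, for intuition only.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float length(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// SDF: distance from point p to a unit sphere at the origin
float sdSphere(Vec3 p) { return length(p) - 1.0f; }

// March along the ray until we hit the surface or give up
bool raymarch(Vec3 origin, Vec3 dir, float& t) {
    t = 0.0f;
    for (int i = 0; i < 64; ++i) {
        float d = sdSphere(add(origin, scale(dir, t)));
        if (d < 0.001f) return true;  // close enough: a hit
        t += d;                       // safe step: the SDF is a lower bound
        if (t > 20.0f) break;         // ray escaped the scene
    }
    return false;
}

int main() {
    float t;
    bool hit = raymarch({0.0f, 0.0f, -3.0f}, {0.0f, 0.0f, 1.0f}, t);
    std::printf("hit=%d at t=%.3f\n", hit, t);  // expect a hit near t = 2
}
```

A fragment shader runs this loop once per pixel, with the ray direction derived from the pixel's screen position; lighting then comes from the SDF's gradient.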

Requirements

  • A computer and internet connection

  • A copy of TouchDesigner

  • A downloaded song (.mp3, .aif, .aiff, .wav) you’d like to turn into a music visualizer

About the workshop leader

Torin Blankensmith and Peter Whidden formed a creative-coding organization while at college together. They hosted weekly workshops for students from various disciplines on emerging topics in computer graphics. It was in this group that the first prototype of Shader Park was developed.

Torin is a freelance creative technologist and adjunct professor at the Parsons School of Design in New York City, teaching TouchDesigner and creative coding. Based out of NEW INC, Torin creates immersive installations, experiences, interfaces, and websites. Torin’s work explores emerging techniques in real-time graphics, pulling inspiration from systems and patterns of emergent behavior in nature. This work has translated into creating large-scale environments for medical professionals to alleviate stress and burnout, for patients in clinical studies aiding neuroscience research on brain recovery, and for commercial spaces bringing sanctuary to the urban landscape.

Peter is a creative software engineer whose work spans physics, astronomy, machine learning, and computer graphics. He currently works at the NY Times R&D lab, focusing on emerging computer vision and graphics techniques. Previously, Pete worked with CERN to build interactive 3D visualizations used in particle physics. He has also developed software with the Data Intensive Research in Astrophysics and Cosmology (DIRAC) Institute, which enabled the discovery of over 30 new minor planets in the Kuiper Belt. Recently, his artistic collaborations with Alex Miller of SpaceFiller have been featured in galleries and in a permanent installation in Seattle.

Natural Machines with Dan Tepfer – LIVESTREAM

Date & Time: Thursday 17th June 2021 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

In this live stream we’ll talk with Dan Tepfer and hear more about his project Natural Machines.

In an age of unprecedented technological advancement, Dan Tepfer is changing the definition of what a musical instrument can be. Featured in an NPR documentary viewed by 1.5 million people, Dan Tepfer shows his pioneering skill in this concert by programming a Yamaha Disklavier to respond in real time to the music he improvises at the piano, while another computer program turns the music into stunning animated visual art. Called “fascinating and ingenious” by Rolling Stone, the Natural Machines performance occupies a unique intersection of mechanical and organic processes, making it “more than a solo piano album… a multimedia piece of contemporary art so well made in its process and components and expressed by such a thoughtful, talented, evocative pianist… that it becomes a complete experience” (NextBop).

Music Hackspace YouTube 

Overview of speaker

Dan Tepfer is a French-American jazz pianist and composer.

One of his generation’s extraordinary talents, Dan Tepfer has earned an international reputation as a pianist-composer of wide-ranging ambition, individuality, and drive—one “who refuses to set himself limits” (France’s Télérama). The New York City-based Tepfer, born in 1982 in Paris to American parents, has performed around the world with some of the leading lights in jazz and classical music, and released ten albums of his own.

Tepfer earned global acclaim for his 2011 release Goldberg Variations / Variations, a disc that sees him performing J.S. Bach’s masterpiece as well as improvising upon it—to “elegant, thoughtful and thrilling” effect (New York magazine). Tepfer’s newest album, Natural Machines, stands as one of his most ingeniously forward-minded yet, finding him exploring in real time the intersection between science and art, coding and improvisation, digital algorithms and the rhythms of the heart. The New York Times has called him “a deeply rational improviser drawn to the unknown.”

Tepfer’s honors include first prizes at the 2006 Montreux Jazz Festival Solo Piano Competition, the 2006 East Coast Jazz Festival Competition, and the 2007 American Pianists Association Jazz Piano Competition, as well as fellowships from the American Academy of Arts and Letters (2014), the MacDowell Colony (2016), and the Fondation BNP-Paribas (2018).

Immersive AV Composition – On-demand / 2 sessions

Level: Advanced

These workshops will introduce you to the ImmersAV toolkit. The toolkit brings together Csound and OpenGL shaders to provide a native C++ environment where you can create abstract audiovisual art. You will learn how to generate material and map parameters using ImmersAV’s Studio() class. You will also learn how to render your work on a SteamVR-compatible headset using OpenVR. You will then make your fully immersive creations interactive using machine learning via the integrated RapidLib library.
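
For a sense of what embedding Csound in a C++ application looks like, here is a minimal sketch using the Csound C++ API directly (independent of ImmersAV's Studio() class, whose interface may differ): it compiles a one-oscillator instrument and runs the performance loop.

```cpp
// A minimal Csound-in-C++ sketch: compile an orchestra, then perform it.
// The header path and library name can vary by platform and install.
#include <csound/csound.hpp>

int main() {
    Csound cs;
    cs.SetOption("-odac");                  // route audio to the default output device
    cs.CompileOrc("instr 1\n"
                  "  a1 oscili 0.2, 440\n"  // 440 Hz sine at modest amplitude
                  "  outs a1, a1\n"
                  "endin\n");
    cs.ReadScore("i 1 0 2\n");              // play instrument 1 for two seconds
    cs.Start();
    while (cs.PerformKsmps() == 0) {}       // one control block per iteration
    cs.Stop();
    return 0;
}
```

In a real-time application, the PerformKsmps() loop runs on its own thread while graphics and controller input update audiovisual parameters between control blocks.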

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Setup and use the ImmersAV toolkit

  • Discuss techniques for rendering material on VR headsets

  • Implement the Csound API within a C++ application

  • Create mixed raymarched and raster-based graphics

  • Create an interactive visual scene using a single fragment shader

  • Generate the Mandelbulb fractal

  • Generate procedural audio using Csound

  • Map controller position and rotation to audiovisual parameters using machine learning
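
As a hint of how that last mapping step works, here is a minimal regression sketch in the spirit of RapidLib, the machine-learning library ImmersAV integrates. Treat the namespace and struct layout as assumptions; they may differ between RapidLib versions:

```cpp
// Illustrative interactive-ML mapping: controller (x, y) -> synth parameters.
#include <cstdio>
#include <vector>
#include "rapidLib.h"

int main() {
    rapidLib::regression model;

    // Each training example pairs a controller position with desired parameters
    std::vector<rapidLib::trainingExample> examples;
    rapidLib::trainingExample low;
    low.input  = {0.0, 0.0};          // controller at rest
    low.output = {100.0, 0.1};        // low frequency, dark timbre
    rapidLib::trainingExample high;
    high.input  = {1.0, 1.0};         // controller fully raised
    high.output = {880.0, 0.9};       // high frequency, bright timbre
    examples.push_back(low);
    examples.push_back(high);

    model.train(examples);

    // At performance time, any controller position interpolates the mapping
    std::vector<double> params = model.run({0.5, 0.5});
    std::printf("freq=%.1f brightness=%.2f\n", params[0], params[1]);
    return 0;
}
```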

Session Study Topics

  • Native C++ development for VR

  • VR rendering techniques

  • Csound API integration

  • Real-time graphics rendering techniques

  • GLSL shaders

  • 3D fractals

  • Audio synthesis

  • Machine learning

Requirements

  • A computer and internet connection

  • A webcam and mic

  • A Zoom account

  • Cloned copy of the ImmersAV toolkit plus dependencies

  • VR headset capable of connecting to SteamVR

About the workshop leader 

Bryan Dunphy is an audiovisual composer, musician and researcher interested in generative approaches to creating audiovisual art in performance and immersive contexts. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He has recently completed his PhD in Arts and Computational Technology at Goldsmiths, University of London.

Visual Music Performance with Machine Learning – On demand

Level: Intermediate

In this workshop you will use openFrameworks to build a real-time audiovisual instrument. You will generate dynamic abstract visuals within openFrameworks and procedural audio using the ofxMaxim addon. You will then learn how to control the audiovisual material by mapping controller input to audio and visual parameters using the ofxRapidLib addon.
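
As a flavour of the audio side, here is a condensed openFrameworks sketch of FM synthesis with ofxMaxim. Names and parameter values are illustrative, not the workshop's own patch:

```cpp
// ofApp.h (condensed): FM synthesis in the openFrameworks audio callback.
#include "ofMain.h"
#include "ofxMaxim.h"

class ofApp : public ofBaseApp {
public:
    void setup() {
        maxiSettings::setup(44100, 2, 512);   // keep Maximilian in sync with the stream
        ofSoundStreamSettings s;
        s.numOutputChannels = 2;
        s.sampleRate = 44100;
        s.bufferSize = 512;
        s.setOutListener(this);
        soundStream.setup(s);
    }
    void audioOut(ofSoundBuffer& out) override {
        for (size_t i = 0; i < out.getNumFrames(); ++i) {
            // FM: the modulator oscillator wobbles the carrier's frequency
            double mod    = modulator.sinewave(modFreq) * modDepth;
            double sample = carrier.sinewave(carrierFreq + mod) * 0.2;
            out[i * 2]     = sample;          // left channel
            out[i * 2 + 1] = sample;          // right channel
        }
    }
    ofSoundStream soundStream;
    maxiOsc carrier, modulator;
    double carrierFreq = 220.0, modFreq = 110.0, modDepth = 200.0;
};
```

Mapping a neural network's outputs to carrierFreq, modFreq and modDepth (and to visual parameters in draw()) is exactly the kind of simultaneous control the session builds toward.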

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Create generative visual art in openFrameworks

  • Create procedural audio in openFrameworks using ofxMaxim

  • Discuss interactive machine learning techniques

  • Use a neural network to control audiovisual parameters simultaneously in real-time

Session Study Topics

  • 3D primitives and Perlin noise

  • FM synthesis

  • Regression analysis using multilayer perceptron neural networks

  • Real-time controller integration

Requirements

  • A computer and internet connection

  • A webcam and mic

  • A Zoom account

  • Installed version of openFrameworks

  • Downloaded addons: ofxMaxim and ofxRapidLib

  • Access to a MIDI/OSC controller (optional – a mouse/trackpad will also suffice)

About the workshop leader 

Bryan Dunphy is an audiovisual composer, musician and researcher interested in generative approaches to creating audiovisual art. His work explores the interaction of abstract visual shapes, textures and synthesised sounds. He is interested in exploring strategies for creating, mapping and controlling audiovisual material in real time. He has recently completed his PhD in Arts and Computational Technology at Goldsmiths, University of London.

Video Synthesis with Vsynth for Max – LIVE Session

Dates: Thursdays 4th / 11th / 18th / 25th February 2021 6pm GMT

Level: Intermediate +

Overview

In this series of four workshops, we’ll look at how to interconnect the 80 different modules that come with Vsynth, exploring video techniques and practices that can create aesthetics associated with the history of the electronic image, as well as complex patterns founded in basic functions of nature.

Vsynth is a high-level package of modules for Max/Jitter that together make up a modular video synthesizer. Its simplicity makes it the perfect tool for introducing yourself to video synthesis and image processing. Since it can be connected to other parts of Max, as well as other software and hardware, it can also become a really powerful and adaptable video tool for any kind of job.

Here’s what you’ll learn in each workshop:

Workshop 1:

Learn the fundamentals of digital video synthesis by diving into the different video oscillators, noise generators, mixers, colorizers and keyers. By the end of this session, students will be able to build simple custom video-synth patches with presets.

  • Video oscillators, mixers, colorizers.

Workshop 2: 

  • Modulations (phase, frequency, pulse, hue, among others).

In this workshop we will focus on the concept of modulation so that students can add another level of complexity to their patches. We’ll look at the differences between modulating the parameters of an image with simple LFOs versus with other images. Some of the modulations we’ll cover are phase, frequency, pulse width, brightness and hue.
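
To make the idea concrete, here is a toy C++ sketch (not Vsynth code) of the underlying math. Each pixel's brightness comes from a sine oscillator whose phase is offset either by an LFO, which shifts the whole frame uniformly, or by another image, which shifts each pixel differently:

```cpp
// Toy phase modulation of a video oscillator along one scanline.
#include <cmath>
#include <cstdio>

const float TWO_PI = 6.2831853f;

int main() {
    const int   width = 8;       // a tiny scanline for demonstration
    const float freq  = 2.0f;    // carrier cycles across the image
    const float depth = 0.25f;   // modulation depth
    const float lfo   = std::sin(TWO_PI * 0.1f);  // advance the 0.1 each frame to animate

    for (int x = 0; x < width; ++x) {
        float u = (float)x / width;   // normalized position across the line
        float image = u * u;          // stand-in for another image's pixel value
        // Same carrier, two phase offsets: uniform (LFO) vs per-pixel (image)
        float byLfo   = 0.5f + 0.5f * std::sin(TWO_PI * (freq * u + depth * lfo));
        float byImage = 0.5f + 0.5f * std::sin(TWO_PI * (freq * u + depth * image));
        std::printf("x=%d  lfo-modulated=%.2f  image-modulated=%.2f\n", x, byLfo, byImage);
    }
    return 0;
}
```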

Workshop 3:

  • Filters/convolutions and video feedback techniques.

  • This third workshop is divided in two. In the first half, we’ll go in depth into what low and high frequencies actually mean in the image world. We’ll then use low-pass and high-pass filters/convolutions in different scenarios to see how they affect different images.

  • In the second half, we’ll go through many different techniques that use video feedback, from simple “trails” effects to more complex reaction-diffusion-like patterns!
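
For intuition about image frequencies, here is a toy C++ sketch (again, not Vsynth code) comparing a 3x3 box-blur kernel, which passes low frequencies (smooth areas), with a Laplacian-style kernel, which responds only to high frequencies such as edges:

```cpp
// Low-pass vs high-pass 3x3 convolutions, evaluated at one pixel of a hard edge.
#include <cstdio>

const int W = 5, H = 5;

float convolveAt(const float img[H][W], const float k[3][3], int y, int x) {
    float sum = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int yy = y + dy, xx = x + dx;
            if (yy < 0 || yy >= H || xx < 0 || xx >= W) continue;  // zero padding
            sum += img[yy][xx] * k[dy + 1][dx + 1];
        }
    return sum;
}

int main() {
    // A hard vertical edge: dark left half, bright right half
    float img[H][W] = {};
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            img[y][x] = (x >= W / 2) ? 1.0f : 0.0f;

    const float f = 1.0f / 9.0f;
    float lowpass[3][3]  = { {f, f, f}, {f, f, f}, {f, f, f} };      // box blur
    float highpass[3][3] = { {0, -1, 0}, {-1, 4, -1}, {0, -1, 0} };  // Laplacian

    // The blur softens the edge; the Laplacian is nonzero only near it
    std::printf("low-pass at edge:  %.2f\n", convolveAt(img, lowpass, 2, 2));
    std::printf("high-pass at edge: %.2f\n", convolveAt(img, highpass, 2, 2));
    return 0;
}
```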

Workshop 4:

  • Working with scenes and external controllers (audio, MIDI, Arduino).

  • In this final workshop we’ll see how to bundle several Vsynth patches/scenes with presets into a single file for live situations. We’ll also export a patch as a Max for Live device and go in depth into “external control” in order to successfully control Vsynth parameters with audio, MIDI or even an Arduino.

Requirements

  • Intermediate knowledge of Max and Jitter

  • Have the latest Max 8 installed

  • Basic knowledge of audio synthesis and/or computer graphics would be useful

About the workshop leader

Kevin Kripper (Buenos Aires, 1991) is a visual artist and indie software developer. He has worked on projects that link art, technology, education and toolmaking, which have been exhibited and awarded at various art and science festivals. Since 2012 he has been dedicated to creating digital tools that extend the creative possibilities of visual artists and musicians from all over the world.
