BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Music Hackspace - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://musichackspace.org
X-WR-CALDESC:Events for Music Hackspace
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:20190331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20191027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:20200329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20201025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:20210328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20211031T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20221030T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20210731T180000
DTEND;TZID=Europe/London:20210731T200000
DTSTAMP:20260417T235807Z
CREATED:20210628T121943Z
LAST-MODIFIED:20210728T153107Z
UID:10000985-1627754400-1627761600@musichackspace.org
SUMMARY:TouchDesigner meetup - LIVE Session / 31st July
DESCRIPTION:Date & Time: Saturday 31st July 4pm UK / 5pm Berlin / 8am LA / 11am NYC \nLevel: Open to all levels \nMeetups are a great way to meet and be inspired by the TouchDesigner community. \nWhat to expect?  \nThe meetup runs via Zoom. The main session will be two hours in length with an additional hour open to the community for collaboration and sharing in breakout rooms. \nThis session focuses on Shaders and will feature presentations from TouchDesigner experts:  \nJosef Luis Pelz – Real-time cloth simulation in TD with GLSL \nWe’ll have a look at a versatile real-time cloth simulation. Besides showing some example results\, I’ll try to briefly explain how the system works and how I developed it step by step. It involves pixel\, vertex and compute shaders. \nJosef is a creative coder\, generative art enthusiast and mathematician living and working in Berlin. With a background in mathematics and computer science\, he links his passion for creative problem solving with aesthetics. \nFor more info check out: instagram.com/josefluispelz/ https://twitter.com/JosefPelz https://josefluispelz.com/ \nLouise Lessél – Shader conversions from TouchDesigner to Raspberry Pi \nLouise will present an asset she created that allows you to quickly convert Shadertoy shaders to use in your TouchDesigner projects\, and to go one step further and use the shader in your Raspberry Pi projects by using Pi3D\, to run either screens or LED HUB75 matrices. The asset is available in the TouchDesigner community assets. \nLouise Lessél is a Danish New Media artist and Creative Technologist based in New York. She creates digital projections and interactive light installations based on scientific facts and data input\, often exploring the limits of the human perceptual system or raising ecological awareness. 
\nFor more info check out: Instagram: @louiselessel \nTorin Blankensmith & Peter Whidden – Interactive 3D Shaders with Shader Park and TouchDesigner \nTorin and Peter will be showcasing their new plugin which allows you to use Shader Park within TouchDesigner to quickly script interactive shaders. Explore 3D shader programming through a JavaScript interface without the complexity of GLSL. Shader Park is an open source project for creating real-time graphics and procedural animations. Follow along through multiple examples using a live code editor. Expand upon the examples and bring them into TouchDesigner to create your own interactive graphics. Browse the Shader Park community’s gallery where you can fork other people’s creations or feature your own. \nTorin is a freelance creative technologist\, teacher\, and real-time graphics artist focusing on mixed reality installations and interactive experiences. He currently works at Studio Elsewhere creating restorative immersive environments in collaboration with neuroscientists focused on patient and medical workers’ well-being. \nPeter is a creative software engineer whose work spans physics\, astronomy\, machine learning\, and computer graphics. He currently works at the NY Times R&D lab focused on emerging computer vision and graphics techniques. \nFor more info check out: Instagram: @blankensmithing\, @peterwhidden / Twitter: @tblankensmith \n  \nFollowing these presentations\, breakout rooms are created where you can:  \n\nTalk to the presenters and ask questions\nJoin a room on topics of your choice\nShow other participants your projects\, ask for help\, or help others out\nCollaborate with others \nMeet peers in the chill-out breakout room\n\n  \nRequirements  \n\n\nA computer and internet connection \n\n\nA Zoom account \n\n\n  \nBerlin Code of Conduct \nWe ask all participants to read and follow the Berlin Code of Conduct and contribute to creating a welcoming environment for everyone. \n Supported by \n 
URL:https://musichackspace.org/event/touchdesigner-meetup-31st-july/
LOCATION:Online
CATEGORIES:Interactive video,Meet Ups,TouchDesigner,Video,Workshops
ATTACH;FMTTYPE=image/webp:https://musichackspace.org/wp-content/uploads/2021/06/TD-July-meetup-updated.001-scaled.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20210617T180000
DTEND;TZID=Europe/London:20210617T200000
DTSTAMP:20260417T235807Z
CREATED:20210602T094841Z
LAST-MODIFIED:20210914T095003Z
UID:10000858-1623952800-1623960000@musichackspace.org
SUMMARY:Natural Machines with Dan Tepfer - LIVESTREAM
DESCRIPTION:Date & Time: Thursday 17th June 2021 6pm UK / 7pm Berlin / 10am LA / 1pm NYC \nIn this live stream we’ll talk with Dan Tepfer and hear more about his project Natural Machines. \nIn an age of unprecedented technological advancement\, Dan Tepfer is changing the definition of what a musical instrument can be. Featured in an NPR documentary viewed by 1.5 million people\, Dan Tepfer shows his pioneering skill in this concert by programming a Yamaha Disklavier to respond in real time to the music he improvises at the piano while another computer program turns the music into stunning animated visual art. Called “fascinating and ingenious” by Rolling Stone\, the Natural Machines performance lives at a deeply unique intersection of mechanical and organic processes\, making it “more than a solo piano album… a multimedia piece of contemporary art so well made in its process and components and expressed by such a thoughtful\, talented\, evocative pianist… that it becomes a complete experience” (NextBop). \nMusic Hackspace YouTube \nOverview of speaker \nDan Tepfer is a French-American jazz pianist and composer. \nOne of his generation’s extraordinary talents\, Dan Tepfer has earned an international reputation as a pianist-composer of wide-ranging ambition\, individuality\, and drive—one “who refuses to set himself limits” (France’s Télérama). The New York City-based Tepfer\, born in 1982 in Paris to American parents\, has performed around the world with some of the leading lights in jazz and classical music\, and released ten albums of his own. \nTepfer earned global acclaim for his 2011 release Goldberg Variations / Variations\, a disc that sees him performing J.S. Bach’s masterpiece as well as improvising upon it—to “elegant\, thoughtful and thrilling” effect (New York magazine). 
Tepfer’s newest album\, Natural Machines\, stands as one of his most ingeniously forward-minded yet\, finding him exploring in real time the intersection between science and art\, coding and improvisation\, digital algorithms and the rhythms of the heart. The New York Times has called him “a deeply rational improviser drawn to the unknown.” \nTepfer’s honors include first prizes at the 2006 Montreux Jazz Festival Solo Piano Competition\, the 2006 East Coast Jazz Festival Competition\, and the 2007 American Pianists Association Jazz Piano Competition\, as well as fellowships from the American Academy of Arts and Letters (2014)\, the MacDowell Colony (2016)\, and the Fondation BNP-Paribas (2018).
URL:https://musichackspace.org/event/natural-machines-with-dan-tepfer-livestream/
LOCATION:Online
CATEGORIES:Free event,Live-stream,Video
ATTACH;FMTTYPE=image/webp:https://musichackspace.org/wp-content/uploads/2021/06/Dan-Tepfer-livestream-draft-image.001-scaled.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20210409T180000
DTEND;TZID=Europe/London:20210409T200000
DTSTAMP:20260417T235807Z
CREATED:20210303T101719Z
LAST-MODIFIED:20210715T170328Z
UID:10000836-1617991200-1617998400@musichackspace.org
SUMMARY:Introduction to beat detection and audio-reactive visuals in TouchDesigner - On demand
DESCRIPTION:Level: Beginner \nTouchDesigner is a powerful tool for creating live performances\, installations\, real-time visuals and complex digital systems. In this workshop you’ll learn the basic functioning of three node types: how to use them to analyse audio\, how to use the data to manipulate graphics\, and how to organize and navigate your TouchDesigner network. \nSession Learning Outcomes \nBy the end of this session a successful student will be able to: \n\n\nInput audio into TouchDesigner \n\n\nExtract relevant data from input sources \n\n\nUse data to manipulate graphics \n\n\nCreate simple generative visuals \n\n\nNavigate the TouchDesigner network \n\n\nSession Study Topics \n\n\nAudio input sources \n\n\nBeat detection (frequency analysis\, timeslicing etc.) \n\n\nCreation and manipulation of generative visuals \n\n\nNetwork organisation \n\n\n\n \nRequirements \n\n\nA computer with internet connection \n\n\nA web cam and mic \n\n\nA three-button mouse\, or an Apple trackpad configured appropriately \n\n\nTouchDesigner (free version suffices https://derivative.ca/download) \n\n\nIf you’re on a Mac\, please check TouchDesigner can run on your system (i.e. has basic GPU requirements such as Intel HD4000 or better) \n\n\nAbout the workshop leader  \nBileam Tschepe aka elekktronaut is a Berlin-based artist and educator who creates audio-reactive\, interactive and organic digital artworks\, systems and installations in TouchDesigner\, collaborating with and teaching people worldwide.
URL:https://musichackspace.org/event/introduction-to-beat-detection-and-audio-reactive-visuals-in-touchdesigner-live-session/
LOCATION:Online
CATEGORIES:Music software,Software Classes,Video,Workshops
ATTACH;FMTTYPE=image/webp:https://musichackspace.org/wp-content/uploads/2021/03/Bileam-recording-thumbnail.001-scaled.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20210209T180000
DTEND;TZID=Europe/London:20210209T200000
DTSTAMP:20260417T235807Z
CREATED:20210111T091621Z
LAST-MODIFIED:20210718T151129Z
UID:10000814-1612893600-1612900800@musichackspace.org
SUMMARY:Immersive AV Composition - On demand / 2 Sessions
DESCRIPTION:Level: Advanced \nThese workshops will introduce you to the ImmersAV toolkit. The toolkit brings together Csound and OpenGL shaders to provide a native C++ environment where you can create abstract audiovisual art. You will learn how to generate material and map parameters using ImmersAV’s Studio() class. You will also learn how to render your work on a SteamVR compatible headset using OpenVR. Your fully immersive creations will then become interactive using integrated machine learning through the rapidLib library. \nSession Learning Outcomes \nBy the end of this session a successful student will be able to: \n\n\nSet up and use the ImmersAV toolkit \n\n\nDiscuss techniques for rendering material on VR headsets \n\n\nImplement the Csound API within a C++ application \n\n\nCreate mixed raymarched and raster-based graphics \n\n\nCreate an interactive visual scene using a single fragment shader \n\n\nGenerate the Mandelbulb fractal \n\n\nGenerate procedural audio using Csound \n\n\nMap controller position and rotation to audiovisual parameters using machine learning \n\n\nSession Study Topics \n\n\nNative C++ development for VR \n\n\nVR rendering techniques \n\n\nCsound API integration \n\n\nReal-time graphics rendering techniques \n\n\nGLSL shaders \n\n\n3D fractals \n\n\nAudio synthesis \n\n\nMachine learning \n\n\n\n \nRequirements \n\n\nA computer and internet connection \n\n\nA web cam and mic \n\n\nA Zoom account \n\n\nCloned copy of the ImmersAV toolkit plus dependencies \n\n\nVR headset capable of connecting to SteamVR \n\n\n \nAbout the workshop leader  \nBryan Dunphy is an audiovisual composer\, musician and researcher interested in generative approaches to creating audiovisual art in performance and immersive contexts. His work explores the interaction of abstract visual shapes\, textures and synthesised sounds. He is interested in exploring strategies for creating\, mapping and controlling audiovisual material in real time. 
He has recently completed his PhD in Arts and Computational Technology at Goldsmiths\, University of London.
URL:https://musichackspace.org/event/immersive-av-composition-live-session-2-sessions/
LOCATION:Online
CATEGORIES:Software Classes,Video,Workshops
ATTACH;FMTTYPE=image/webp:https://musichackspace.org/wp-content/uploads/2021/01/Bryan-Recording-Thumnail-Immersive-AV.001.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20210204T180000
DTEND;TZID=Europe/London:20210204T200000
DTSTAMP:20260417T235807Z
CREATED:20201230T132117Z
LAST-MODIFIED:20210720T094729Z
UID:10000795-1612461600-1612468800@musichackspace.org
SUMMARY:Video Synthesis with Vsynth for Max - LIVE Session
DESCRIPTION:Dates: Thursdays 4th / 11th / 18th / 25th February 2021 6pm GMT \nLevel: Intermediate + \nOverview \nIn this series of 4 workshops\, we’ll look at how to interconnect the 80 different modules that come with Vsynth\, exploring video techniques and practices that can create aesthetics associated with the history of the electronic image but also complex patterns founded in some basic functions of nature. \nVsynth is a high-level package of modules for Max/Jitter that together make a modular video synthesizer. Its simplicity makes it the perfect tool to introduce yourself to video synthesis and image processing. Since it can be connected to other parts of Max\, as well as other software and hardware\, it can also become a really powerful and adaptable video tool for any kind of job. \nHere’s what you’ll learn in each workshop: \nWorkshop 1: \nLearn the fundamentals of digital video-synthesis by diving into the different video oscillators\, noise generators\, mixers\, colorizers and keyers. By the end of this session students will be able to build simple custom video-synth patches with presets. \n\nVideo oscillators\, mixers\, colorizers.\n\nWorkshop 2:  \n\n\nModulations (phase\, frequency\, pulse\, hue\, among others). \n\n\nIn this workshop we will focus on the concept of modulation so that students can add another level of complexity to their patches. We’ll see the differences between modulating parameters of an image with simple LFOs or with other images. Some of the modulations we’ll cover are Phase\, Frequency\, Pulse Width\, Brightness & HUE. \nWorkshop 3: \n\n\nFilters/convolutions and video feedback techniques. \n\n\nThis 3rd workshop is divided in two. In the first half\, we’ll look in depth at what low or high frequencies actually mean in the image world. We’ll then use low-pass and high-pass filters/convolutions in different scenarios to see how they affect different images. 
\n\n\nIn the second half\, we’ll go through a lot of different techniques that use the process of video feedback. From simple “trails” effects to more complex reaction-diffusion-like patterns! \n\n\nWorkshop 4: \n\n\nWorking with scenes and external controllers (audio\, MIDI\, Arduino). \n\n\nIn this final workshop we’ll see how to bundle into just one file several Vsynth patches/scenes with presets for live situations. We’ll also export a patch as a Max for Live device and go in depth into “external control” in order to successfully control Vsynth parameters with audio\, MIDI\, or even an Arduino. \n\n\n\n \nRequirements \n\n\nIntermediate knowledge of Max and Jitter \n\n\nHave latest Max 8 installed \n\n\nBasic knowledge of audio-synthesis and/or computer graphics would be useful \n\n\nAbout the workshop leader \nKevin Kripper (Buenos Aires\, 1991) is a visual artist and indie software developer. He’s worked on projects that link art\, technology\, education and toolmaking\, which have been exhibited and awarded in different art and science festivals. Since 2012 he’s been dedicated to creating digital tools that extend the creative possibilities of visual artists and musicians from all over the world.
URL:https://musichackspace.org/event/video-synthesis-with-vsynth-for-max-live-session/
LOCATION:Online
CATEGORIES:Live-stream,Max,Music software,Software Classes,Video,Workshops
ATTACH;FMTTYPE=image/webp:https://musichackspace.org/wp-content/uploads/2020/12/Kevin-Kripper-Thumbnail.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20210202T180000
DTEND;TZID=Europe/London:20210202T200000
DTSTAMP:20260417T235807Z
CREATED:20210111T085822Z
LAST-MODIFIED:20210720T095200Z
UID:10000810-1612288800-1612296000@musichackspace.org
SUMMARY:Visual Music Performance with Machine Learning - On demand
DESCRIPTION:Level: Intermediate \nIn this workshop you will use openFrameworks to build a real-time audiovisual instrument. You will generate dynamic abstract visuals within openFrameworks and procedural audio using the ofxMaxim addon. You will then learn how to control the audiovisual material by mapping controller input to audio and visual parameters using the ofxRapidLib addon. \nSession Learning Outcomes \nBy the end of this session a successful student will be able to: \n\n\nCreate generative visual art in openFrameworks \n\n\nCreate procedural audio in openFrameworks using ofxMaxim \n\n\nDiscuss interactive machine learning techniques \n\n\nUse a neural network to control audiovisual parameters simultaneously in real-time \n\n\nSession Study Topics \n\n\n3D primitives and Perlin noise \n\n\nFM synthesis \n\n\nRegression analysis using multilayer perceptron neural networks \n\n\nReal-time controller integration \n\n\n\n \nRequirements \n\n\nA computer and internet connection \n\n\nA web cam and mic \n\n\nA Zoom account \n\n\nInstalled version of openFrameworks \n\n\nDownloaded addons ofxMaxim\, ofxRapidLib \n\n\nAccess to MIDI/OSC controller (optional – mouse/trackpad will also suffice) \n\n\n \nAbout the workshop leader  \nBryan Dunphy is an audiovisual composer\, musician and researcher interested in generative approaches to creating audiovisual art. His work explores the interaction of abstract visual shapes\, textures and synthesised sounds. He is interested in exploring strategies for creating\, mapping and controlling audiovisual material in real time. He has recently completed his PhD in Arts and Computational Technology at Goldsmiths\, University of London.
URL:https://musichackspace.org/event/visual-music-performance-with-machine-learning-live-session/
LOCATION:Online
CATEGORIES:Software Classes,Video,Workshops
ATTACH;FMTTYPE=image/webp:https://musichackspace.org/wp-content/uploads/2021/01/Bryan-Thumbnail.001-1-scaled.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20210115T180000
DTEND;TZID=Europe/London:20210115T200000
DTSTAMP:20260417T235807Z
CREATED:20201209T145603Z
LAST-MODIFIED:20210204T130137Z
UID:10000910-1610733600-1610740800@musichackspace.org
SUMMARY:Jitter in Ableton - On demand
DESCRIPTION:Level: Intermediate \nCycling 74’s Jitter offers a vast playground of programming opportunities to create your own visual devices. In this workshop you will build your own visual device that utilizes audio signals to manipulate imagery. This workshop aims to provide you with suitable skills to begin exploring the Jitter environment. \nSession Learning Outcomes \nBy the end of this session a successful student will be able to: \n\n\nDiscover the basics of the Jitter framework. \n\n\nExplore options for analysing audio signals to gather data to control visuals \n\n\nDeploy objects suitable for making visuals \n\n\nApply data acquired from audio signals to control visuals. \n\n\nApply UI elements and save into a Max for Live device. \n\n\nSession Study Topics \n\nThe Jitter landscape\nAudio analysis\nUsing visual objects\nControl visual objects\nUI & M4L devices\n\n\n\n\n\n \nRequirements \n\n\nA computer and internet connection \n\n\nA good working knowledge of computer systems \n\n\nA basic awareness of audio processing \n\n\nGood familiarity with MSP \n\n\nAccess to a copy of Max 8 (i.e. trial or full license) or Live Suite (M4L) \n\n\nAbout the workshop leader \nNed Rush aka Duncan Wilson is a musician\, producer and performer. He’s best known for his YouTube channel\, which features a rich and vast quantity of videos including tutorials\, software development\, visual art\, sound design\, internet comedy\, and of course music.
URL:https://musichackspace.org/event/jitter-in-ableton-live-session/
LOCATION:Online
CATEGORIES:Max,Music software,Video
ATTACH;FMTTYPE=image/webp:https://musichackspace.org/wp-content/uploads/2020/12/Ned-Rush-Jan-Recording-thumbnails.002-scaled.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20210111T180000
DTEND;TZID=Europe/London:20210111T200000
DTSTAMP:20260417T235807Z
CREATED:20201209T112447Z
LAST-MODIFIED:20201209T112447Z
UID:10000906-1610388000-1610395200@musichackspace.org
SUMMARY:Interactive video with Jitter - LIVE Session
DESCRIPTION:Dates:  \nSession 1 – Monday 11th January 6pm – 8pm GMT \nSession 2 – Monday 18th January 6pm – 8pm GMT \nSession 3 – Monday 25th January 6pm – 8pm GMT \nSession 4 – Monday 1st February 6pm – 8pm GMT \nLevel: Beginner with a basic knowledge of Max \nWhat you will learn \nThe workshop is aimed at anyone who would like to learn how to create interactive visuals with the software Max/MSP/Jitter from Cycling74. \nYou definitely don’t need to be a Max guru to take part in the workshop\, although a basic knowledge of the program is required. \nWe will start from the very basics of working with videos and images in Max\, learning how to import footage and a live camera stream\, and how those can be processed in the software. We will then see how to unlock full control over visual manipulation and analysis using the GEN environment\, which allows us to work on video and images at the pixel level. \nWe will finally proceed to introduce the OpenGL implementation inside Max\, with which we can create 3D graphics and visually satisfying post-processing effects. \nA major focus of the workshop will be to include interaction in our patches. \nWe will use audio streams to control and modify the parameters of our visuals\, as well as other interactive inputs like the camera video stream. \nAt the end of the workshop we will have a showreel of the works created by the participants during the four weeks. 
\n\n \nRequirements \n\nA computer and internet connection\n\nTopics \nSession 1 \n\nWorkshop leader and participant introductions.\nShowreel on what can be achieved using Max/Jitter for visuals.\nStarting with video in Max: read and play a movie.\nOpen the webcam video-stream inside Max.\nIntroduction to the Jitter Matrix.\nCreate images by filling pixels algorithmically.\nCreate images using random and noise generators.\nExplanation of Perlin noise.\nDrive video-effects using an audio stream.\n\nSession 2: \n\nIntroduction to OpenGL in Max.\nApply materials to 3D shapes.\nExplanation of light and color in GL in Max.\nControl 3D shape parameters using an audio stream.\nIntroducing Textures.\nIntroduction to [jit.gen].\nReviewing simple trigonometry concepts.\nExplanation of vectors.\nCreate large numbers of 3D shapes using [jit.gen] and [jit.gl.multiple].\n\nSession 3:   \n\nAnimate arrays of 3D shapes using [jit.gen] and input streams.\nCreate a simple particle system with [jit.gen].\nProcedurally create 2D/3D shapes using [jit.gen] and [jit.gl.mesh].\nAnimate procedural shapes using an audio stream.\nCreate procedural textures using [jit.gl.pix].\n\nSession 4:   \n\nDevelop audio-reactive visuals using the concepts seen in the previous sessions.\nCapture the 3D scene using [jit.gl.node].\nApply post-processing effects to the scene.\nIntroduction to [jit.gl.pass].\nPerformance considerations on visuals in Max/Jitter.\nConclusion.\n\nAbout the workshop leader \nFederico Foderaro is an audiovisual composer\, teacher and designer for interactive multimedia installations\, author of the YouTube channel Amazing Max Stuff.
URL:https://musichackspace.org/event/interactive-video-with-jitter-live-session/
LOCATION:Online
CATEGORIES:Max,Video,Workshops
ATTACH;FMTTYPE=image/webp:https://musichackspace.org/wp-content/uploads/2020/12/Federico-Jitter-Jan-Series.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20201020T180000
DTEND;TZID=Europe/London:20201020T200000
DTSTAMP:20260417T235807Z
CREATED:20201008T144628Z
LAST-MODIFIED:20210720T111533Z
UID:10000760-1603216800-1603224000@musichackspace.org
SUMMARY:Learn to program amazing interactive particle systems with Jitter
DESCRIPTION:In this workshop\, you will learn to build incredible live videos with particle systems\, using Max and Jitter. \nCycling’74 has recently released GL3\, which ties Jitter more closely to OpenGL and optimises use of the GPU. With this recent update available in the package manager\, you can build high-performance visuals without having to code them in C++. \n \nRequirements \n\nLatest version of Max 8 installed on Mac or Windows\nA good working knowledge of Max is expected\nUnderstanding of how the GEN environment works in Jitter\nSome familiarity with textual programming languages\nA knowledge of basic calculus is a bonus\nThe GL3 package installed\nTo install this package open the “Package Manager” from within Max\, look for the GL3 package and click “install”.\n\nWhat you will learn \nSession 1\, 20th October\, 6pm UK / 10am PDT / 1pm EST: \n– Introduction to GL3 features \n– Quick overview of most of the examples in the GL3 package \n– Build a simple particle system from scratch \n– Explorations with gravity/wind \n– Exploration with target attraction \nSession 2\, 27th October\, 6pm UK / 10am PDT / 1pm EST: \n– Improve the particle system with a billboard rendering shader \n– Creation of a “snow” or “falling leaves”-like effect \n– Starting to introduce interactivity in the system \n– Using the camera input \n– Connecting sound to your patches \nSession 3\, 3rd November\, 6pm UK / 10am PDT / 1pm EST: \n– Improve the system interactivity \n– Particles emitting from an object/person outline taken from the camera \n– Create a particle system using 3D models and the instancing technique \n– Transforming an image or a video stream into particles \nSession 4\, 10th November\, 6pm UK / 10am PDT / 1pm EST: \n– Introduction to flocking behaviours and how to achieve them in GL3 \n– Create a 3D generative landscape and modify it using the techniques from previous sessions \n– Apply post-processing effects \n\nAbout the workshop leader: \nFederico Foderaro 
is an audiovisual composer\, teacher and designer for interactive multimedia installations\, author of the YouTube channel Amazing Max Stuff.\nHe graduated cum laude in Electroacoustic Musical Composition at the Licinio Refice Conservatory in Frosinone\, and has lived and worked in Berlin since 2016. \nHis main interest is the creation of audiovisual works and fragments\, where the technical research is deeply linked with the artistic output.\nThe main tool used in his production is the software Max/MSP from Cycling74\, which allows for real-time programming and execution of both audio and video\, and represents a perfect mix between problem-solving and artistic expression. \nBesides his artistic work\, Federico teaches the software Max/MSP\, both online and in workshops in different venues. The creation of commercial audio-visual interactive installations is also a big part of his work life\, having led over the years to satisfying collaborations and professional achievements.
URL:https://musichackspace.org/event/learn-to-program-amazing-interactive-particles-systems-with-jitter/
LOCATION:Online
CATEGORIES:Max,Video,Workshops
ATTACH;FMTTYPE=image/webp:https://musichackspace.org/wp-content/uploads/2020/10/flyer.001-1-e1602163152543.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20200915T180000
DTEND;TZID=Europe/London:20200915T200000
DTSTAMP:20260417T235807Z
CREATED:20200831T114822Z
LAST-MODIFIED:20210720T115043Z
UID:10000749-1600192800-1600200000@musichackspace.org
SUMMARY:Video synthesis with Vsynth workshop
DESCRIPTION:Level: Intermediate \nIn this series of four 2-hour workshops\, Kevin Kripper\, the author of Vsynth\, explains how to interconnect the 80 different modules that come with Vsynth\, exploring video techniques and practices that can create aesthetics associated with the history of the electronic image but also complex patterns founded in some basic functions of nature. \nHere’s what you’ll learn in each workshop: \nLesson 1: video oscillators\, mixers\, colorizers. \nLesson 2: modulations (pm\, fm\, pwm\, hue\, among others). \nLesson 3: filters/convolutions and video feedback techniques. \nLesson 4: working with presets\, scenes\, audio and MIDI. \nVsynth is a high-level package of modules for Max/Jitter that together make a modular video synthesizer. Its simplicity makes it the perfect tool to introduce yourself to video synthesis and image processing. Since it can be connected to other parts of Max\, as well as other software and hardware\, it can also become a really powerful and adaptable video tool for any kind of job. \n\n \nRequirements \n\nBasic knowledge of Max and Jitter\nHave Max 8 installed\nFamiliarity with audio-synthesis or computer graphics would be useful.\n\nAbout the workshop leader \nKevin Kripper (Buenos Aires\, 1991) is a visual artist and indie software developer. He’s worked on several projects that link art\, technology\, education and toolmaking\, which have been exhibited at festivals such as +CODE\, Innovar\, Wrong Biennale\, MUTEK\, among others. In 2016 he won first place at the Itaú Visual Arts Award with his work Deconstrucento. In addition\, since 2012 he’s been dedicated to creating digital tools that extend the creative possibilities of visual artists and musicians from all over the world. During 2017\, he participated in the Toolmaker residency at Signal Culture (Owego\, NY) and in 2018 received a mention in the Technology applied to Art category from the ArCiTec Award for the development of Vsynth. 
\nhttps://www.instagram.com/vsynth74/ \nhttps://cycling74.com/articles/an-interview-with-kevin-kripper
URL:https://musichackspace.org/event/video-synthesis-with-vsynth-workshop/
LOCATION:Online
CATEGORIES:Max,Software Classes,Video,Workshops
ATTACH;FMTTYPE=image/webp:https://musichackspace.org/wp-content/uploads/2020/08/Recorded-Workshop-thumbnails-14.12.2020.002.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20200722T180000
DTEND;TZID=Europe/London:20200729T200000
DTSTAMP:20260417T235807Z
CREATED:20200623T085839Z
LAST-MODIFIED:20200730T144559Z
UID:10000737-1595440800-1596052800@musichackspace.org
SUMMARY:Video synthesis with Vsynth for Max\, with Kevin Kripper
DESCRIPTION:Level: Intermediate \nIn this series of 4 workshops\, we’ll look at how to interconnect the 80 different modules that come with Vsynth\, exploring video techniques and practices that can create aesthetics associated with the history of the electronic image but also complex patterns founded in some basic functions of nature. \nHere’s what you’ll learn in each workshop: \n8th July: video oscillators\, mixers\, colorizers. \n15th July: modulations (pm\, fm\, pwm\, hue\, among others). \n22nd July: filters/convolutions and video feedback techniques. \n29th July: working with presets\, scenes\, audio and MIDI. \nBook tickets for the series of 4 at a special price or book each individually. \nVsynth is a high-level package of modules for Max/Jitter that together make a modular video synthesizer. Its simplicity makes it the perfect tool to introduce yourself to video synthesis and image processing. Since it can be connected to other parts of Max\, as well as other software and hardware\, it can also become a really powerful and adaptable video tool for any kind of job.  \nRequirements \n\nBasic knowledge of Max and Jitter\nHave Max 8 installed\nFamiliarity with audio-synthesis or computer graphics would be useful.\n\nAbout the workshop leader \nKevin Kripper (Buenos Aires\, 1991) is a visual artist and indie software developer. He’s worked on several projects that link art\, technology\, education and toolmaking\, which have been exhibited at festivals such as +CODE\, Innovar\, Wrong Biennale\, MUTEK\, among others. In 2016 he won first place at the Itaú Visual Arts Award with his work Deconstrucento. In addition\, since 2012 he’s been dedicated to creating digital tools that extend the creative possibilities of visual artists and musicians from all over the world. During 2017\, he participated in the Toolmaker residency at Signal Culture (Owego\, NY) and in 2018 received a mention in the Technology applied to Art category from the ArCiTec Award for the development of Vsynth. 
\nhttps://www.instagram.com/vsynth74/ \nhttps://cycling74.com/articles/an-interview-with-kevin-kripper
URL:https://musichackspace.org/event/video-synthesis-with-vsynth-for-max-with-kevin-kripper/
LOCATION:Online
CATEGORIES:Max,Video,Workshops
ATTACH;FMTTYPE=image/webp:https://musichackspace.org/wp-content/uploads/2020/06/workshop-flyer.001-e1592836867955.webp
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20200629T183000
DTEND;TZID=Europe/London:20200629T200000
DTSTAMP:20260417T235807
CREATED:20200622T145906Z
LAST-MODIFIED:20200629T161049Z
UID:10000837-1593455400-1593460800@musichackspace.org
SUMMARY:Kevin Kripper: Video synthesis with Vsynth & Max
DESCRIPTION:Vsynth is a high-level package of modules for Max/Jitter that together make a modular video synthesizer. Its simplicity makes it the perfect tool for an introduction to video synthesis and image processing. Since it can be connected to other parts of Max\, as well as other software and hardware\, it can also become a really powerful and adaptable video tool for any kind of job. \nIn this live-stream\, Kevin will give an overview of Vsynth with examples and practical tips to get you started or to go deeper into the creation of visuals. Kevin will also host a series of 4 workshops throughout July that you can book here. We highly recommend watching the live-stream if you’re thinking of attending the workshops! \nVsynth is among the most popular third-party modules for Max\, with over 13\,000 downloads. \nKevin Kripper (Buenos Aires\, 1991) is a visual artist and indie software developer. He’s worked on several projects linking art\, technology\, education and toolmaking\, which have been exhibited at festivals such as +CODE\, Innovar\, Wrong Biennale and MUTEK\, among others. In 2016 he won first place at the Itaú Visual Arts Award with his work Deconstrucento. In addition\, since 2012 he’s been dedicated to creating digital tools that extend the creative possibilities of visual artists and musicians from all over the world. During 2017 he participated in the Toolmaker residency at Signal Culture (Owego\, NY)\, and in 2018 he received a mention in the Technology Applied to Art category of the ArCiTec Award for the development of Vsynth. \nhttps://www.instagram.com/vsynth74/ \nhttps://cycling74.com/articles/an-interview-with-kevin-kripper
URL:https://musichackspace.org/event/kevin-kripper-video-synthesis-with-vsynth-max/
LOCATION:YouTube and Facebook
CATEGORIES:Live-stream,Max,Video
ATTACH;FMTTYPE=image/webp:https://musichackspace.org/wp-content/uploads/2020/06/flyer.001-5-e1592839783534.webp
END:VEVENT
END:VCALENDAR