Book Review: SuperCollider for the Creative Musician

Dom Aversano


Several years ago a professor of electronic music at a London university advised me not to learn SuperCollider, as it was ‘too much of a headache’, and said it would be better just to learn Max. I nevertheless took a weekend course, but not long after, my enthusiasm for the language petered out. I did not have the time to devote to learning it, and was put off by SuperCollider’s patchy documentation. It felt like a programming language for experienced programmers more than an approachable tool for musicians and composers. So instead I learned Pure Data, working with it until I reached a point where my ideas diverged from anything that resembled patching cords. At that point, I knew I needed to give SuperCollider a second chance.

A lot had changed in the intervening years, not least the emergence of Eli Fieldsteel’s excellent YouTube tutorials. Eli did for SuperCollider what Daniel Shiffman did for Processing/p5.js, making the language accessible and approachable to someone with no previous programming experience. Just read the comments on Eli’s videos and you’ll find glowing praise for their clarity and organisation. This might not come as a complete surprise, as he is an associate professor of composition at the University of Illinois. In addition to his teaching abilities, Eli’s sound design and composition skills are right up there. His tutorial example code involves usable sounds, rather than simply abstract archetypes of various synthesis and sampling techniques. When I heard Eli was publishing a book, I was excited to experience his teaching practice through a new medium, and curious to know how he would approach it.

The title of the book, ‘SuperCollider for the Creative Musician: A Practical Guide’, does not give a great deal away, and is somewhat tautological. The book is divided into three sections: Fundamentals, Creative Techniques, and Large-Scale Projects.

The Fundamentals section is the best-written introduction to the language yet. The language is broken down into its elements and explained with clarity and precision, making it perfectly suited to a beginner, or as a refresher for those who have not used the language in a while. In a sense, this section represents the concise manual SuperCollider has always lacked. For more experienced programmers, it might clarify the basics but will not present any real challenge or introduce new ideas.

The second section, Creative Techniques, is more advanced. Familiar topics like synthesis, sampling, and sequencing are covered, as well as more neglected topics such as GUI design. There are plenty of diagrams, code examples, and helpful tips that will help anyone improve their sound design and programming skills. The code is clear, readable, and well-crafted, in a manner that encourages a structured and interactive form of learning and makes this a good reference book. At this point, the book could have dissolved into vagueness and structural incoherence, but it holds together sharply.

The final section, Large-Scale Projects, is the most esoteric and advanced. Its focus is project designs that are event-based, state-based, or live-coded. Here Eli steps into a more philosophical and compositional terrain, showcasing the possibilities that coding environments offer, such as non-linear and generative composition. This short and dense section covers the topics well, providing insights into Eli’s idiosyncratic approach to coding and composition.

Overall, it is an excellent book that every SuperCollider user should own. It is clearer and more focused than The SuperCollider Book, which, with its multiple authors, is fascinating but less suitable for a beginner. Eli’s book makes the language feel friendlier and more approachable. The ideal would be to own both, but given a choice, I would recommend Eli’s as the best standard introduction.

My one criticism, if it is a criticism at all, is that I was hoping for something more personal to the author’s style and composing practice, whereas this is perhaps closer to a learning guide or highly sophisticated manual. Given the aforementioned gap in the SuperCollider literature, Eli was right to plug it. However, I hope this represents the first book in a series in which he delves deeper into SuperCollider and his unique approach to composition and sound design.

 

 

Eli Fieldsteel - author of SuperCollider for the Creative Musician

Click here to order a copy of SuperCollider for the Creative Musician: A Practical Guide

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work at the Liner Notes.

Can AI help us make humane and imaginative music?

Dom Aversano

There is a spectrum upon which AI music software exists. On one end are programs which create entire compositions, and on the other are programs that help people create music. In this post I will focus on the latter part of the spectrum, and ask the question, can AI help us compose and produce music in humane and imaginative ways? I will explore this question through a few different AI music tools.

Tone Transfer / Google

For decades the dominance of keyboard interaction has constrained computer music. Keyboards elegantly arrange a large number of notes but limit the control of musical parameters beyond volume and duration. Furthermore, with the idiosyncratic arrangement of a keyboard’s notes, it is hard to work — or even think — outside of the 12-note chromatic scale. Even with the welcome addition of pitch modulation wheels and microtonal pressure-sensitive keyboards such as Roli’s fascinating Seaboard, keyboards still struggle to express the nuanced pitch and amplitude modulations quintessential to many musical cultures.

For this reason, Magenta’s Tone Transfer may represent a revolutionary change in computer music interaction. It allows you to take a sound or melody from one instrument and transform it into a completely different-sounding instrument while preserving the subtleties and nuances of the original performance. A cello melody can be transformed into a trumpet melody, the sound of birdsong into fluttering flute sounds, or a sung melody converted into any of a number of traditional concert instruments. It feels like the antidote to Auto-Tune: a tool that captures the nuance, subtlety, and humanity of the voice, while offering the potential to transform it into something quite different.

In practice, the technology falls short of its ambitions. I sang a melody and transformed it into a flute sound, and while my singing ability is unlikely to threaten the reputation of Ella Fitzgerald, the flute melody that emerged sounded as though the flautist was drunk. However, given the pace at which machine learning is progressing, one can expect it to become much more sophisticated in the coming years, and I essentially regard this technology as an early prototype.

Google has admirably made the code open source and the musicians who helped train the machine learning algorithms are prominently credited for their work. You can listen to audio snippets of the machine learning process, and hear the instrument evolve in complexity after 1 hour, 3 hours, and 10 hours of learning.

It is not just Google developing this type of technology: groups like Harmonai and Neutone are doing similar things, and any one of them stands to transform computer music interaction by anchoring us back to the most universal instrument, the human voice.

Mastering / LANDR

Although understanding how mastering works is relatively straightforward, understanding how a mastering engineer perceives music and uses their technology is far from simple, since there is as much art as science to their craft. Is this, then, a process that can be devolved to AI?

That is the assumption behind LANDR’s online mastering service which allows you to upload a finished track for mastering. Once it is processed, you are given the option to choose from three style settings (Warm, Balanced, Open) and three levels of loudness (Low, Medium, High), with a master/original toggle to compare the changes made.

I uploaded a recent composition to test it. The result was an improvement on the unmastered track, but the limited options for modifying it gave the feeling of a one-size-fits-all approach, inadequate for those who intend to carefully shape their musical creations at every stage of production. However, this might not be an issue for people on lower-budget projects, or those who simply want to improve their tracks quickly for release.

In a desire to understand the AI technology, I searched for more precise details, and while the company says that ‘AI isn’t just a buzzword for us’, I could only find a quote that does little to describe how the technology actually works:

Our legendary, patented mastering algorithm thoroughly analyzes tracks and customizes the processing to create results that sound incredible on any speaker.

While LANDR’s tool is useful for quick and cheap mastering, it feels constrained and artistically unrewarding if you want something more specific. The interface also feels like it limits the potential of the technology. Why not allow text prompts such as: ‘cut the low-end rumble, brighten the high end, and apply some subtle vintage reverb and limiting’?

Fastverb / Focusrite

Unlike mastering, reverb is an effect rather than a general skill or profession, making it potentially simpler to devolve aspects of it to AI. Focusrite’s Fastverb uses AI to analyse your audio before prescribing settings based on that analysis, which you can then tweak. The company is vague about how its AI technology works, simply stating:

FAST Verb’s AI is trained on over half a million real samples, so you’ll never need to use presets again.

I used the plugin on a recent composition. The results were subtle but an improvement. I adjusted some of the settings and it sounded better. Overall, I had the impression of a tasteful reverb that would work with many styles of music.

Did the AI help significantly in arriving at the desired effect? It is hard to say. For someone with very limited experience of such tools, I would assume so; but for someone already confident with the effect, I doubt it saves much time at all.

I am aware, however, that there is potential for snobbery here. After all, if a podcaster can easily add a decent reverb to their show, or a guitarist some presence to their recording, that is no bad thing. They can, if they want, go on to learn more about these effects and fine-tune them themselves. For this reason, it represents a useful tool.

Overview

LANDR’s mastering service and Focusrite’s Fastverb are professional tools that I hope readers of this article will be tempted to try. However, while there is clearly automation at work, how the AI technology works is unclear. If the term AI is used to market tools, there should be clarification of what exactly it is; otherwise, one might as well just write ‘digital magic’. By contrast, the Tone Transfer team has made its code open source, described in detail how it uses machine learning, and credited the people involved in training the models.

I expect that the tools that attempt to speed up or improve existing processes, such as mastering and applying reverb, will have the effect of lowering the barrier to entry into audio engineering, but I have yet to see evidence it will improve it. In fact, it could degrade and homogenise audio engineering by encouraging people to work faster but with less skill and care.

By contrast, the machine learning algorithms that Google, Harmonai, Neutone, and others are working on could create meaningful change. They are not mature technologies, but there is the seed of something profound in them. The ability to completely transform the sounds of music while preserving the performance, and the potential to bring the voice to the forefront of computer music, could prove genuinely revolutionary.

Creative Riff Composition with MIDI – On-demand

Level: Beginner

The riff is by nature repetitive: you hear it again and again, it gets reinforced, and the rest of the song is built around it, as if the riff were the skeleton of the song.

This workshop aims to provide you with the necessary abilities to begin composing riffs and arranging a composition around such an important musical element.
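As a taste of two of the session’s study topics (MIDI programming, and rhythmic subdivision and polymeter), here is a minimal Python sketch, not taken from the workshop materials, of a riff expressed as MIDI note numbers; the pitch and rhythm choices are hypothetical.

```python
# A riff as a short pitch cell cycled against a rhythm cell of a
# different length (a simple polymeter). All musical choices here are
# hypothetical illustrations, not the workshop's own material.
from itertools import cycle

PITCHES = [40, 43, 45]               # E2, G2, A2: a 3-note riff cell
DURATIONS = [0.5, 0.25, 0.25, 0.5]   # a 4-step rhythm, in beats

def riff_events(n_steps):
    """Pair the 3-note pitch cycle with the 4-step rhythm cycle.

    Because 3 and 4 are coprime, the combined pattern only repeats
    every 12 steps, which keeps a short riff from sounding static.
    Returns (start_beat, midi_pitch, duration) tuples.
    """
    pitch_stream, duration_stream = cycle(PITCHES), cycle(DURATIONS)
    beat, events = 0.0, []
    for _ in range(n_steps):
        dur = next(duration_stream)
        events.append((beat, next(pitch_stream), dur))
        beat += dur
    return events

events = riff_events(12)  # one full cycle of the 3-against-4 pattern
```

Feeding these tuples into any MIDI library or a DAW piano roll reproduces the kind of reinforced repetition the session explores.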

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Apply critical listening skills to riffs & recurring motifs.

  • Extrapolate core musical qualities of a riff.

  • Construct a riff within a selected musical genre.

  • Apply arrangement techniques around a riff within a track.

Session Study Topics

  • MIDI programming

  • Rhythmic subdivision and polymeter

  • Micro fills and macro fills

  • Layering and subtractive arrangement techniques

Requirements

  • A computer and internet connection

  • A webcam and mic

  • A Zoom account

  • Access to a copy of Live Suite or Standard (i.e. trial or full license)

About the workshop leader: 

Simone Tanda is a musician, producer, multi-media artist, tech consultant, and educator.

Based between London and Berlin, he is currently creating music for his own project, as well as for multidisciplinary artists, film, and commercials.

An introduction to Flora for monome norns – On-demand

Level: Some experience of norns required

Flora is an L-systems sequencer and bandpass-filtered sawtooth engine for monome norns. In this workshop you will learn how L-system algorithms are used to produce musical sequences while exploring the script’s UI and features.
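For readers new to the idea, here is a minimal Python sketch of how an L-system generates material; this shows the general technique, not Flora’s actual code, and the rewrite rules and note mapping are hypothetical.

```python
# The classic two-symbol L-system: every symbol is rewritten in
# parallel each generation, and the growing string is read as notes.
RULES = {"A": "AB", "B": "A"}    # hypothetical rewrite rules
NOTE_MAP = {"A": 60, "B": 67}    # hypothetical symbol-to-MIDI mapping

def expand(axiom, generations):
    """Apply the rewrite rules to every symbol simultaneously."""
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

def to_notes(s):
    return [NOTE_MAP[ch] for ch in s]

# Generations grow in a self-similar way: A, AB, ABA, ABAAB, ABAABABA...
seq = to_notes(expand("A", 4))
```

This self-similar growth is what gives L-system sequences their characteristic blend of repetition and variation.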

Flora on Vimeo

By the end of the first workshop, you will be able to:

  • Navigate the Flora UI and parameters menus to build and perform your own compositions

  • Create dynamically shaped, multinodal envelopes to modulate Flora’s bandpass-filtered sawtooth engine

  • Build generative polyrhythms and delays into your compositions

  • Use crow and/or midi-enabled controllers and synthesizers to play Flora

Session Study Topics:

  • Sequencing with L-system algorithms

  • Physical modeling synthesis with bandpass filters

  • Generating multi-nodal envelopes

  • Norns integration with midi and/or crow

 

Requirements

  • A computer and internet connection

  • A norns device with Flora installed

  • Optional: A midi-enabled controller and/or synthesizer

 

We have a number of sponsorship places available, if the registration fee is a barrier to you joining the workshop please contact laura@stagingmhs.local.

 

About the workshop leader 

Jonathan Snyder is a sound explorer and educator based in Portland, Oregon.

Previously, he worked for 22 years as a design technologist, IT manager, and educator at Columbia University’s Media Center for Art History, Method, and Adobe.

Creative MIDI CC’s in Ableton Live – On-demand

If you’d like to support the Music Hackspace to continue to build a program of free workshops, a voluntary contribution would be much appreciated. 

Level: Intermediate

Ableton Live offers a vast playground of musical opportunities to create musical compositions and productions. These include techniques to deploy MIDI Control Change messages (CC’s) to manipulate and transform musical ideas. In this workshop you will creatively explore and deploy a range of MIDI CC’s manipulation tools in a musical setting. This workshop aims to provide you with suitable skills to utilise the creative possibilities of MIDI CC manipulation in the Ableton Live environment.
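For context on what is actually being mapped, a MIDI Control Change message is just three bytes on the wire. The sketch below (plain Python, nothing Live-specific) builds one; the example controller choice is illustrative.

```python
# A raw MIDI Control Change message: a status byte (0xB0 plus the
# channel), then a controller number and a value, each data byte in
# the range 0-127.
def cc_message(channel, controller, value):
    """Build a 3-byte MIDI CC message (channel 0-15, data bytes 0-127)."""
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("MIDI CC field out of range")
    return bytes([0xB0 | channel, controller, value])

msg = cc_message(0, 1, 64)  # mod wheel (CC 1) at half travel, channel 1
```

When Live maps a CC to a device parameter, it is streams of messages like this that drive the manipulation techniques covered in the session.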

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Identify MIDI CC messages

  • Map MIDI CC’s to parameters

  • Manipulate Clip Envelopes and dummy Clips via MIDI CC’s

  • Utilise MIDI CC’s to create novel musical and sonic elements

Session Study Topics

  • MIDI CC messages

  • Mapping MIDI CC’s

  • Clip Envelopes, dummy clips and CC’s

  • Creatively using MIDI CC’s

Requirements

  • A computer and internet connection

  • A webcam and mic

  • A Zoom account

  • Access to a copy of Live Suite (i.e. trial or full license)

About the workshop leader 

Anna is a London based producer, engineer, vocalist and educator.

Anna is currently working as a university lecturer in London, teaching music production, creating educational content and working on her next releases as ANNA DISCLAIM.

Creative Audio and MIDI in Ableton Live – On-demand

If you’d like to support the Music Hackspace to continue to build a program of free workshops, a voluntary contribution would be much appreciated. 

Level: Intermediate

Ableton Live offers a vast playground of musical opportunities to create musical compositions and productions. These include converting audio-based harmony, melody and rhythm to MIDI, alongside techniques such as slicing audio into sampling tools which can be triggered via MIDI. In this workshop you will creatively explore and deploy a range of Audio and MIDI manipulation tools in a musical setting. This workshop aims to provide you with suitable skills to utilise the creative possibilities of Audio and MIDI manipulation in the Ableton Live environment.

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Convert Audio to MIDI

  • Slice Audio to MIDI

  • Manipulate Audio via MIDI slices

  • Utilise Audio and MIDI to create novel musical and sonic elements

Session Study Topics

  • Converting Audio to MIDI

  • Slicing Audio to MIDI

  • Manipulating slices within Simpler

  • Creatively using Audio and MIDI

Requirements

  • A computer and internet connection

  • Access to a copy of Live Suite (i.e. trial or full license)

About the workshop leader 

Anna is a London based producer, engineer, vocalist and educator.

Anna is currently working as a university lecturer in London, teaching music production, creating educational content and working on her next releases as ANNA DISCLAIM.

Discover the new features in Max for Live 11 – On-demand

Level: Intermediate

Max for Live allows users to develop their own devices for use in composition, performance and beyond. In the recent release of Live 11 Suite there are a myriad of new features and tools for musicians and programmers alike. In this workshop you will explore these new tools and features and learn to leverage them in your own musical works and patches.

By the end of this session a successful student will be able to:

  • Explore new MPE possibilities

  • Utilise the new devices

  • Identify the new integrations and objects

  • Understand and deploy the new features for developers

Session Study Topics

  • MPE and Max for Live

  • New Max for Live devices

  • New integrations and objects in Max for Live with Live 11

  • New features for developers of Max for Live devices with Live 11

Requirements

  • A computer and internet connection

  • Access to a copy of Live 11 Suite & Max for Live (i.e. trial or full license)

About the workshop leader

Mark Towers is an Ableton Certified Trainer and a lecturer in music technology at Leicester College. He specialises in Max for Live, as well as working with Isotonik Studios to create unique and creative devices for music production and performance such as the Arcade Series.

 

Supported by

 

Generative Music Tools: LFOs and Pitch Quantization – On-demand

Level: Intermediate

There is a broad array of techniques musicians can use to generate music in Max. One fundamental component of traditional analogue synthesiser use is the LFO, or low-frequency oscillator. Additionally, pitch quantization can be an extremely powerful tool, especially when used alongside the values generated by an LFO.

This workshop will provide you with the information to construct both devices in Max, giving you a broader palette of compositional tools.
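As a rough illustration of the two devices, in Python rather than Max, here is a sine LFO paired with a quantizer that snaps its output to a user-defined scale; the scale choice and ranges are hypothetical.

```python
# A sine LFO generating control values, and a quantizer mapping those
# values to the nearest pitch of a chosen scale. The scale and ranges
# are hypothetical illustrations, not the workshop's own patch.
import math

def lfo(rate_hz, sample_times):
    """A sine LFO, returning a value in [0, 1] for each time (in seconds)."""
    return [0.5 + 0.5 * math.sin(2 * math.pi * rate_hz * t) for t in sample_times]

def quantize(value, scale=(0, 2, 4, 5, 7, 9, 11), low=48, span=24):
    """Snap a 0..1 control value to the nearest pitch in a scale.

    `scale` lists the allowed pitch classes (here C major); `low` and
    `span` define the MIDI note range the LFO sweeps across.
    """
    target = low + value * span
    candidates = [octave + pc for octave in range(0, 128, 12) for pc in scale]
    return min(candidates, key=lambda p: abs(p - target))

times = [i * 0.25 for i in range(8)]              # sample every 250 ms
notes = [quantize(v) for v in lfo(0.5, times)]    # an in-key melodic contour
```

The same division of labour applies in Max: one device generates free-running values, the other constrains them musically.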

Session Learning Outcomes:

By the end of this session you will be able to:

  • Learn the basics of LFOs and pitch quantizers.

  • Build a standalone LFO patch with variable waveforms and a functional UI.

  • Build a quantizer which will map incoming pitch values to user-defined scales/modes.

  • Use both devices to control parameters of sound synthesis and assist in generative music composition.

Session Study Topics

  • Generative music

  • LFOs and waveforms

  • Pitch quantization

  • Composition through MIDI and software instrument manipulation.

Requirements

  • A computer and internet connection

  • Access to a copy of Max 7 or 8 (i.e. trial or full license)

About the workshop leader 

Samuel Pearce-Davies is a composer, performer, music programmer and Max hacker living in Cornwall, UK.

With a classical music background, it was his introduction to Max/MSP during undergraduate studies at Falmouth University that sparked Sam’s passion for music programming and algorithmic composition.

Going on to complete a Research Masters in computer music, Sam is now studying for a PhD at Plymouth University in music-focused AI.

Generative Music Tools: Turing Machine – LIVE Session

Level: Intermediate

There is a broad array of techniques musicians can use to generate music in Max. One such process takes inspiration from Alan Turing’s early work on proto-computers, in particular the notion of a tape with data being displayed on it.

This workshop will provide you with the information to construct such a generative device, a ‘Turing Machine’, to supplement your compositional practice.
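In outline, and in Python rather than Max, such a device is a looping ‘tape’ of values with a probability control that decides whether each recycled value is kept or replaced; the parameters below are hypothetical.

```python
# A looping tape of random values, after the 'Turing Machine' sequencer
# idea: read the head, rotate the tape, and sometimes mutate the value
# that loops back around. Parameter choices here are hypothetical.
import random

class TuringTape:
    def __init__(self, length=8, seed=None):
        self.rng = random.Random(seed)
        self.tape = [self.rng.randint(0, 127) for _ in range(length)]

    def step(self, mutate_prob=0.0):
        """Read the head of the tape, then rotate it one position.

        With probability `mutate_prob` the recycled value is replaced by
        a fresh random one: 0.0 locks the loop, 1.0 is pure noise.
        """
        value = self.tape.pop(0)
        recycled = self.rng.randint(0, 127) if self.rng.random() < mutate_prob else value
        self.tape.append(recycled)
        return value

tape = TuringTape(length=8, seed=1)
locked = [tape.step(mutate_prob=0.0) for _ in range(16)]  # repeats every 8 steps
```

Sweeping `mutate_prob` between 0 and 1 is what turns a locked loop into an evolving, generative sequence.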

Session Learning Outcomes:

By the end of this session you will be able to:

  • Understand the fundamentals of a Turing Machine in a musical context.

  • Patch together a generative process using randomisation and counters.

  • Build a functional UI to tweak different aspects of the generative process in real time.

  • Use the finished device to both generate music through MIDI and control broader parameters of software instruments.

Session Study Topics

  • Turing machines, generative music.

  • Random processes: drunken walks and probability.

  • Visual design in Max

  • Composition through MIDI and software instrument manipulation.

Requirements

  • A computer and internet connection

  • Access to a copy of Max 7 or 8 (i.e. trial or full license)

About the workshop leader 

Samuel Pearce-Davies is a composer, performer, music programmer and Max hacker living in Cornwall, UK.

With a classical music background, it was his introduction to Max/MSP during undergraduate studies at Falmouth University that sparked Sam’s passion for music programming and algorithmic composition.

Going on to complete a Research Masters in computer music, Sam is now studying for a PhD at Plymouth University in music-focused AI.

Getting confident with Max – On-demand

Level: Beginner

Cycling ’74’s Max/MSP offers a vast playground of programming opportunities to create your own sound design and multimedia applications. In this workshop you will build a patch using items from the Max toolbar, such as BEAP and Vizzie, as well as using media from your own collection, and explore ways to open up, reverse-engineer and modify existing resources within the Max environment.

Series Learning Outcomes

By the end of this series a successful student will be able to:

  • Confidently navigate the Max environment to quickly gain access to content and learning resources.

  • Deploy resources into a patch.

  • Connect and explore these resources to develop ideas for sound and media design, composition and performance.

  • Navigate the help file system and reverse engineer existing content in the Max application.

Session Study Topics

  • The tools available in Max, such as BEAP and Vizzie modules.

  • Playlists and drag and drop media.

  • Bpatchers, prototypes and snippets.

  • The help file system.

Requirements

  • A computer and internet connection

  • Access to a copy of Max 8 (i.e. trial or full license)

About the workshop leader

Duncan Wilson (aka Ned Rush) is a musician, producer and content creator based in the UK. Whilst perhaps best known for his YouTube channel, he has also released music independently, as well as developing content for Isotonik Studios.