What do recent trends in generative art mean for music?

Dom Aversano

Manu #34 by artist Rich Poole

In his provocative and fascinating book, Noise, the French musicologist and economist Jacques Attali wrote the following about the prophetic power of music.

Music is prophecy. Its styles and economic organization are ahead of the rest of society because it explores, much faster than material reality can, the entire range of possibilities in a given code.

For a long time I considered this flattering statement about music to be true, but the more I learned about the visual arts, the more I saw that at various points in history they have pushed ahead of music. Decades before Brian Eno used the term Generative Music, the term Generative Art was already in use, which does not mean there were no generative processes in music before then (there certainly were), but the terminology helped articulate a theoretical framework through which the art could be understood and developed.

In the last few years a major shift has occurred in visual generative art, somewhat obscured by the huge attention given to advances in machine learning and large language models, but worthy of examination by anyone interested in digital arts.

This innovation was fuelled by NFTs, or non-fungible tokens (you can read more about them here). Putting aside the controversial ethical and technological aspects of cryptocurrency and NFTs, which I hope to cover in a future post, the economy they produced provided many generative artists with a living. During this period the technical aspects of the art grew more sophisticated, and publications like Right Click Save emerged to document the movement. This year the NFT economy fundamentally collapsed, making this an opportune moment to review what happened during the boom and its relevance to musicians and composers.

In 2021 the generative artist and writer Tyler Hobbs wrote an important essay, The Rise of Long-Form Generative Art, which helps make sense of these recent changes. In it, he describes two broad categories of generative art: short-form and long-form.

Generative art has traditionally favoured short-form, which he describes as follows.

First, there was almost always a “curation” step. The artist could generate as many outputs as they pleased and then filter those down to a small set of favorites. Only this curated set of output would be presented to the public.

The result is often a small collection, ranging from a single image to about a dozen. The artist remains largely in control, creating art in a manner that does not radically deviate from tradition.

In a jargon-dense paragraph, Hobbs describes long-form art; the last sentence is especially significant.

The artist creates a generative script (e.g. Fidenza) that is written to the Ethereum blockchain, making it permanent, immutable, and verifiable. Next, the artist specifies how many iterations will be available to be minted by the script. A typical choice is in the 500 to 1000 range. When a collector mints an iteration (i.e. they make a purchase), the script is run to generate a new output, and that output is wrapped in an NFT and transferred directly to the collector. Nobody, including the collector, the platform, or the artist, knows precisely what will be generated when the script is run, so the full range of outputs is a surprise to everyone.

This constitutes a fundamental change. The artist no longer directly creates the art but instead an algorithm that creates it, renouncing control over what the algorithm produces from the moment it is published. It is a significant shift in the relationship between the artist, the artwork, and the audience, one that calls into question the definition of art.
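To make the mechanics concrete, here is a minimal sketch in C++ of the pattern Hobbs describes: one fixed generative script whose every output is fully determined by a seed supplied at mint time (on platforms such as Art Blocks the seed is derived from the token hash). The traits, palette names, and seeds below are invented for illustration.

```cpp
// A minimal sketch of a long-form generative script: the code is fixed,
// and each collector's seed deterministically selects one output.
// Trait names and seeds are hypothetical.
#include <cstdint>
#include <iostream>
#include <random>
#include <string>

// One iteration of the "collection": the same code, a different seed.
std::string generate(uint64_t seed) {
    std::mt19937_64 rng(seed); // deterministic: same seed, same artwork
    const std::string palettes[] = {"ember", "glacier", "moss"};
    std::uniform_int_distribution<int> palette(0, 2);
    std::uniform_int_distribution<int> shapes(50, 500);
    return palettes[palette(rng)] + ", " + std::to_string(shapes(rng)) + " shapes";
}

int main() {
    // Nobody, artist included, chooses these outputs; the seeds do.
    for (uint64_t seed : {1001ULL, 1002ULL, 1003ULL})
        std::cout << "iteration " << seed << ": " << generate(seed) << "\n";
    return 0;
}
```

Because the script is immutable once published, the same seed always reproduces the same artwork; the artist's creative act ends at the design of the possibility space.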

Manu #216 by artist Rich Poole

Within the same essay, Hobbs describes a concept for analysing long-form art that he calls “output space”.

Fundamentally, with long-form, collectors and viewers become much more familiar with the “output space” of the program. In other words, they have a clear idea of exactly what the program is capable of generating, and how likely it is to generate one output versus another. This was not the case with short-form works, where the output space was either very narrow (sometimes singular) or cherry-picked for the best highlights.

This concept of an algorithm's spectrum of variation is valuable. After all, scale without meaningful variation is decorated repetition. Paradoxically, at least in a superficial sense, algorithms can simultaneously have infinite permutations and a great sense of predictability and monotony. The notion of output space is perhaps a more accurate way to evaluate generative works than the literal number of iterations or other quantifiable measures.
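One rough way to make output space operational is empirical: run the script across many seeds and tally how its traits distribute. A hedged sketch, reusing the invented palette trait from the example above:

```cpp
// Estimate a script's output space by sampling many seeds and building
// a trait histogram. Trait names are hypothetical.
#include <cstdint>
#include <iostream>
#include <map>
#include <random>
#include <string>

std::string paletteOf(uint64_t seed) {
    std::mt19937_64 rng(seed);
    const std::string palettes[] = {"ember", "glacier", "moss"};
    std::uniform_int_distribution<int> palette(0, 2);
    return palettes[palette(rng)];
}

int main() {
    std::map<std::string, int> histogram;
    for (uint64_t seed = 0; seed < 10000; ++seed)
        histogram[paletteOf(seed)]++;

    // A distribution dominated by a few trait combinations signals the
    // "decorated repetition" above, however many iterations are minted.
    for (const auto& [trait, count] : histogram)
        std::cout << trait << ": " << count << "\n";
    return 0;
}
```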

In reflecting on how the concept of long-form might exist in music, two works sprang to mind.

The first is Jem Finer's Longplayer. The composition was created with the intention of being played for a millennium and is currently installed at Trinity Buoy Wharf in East London. For almost a year I worked part-time at Longplayer and had the opportunity to listen to the installation for hours on end. It struck me as a novel and ambitious idea with an attractive sound, but I was not able to detect any noticeable variation or development from one hour, week, or month to the next. To use the language of generative art, its output space felt narrow, at least over durations that are short in comparison to its intended length.

I should point out this might well miss the point of the composition, designed as it is to make one reflect on vast time scales and to invite intergenerational collaboration.

The second example is Brian Eno's composition Reflection, released both as an app and as a series of musical excerpts. Eno describes it using the metaphor of a river.

It’s always the same river, but it’s always changing.

Having discovered this piece relatively recently, I have not listened to it enough to form an opinion, although there are many glowing reviews online praising its ability to transform and change mood, with some listeners spending long stretches with it.

The requirement of extensive listening highlights an important difference between music and visual art. It is much quicker to scan a collection of 1,000 images than to spend hours, weeks, or even months attentively listening to an algorithm unfold, which helps explain why long-form generative art is currently more popular than long-form generative music, though there may be another reason too.

You might ask why long-form generative art has become so popular recently, as it is by no means a new concept. In 1949 the abstract artist Josef Albers began a 25-year project, the iconic and influential series Homage to the Square, comprising over 100 paintings that combine squares of different sizes and colours in a variety of ways. By contrast, you now have artists developing algorithms in a couple of months that create ten times more images than Albers's series. Is this meaningful art, or a hi-tech example of the philosophy that more is more?

While it might be cynical to reduce an art movement to a single economic factor, it would also be naive to ignore it. A significant number of people were made wealthy in a very short time by the NFT boom, and the relationship between supply and demand was transformed, since digital art can be produced with dramatically less time and cost than traditional art. Huge demand could be met with huge supply with little more effort than adding a couple of zeros to the number of iterations.

The prices that certain pieces sold for at the height of the hype are astonishing. A single image in a collection of Cellular Automaton sold for 1,000,000 Tezos (£537,000). I do not know whether this was motivated by some murky financial practice or by credulity on the part of the collector, but for a single work in a collection of 1,000, composed from an 80-year-old mathematical concept, to sell for such a huge price indicates that money significantly shapes the culture. Despite the rot, some art that emerged from this movement is genuinely inspiring and thought-provoking.

Take Dreaming of Le Corbusier, by the Norwegian artist Andreas Rau: an impressive algorithm that generates a new 'architectural' abstract artwork each time you click on it. Many outputs look as though they were deliberately designed, and the consistent quality of the compositions is remarkable.

There is also the work of Rich Poole, featured throughout this piece. The series feels musical in its composition, reminiscent of a beautiful music sequencer, where colour, height, and length could correspond to musical parameters. The owners of the NFTs choose which iterations of the algorithm they would like, making the series 'collector-curated'.

What happens to generative art now that NFTs have collapsed? That is anyone's guess. It is hard to envision the sudden emergence of an economy remotely comparable to the over-hyped NFT market. Yet a shift has occurred and a new potential has been glimpsed, not just by the artists involved, but by all of us.

The artworks featured in this article are shared with the kind permission of the artist Rich Poole. You can view his entire series for Manu here.

Getting started with Interactive Machine Learning for openFrameworks – On-demand

Level: Intermediate – C++ required

Using openFrameworks, ofxRapidLib and ofxMaxim, participants will learn how to integrate machine learning into generative applications. You will learn about the interactive machine learning workflow and how to implement classification, regression, and gestural recognition algorithms.

You will explore a static classification approach that employs the k-nearest neighbour (KNN) algorithm to categorise data into discrete classes. This will be followed by an exploration of static regression problems, using multilayer perceptron neural networks to perform feed-forward, non-linear regression on a continuous data source. You will also explore an approach to temporal classification using dynamic time warping, which allows you to analyse and process gestural input.
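To give a flavour of the first of these techniques, here is a minimal standalone k-nearest-neighbour classifier in plain C++. It illustrates the idea only and is not the ofxRapidLib API; the training points and labels are invented, standing in for data you would collect interactively in the workshop.

```cpp
// Minimal k-nearest-neighbour classification: label an unseen input by a
// majority vote among its k closest training examples.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <utility>
#include <vector>

struct Example {
    std::vector<double> input; // e.g. {x, y} controller position
    int label;                 // discrete class, e.g. which sound to trigger
};

double distance(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    for (size_t i = 0; i < a.size(); ++i) {
        double d = a[i] - b[i];
        sum += d * d;
    }
    return std::sqrt(sum);
}

int classify(const std::vector<Example>& training,
             const std::vector<double>& input, int k) {
    // Sort training examples by distance to the input.
    std::vector<std::pair<double, int>> dists;
    for (const auto& ex : training)
        dists.push_back({distance(ex.input, input), ex.label});
    std::sort(dists.begin(), dists.end());

    // Vote among the k nearest labels (labels assumed to be 0..15 here).
    std::vector<int> votes(16, 0);
    for (int i = 0; i < k && i < static_cast<int>(dists.size()); ++i)
        votes[dists[i].second]++;
    return std::max_element(votes.begin(), votes.end()) - votes.begin();
}

int main() {
    // Two labelled regions of a 2-D control surface.
    std::vector<Example> training = {
        {{0.1, 0.2}, 0}, {{0.2, 0.1}, 0}, {{0.15, 0.25}, 0},
        {{0.8, 0.9}, 1}, {{0.9, 0.8}, 1}, {{0.85, 0.75}, 1},
    };
    // An unseen input lands nearer the second cluster.
    std::cout << "class: " << classify(training, {0.7, 0.8}, 3) << "\n"; // 1
    return 0;
}
```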

This knowledge will allow you to build your own complex interactive artworks.

By the end of this series the participant will be able to:

Overall:

  • Set up an openFrameworks project for machine learning

  • Describe the interactive machine learning workflow

  • Identify the appropriate contexts in which to implement different algorithms

  • Build interactive applications based on classification, regression and gestural recognition algorithms

Session 1:

  • Set up an openFrameworks project for classification

  • Collect and label data

  • Use the data to control audio output

  • Observe output and evaluate model

Session 2:

  • Set up an openFrameworks project for regression

  • Collect data and train a neural network

  • Use the neural network output to control audio parameters

  • Adjust inputs to refine the output behaviour

Session 3:

  • Set up an openFrameworks project for series classification

  • Design gestures as control data

  • Use classification of gestures to control audio output

  • Refine gestural input to attain desired output

Session 4:

  • Explore methods for increasing complexity

  • Integrate visuals for multimodal output

  • Build mapping layers

  • Use models in parallel and series

Session Study Topics

Session 1:

  • Supervised Static Classification

  • Data Collection and Labelling

  • Classification Implementation

  • Model Evaluation

Session 2:

  • Supervised Static Regression

  • Data Collection and Training

  • Regression Implementation

  • Model Evaluation

Session 3:

  • Supervised Series Classification

  • Gestural Recognition

  • Dynamic Time Warping Implementation

  • Model Evaluation

Session 4:

  • Data Sources

  • Multimodal Integration

  • Mapping Techniques

  • Model Systems

Requirements

  • A computer with an internet connection

  • Installed versions of the following software:

    • openFrameworks

    • ofxRapidLib

    • ofxMaxim

  • Preferred IDE (e.g. Xcode / Visual Studio)

About the workshop leader 

Bryan Dunphy is an audiovisual composer, musician, and researcher interested in using machine learning to create audiovisual art. His work explores the interaction of abstract visual shapes, textures, and synthesised sounds. He is interested in exploring strategies for creating, mapping, and controlling audiovisual material in real time. He is close to completing his PhD in Arts and Computational Technology at Goldsmiths, University of London.

An introduction to Flora for monome norns – On-demand

Level: Some experience of norns required

Flora is an L-systems sequencer and bandpass-filtered sawtooth engine for monome norns. In this workshop you will learn how L-system algorithms are used to produce musical sequences while exploring the script’s UI and features.

Flora on Vimeo
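For those new to the idea, here is a minimal sketch of how an L-system turns a rewriting rule into a note sequence. The rules and MIDI mapping are hypothetical and far simpler than Flora's own rule sets.

```cpp
// Expand an L-system by repeatedly rewriting symbols, then read the
// resulting string as a melodic pattern. Rules and pitches are invented.
#include <iostream>
#include <map>
#include <string>

std::string expand(const std::string& axiom,
                   const std::map<char, std::string>& rules, int generations) {
    std::string current = axiom;
    for (int g = 0; g < generations; ++g) {
        std::string next;
        for (char c : current) {
            auto it = rules.find(c);
            next += (it != rules.end()) ? it->second : std::string(1, c);
        }
        current = next;
    }
    return current;
}

int main() {
    // Lindenmayer's classic algae rules: A -> AB, B -> A.
    std::map<char, std::string> rules = {{'A', "AB"}, {'B', "A"}};
    std::string sequence = expand("A", rules, 5); // "ABAABABAABAAB"

    // Read each symbol as a pitch to form a self-similar sequence.
    std::map<char, int> note = {{'A', 60}, {'B', 67}}; // MIDI C4 and G4
    for (char c : sequence) std::cout << note[c] << " ";
    std::cout << "\n";
    return 0;
}
```

The self-similar structure is what makes L-system sequences feel organic: each generation embeds the previous one, so motifs recur at multiple time scales.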

By the end of the first workshop, you will be able to:

  • Navigate the Flora UI and parameters menus to build and perform your own compositions

  • Create dynamically shaped, multinodal envelopes to modulate Flora’s bandpass-filtered sawtooth engine

  • Build generative polyrhythms and delays into your compositions

  • Use crow and/or midi-enabled controllers and synthesizers to play Flora

Session study topics:

  • Sequencing with L-system algorithms

  • Physical modeling synthesis with bandpass filters

  • Generating multi-nodal envelopes

  • Norns integration with midi and/or crow


Requirements

  • A computer and internet connection

  • A norns device with Flora installed

  • Optional: A midi-enabled controller and/or synthesizer


We have a number of sponsorship places available. If the registration fee is a barrier to you joining the workshop, please contact laura@stagingmhs.local.


About the workshop leader 

Jonathan Snyder is a Portland, Oregon-based sound explorer and educator.

Previously, he worked for 22 years as a design technologist, IT manager, and educator at Columbia University's Media Center for Art History, at Method, and at Adobe.

TouchDesigner meetup 17th April – Audio visualisation

Date & Time: Saturday 17th April 5pm – 7pm UK / 6pm – 8pm Berlin

Level: Open to all levels

Join the online meetup for expert talks on audio visualisation. Meet and be inspired by the TouchDesigner community.

The meetup runs via Zoom. The main session features short presentations from TouchDesigner users. Breakout rooms are created on the spot on specific topics, and you can request a new topic at any time.

The theme for this session is Audio visualisation, hosted by Bileam Tschepe with presentations from the community.

In the breakout rooms, you can share your screen to show other participants something you’re working on, ask for help, or help someone else.

Presenters:

Name: Ian MacLachlan
Title: Terraforming with MIDI
Bio: Bjarne Jensen is an experimental audio/visual artist from the Detroit area with an interest in creating interactive systems for spatial transformation.
Name: Jean-François Renaud
Title: Generating MIDI messages to synchronize sound and visual effects in TouchDesigner
Description: Instead of using audio analysis to affect the rendering, we focus on building small generative machines using the basic properties of notes (pitch, velocity), and we look at different means of managing triggering. In the end, the goal is still to merge what you hear and what you see, and to bring it to life.
Bio: Interactive media professor at École des médias, UQAM, Montréal
Vimeo: https://vimeo.com/morpholux
Name: Bileam Tschepe
Title: algorhythm – a first look into my software
Description: I've been working on a tool for audiovisual live performances, and I'd like to share its current state and see if people are interested in collaborating and working with me.
Bio: Berlin based artist and educator who creates audio-reactive, interactive and organic digital artworks, systems and installations in TouchDesigner, collaborating with and teaching people worldwide.
YouTube: Bileam Tschepe

Requirements

  • A Zoom account
  • A computer and internet connection

Berlin Code of Conduct

We ask all participants to read and follow the Berlin Code of Conduct and contribute to creating a welcoming environment for everyone.


Understanding Indian rhythm through simple algorithms – On-demand

Level: All Max users

South Indian Carnatic music is home to a huge array of fascinating rhythms composed from algorithms. Rooted in maths and aesthetics, Carnatic music has many facets that can be applied to computer music. In this workshop you will be given an introduction to the tradition and the opportunity to observe, create, and hack various patches that demonstrate some of these ideas.
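As a taste of the kind of thinking involved, here is a hedged sketch of one classic Carnatic device rendered in code: spelling a tala cycle with standard konnakol syllable groups. The sketch is in C++ for brevity (the workshop itself works in Max patches), and the particular phrase choice is invented.

```cpp
// Spell an 8-beat Adi tala cycle using standard konnakol groupings,
// then repeat the phrase so it realigns with beat 1 of each cycle.
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Standard konnakol syllable groups for counts of 1 to 5.
    std::map<int, std::string> group = {
        {1, "Ta"}, {2, "Ta-Ka"}, {3, "Ta-Ki-Ta"},
        {4, "Ta-Ka-Di-Mi"}, {5, "Ta-Di-Gi-Na-Thom"},
    };

    // Adi tala: 8 beats per cycle, spelled as 3 + 5 (a common asymmetry).
    std::vector<int> phrase = {3, 5};

    int beat = 0;
    for (int cycle = 0; cycle < 2; ++cycle) {
        for (int count : phrase) {
            std::cout << group[count] << " ";
            beat = (beat + count) % 8;
        }
        std::cout << "| "; // bar line at each cycle boundary
    }
    // Because 3 + 5 = 8, the phrase lands back on beat 1 every cycle.
    std::cout << "\n(ends on beat " << beat + 1 << " of the tala)\n";
    return 0;
}
```

The same arithmetic underlies the mora, where a figure repeated three times is engineered to resolve exactly on the downbeat.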

Session Learning Outcomes

By the end of this session a successful student will be able to:

  • Recite a simple rhythmic konnakol phrase

  • Conceive simple rhythmic algorithms

  • Translate these concepts into simple Max patches

  • Understand South Indian rhythmic concepts and terminology such as Tala, Jhati, and Nadai

Session Study Topics

  • Learning a konnakol phrase

  • Understanding Tala cycles

  • Understanding Jhati and Nadai

  • Translating rhythmic algorithms into code

Requirements

  • A computer and internet connection

  • A webcam and mic

  • A Zoom account

  • Access to a copy of Max 8 (trial or full license)

About the workshop leader

Dom Aversano is a Valencia- and London-based composer and percussionist with a particular interest in combining ideas from the South Indian classical and Western music traditions. He has performed internationally as a percussionist and has produced award-winning installation work exhibited in Canada, Italy, Greece, Australia, and the UK.

For a decade Dom has studied South Indian Carnatic music in London and in Chennai. He has studied with the mridangam virtuoso Sri Balachandar, resident percussionist of The Bhavan music centre in London, as well as, for shorter periods, with Somashekar Jois and M N Hariharan.
