Move slow and create things

Dom Aversano

Over Christmas I took a week off, and no sooner had I begun to relax than an inspiring idea came to mind for a generative art piece for an album cover. The algorithm needed to make it was clear in my mind, but I did not want to take precious time away from family and friends to work on it. Then a thought occurred — could I build it quickly using ChatGPT?

I had previously resisted using Large Language Models (LLMs) in my projects for a variety of reasons. Would outsourcing coding gradually deskill me? Whose data was the system trained on and was I participating in their exploitation? Is the environmental effect of using such computationally intense technology justifiable?

Despite my reservations I decided to try it, treating it as an experiment that I could stop at any point. Shortly prior to this, I had read a thought-provoking online comment questioning whether manual coding might seem as peculiar and antiquated to the future as programming in binary does now. Could LLMs help make computers less rigid and fixed, opening up the world of programming to anyone?

While I had previously used ChatGPT to create some simple code for SuperCollider, I had been unimpressed by the results. For this project, however, the quality of the code was different. Every prompt returned p5.js code that did exactly what I intended, without the need for clarification. I made precisely what I envisioned in less than 30 minutes. I was astonished. It was not the most advanced program, but neither was it basic.

Despite the success, I felt slightly uneasy. The computer scientist Grady Booch wrote that ‘every line of code represents an ethical and moral decision.’ It is tempting to lose sight of this amid a technological culture steeped in a philosophy of ‘move fast and break things’ and ‘it’s better to ask for forgiveness than permission’. So what specifically felt odd?

I arrived at what I wanted without much of a journey, learning little more than how to clarify my ideas to a machine. This is a stark contrast to the slow and meticulous manner of creation that gradually develops our skills and thinking, which is generally considered quintessential to artistic activity. Furthermore, although the arrival is quicker the destination is not exactly the same, since handcrafted code can offer a representation of a person’s worldview, whereas LLM code is standardised.

However, I am aware that historically many people — not least of all in the Arts and Crafts movement — expressed similar concerns, and one can argue that if machines dramatically reduce laborious work it could free up time for creativity. Removing the technical barrier to entry could allow many more people’s creative ideas to be realised. Yet efficiency is not synonymous with improvement, as anyone who has scanned a QR-code menu at a restaurant can attest.

The idea that LLMs could degrade code is plausible given that they frequently produce poor or unusable code. While they will surely improve, to what degree is unknown. A complicated project built from layers of machine-generated code may create layers of problems: short-term and long-term. Like pollution, its effects might not be obvious until they accumulate and compound over time. And if LLMs are trained on LLM-generated code, the degradation could feed on itself, leading to model collapse.

The ethics of this technology are equally complicated. The current lack of legislation around consent on training LLMs means many people are discovering that their books, music, or code have been used to train a model without their knowledge or permission. Beyond legislation, a promising idea has been proposed by the programmer and composer Ed Newton-Rex, who has founded a company called Fairly Trained, which offers to monitor and certify different models, providing transparency on how they were trained.

Finally, while it is hard to find accurate assessments of how much electricity these systems use, some experts predict they could soon consume as much electricity as entire countries, which should not be difficult to imagine given that the Bitcoin blockchain is estimated to consume more electricity than the whole of Argentina.

To return to Grady Booch’s idea that ‘every line of code represents an ethical and moral decision’, one could extend this: every interaction with a computer represents an ethical and moral decision. As the power of computers increases so should our responsibility, but given the rapid increases in computing power, it may be unrealistic to expect our responsibility to keep pace. Taking a step back to reflect does not make one a Luddite, and might be the most technically insightful thing to do. Only from a thoughtful perspective can we hope to understand the deep transformations occurring, and how to harness them to improve the world.

Max and Machine Learning with RunwayML – On-demand

Level: Intermediate

RunwayML is a platform that offers AI tools to artists without any coding experience. Max/MSP is a visual programming environment used in media art that can control RunwayML in a more efficient way. By the end of the workshop you will be able to train machine learning models and generate videos by walking a latent space, controlled from Max and NodeJS.

Session Learning Outcomes

By the end of the course a successful student will be able to:

  • Understand the RunwayML workflow

  • Use Node4Max to control RunwayML and generate a video

  • Explore trending ML models

  • Create a dataset

  • Train an ML model

  • Process videos with the VIZZIE library

Session 1

– Introduction to the course

– What are machine learning, deep learning and neural networks?

– What’s RunwayML?

– What’s Max/MSP/Jitter and NodeJS?

– Dataset and model training with RunwayML

Session 2

– What are GANs and StyleGAN?

– Latent space walk

– Image and video generation with RunwayML, Max and Node4Max (part 1)
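A latent space walk, at its core, is simple interpolation: pick two latent vectors and generate the intermediate points between them, each of which a model can decode into a frame. The sketch below is a generic illustration in plain JavaScript — the function names are mine, not part of RunwayML’s or Node4Max’s API.

```javascript
// Linearly interpolate between two latent vectors a and b at position t (0..1).
function lerpVectors(a, b, t) {
  return a.map((v, i) => v + (b[i] - v) * t);
}

// Produce a sequence of `steps` latent points walking from `start` to `end`.
// Each point would be sent to a generative model to render one video frame.
function latentWalk(start, end, steps) {
  const frames = [];
  for (let i = 0; i < steps; i++) {
    const t = i / (steps - 1); // t runs from 0 at the start to 1 at the end
    frames.push(lerpVectors(start, end, t));
  }
  return frames;
}

// Example: walk between two 4-dimensional latent points in 5 steps.
const walk = latentWalk([0, 0, 0, 0], [1, 2, 3, 4], 5);
console.log(walk[2]); // the midpoint: [0.5, 1, 1.5, 2]
```

Real latent vectors are much higher-dimensional (StyleGAN’s are typically 512-dimensional), but the principle is the same.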

Session 3

– Image and video generation with RunwayML, Max and Node4Max (part 2)

Session 4

– Processing images and videos with VIZZIE2 and Jitter

Session Study Topics

  • Generate images and video through AI

  • Request data from models and save images to your local drive

  • Generate video from images

  • Communication protocols (WebSockets and HTTP requests)

  • AI models used in visual art

  • Video processing

  • Model training

Requirements

  • A computer and internet connection

  • Access to a copy of Max 8 (either trial or licence)

  • A code editor such as Visual Studio Code, Sublime or Atom

  • Attendees need to create a RunwayML account – https://app.runwayml.com/signup.
    • Upon setting up an account you will receive $10 of credit for free
    • Approx. $50 of credit will be required to complete the course; however, this does not need to be purchased in advance
    • A 20% RunwayML discount code will be provided to participants who sign up to the course

About the workshop leader 

Marco Accardi is a trained musician, multimedia artist, developer and teacher based in Berlin.

He is the co-founder of Anecoica, a collective that organises events combining art, science and new technologies.