An interview with Blockhead creator Chris Penrose

Dom Aversano

A screenshot from Blockhead

Blockhead is an unusual sequencer with an unlikely beginning. In early 2020, as the pandemic struck, Chris Penrose was let go from his job in the graphics industry. He combined a small settlement package with his life savings and used it to develop a music sequencer that works in a distinctly different way from anything else available. In October 2023, three years after starting the project, he was working on Blockhead full-time, supporting it through a Patreon page even though the software was still in alpha.

The sequencer has gained a cult following of fans as much as users, eager to approach music-making from a different angle. It is not hard to see why: in Blockhead everything is easily malleable, interactive, and modulatable. The software works in a cascade-like manner, with automation, instruments, and effects at the top of the sequencer affecting those beneath them, and these can be shifted, expanded, and contracted easily.

When I speak to Chris, I encounter someone honest and self-deprecating, qualities that I imagine contribute to people’s trust in the project. After all, you don’t find many promotional videos that contain the line ‘Obviously, this is all bullshit’. There is something refreshingly DIY and brave about what he is doing, and I am curious to know more about what motivated him, so I arranged to talk with him via Zoom to discuss what set him off on this path.

What led you to approach music sequencing from this angle? There must be some quite specific thinking behind it.

I always had this feeling that if you have a canvas and you’re painting, there’s an almost direct cognitive connection between whatever you intend in your mind for this piece of art and the actual actions that you’re performing. You can imagine a line going from the top right to the bottom left of the canvas, and there is a connection between that image and the action you’re taking with the paintbrush, pressing against the canvas and moving from the top right down to the bottom left.

Do you think that your time in the graphics industry helped shape your thinking on music?

When it comes to taking the idea of painting on a canvas and bringing it into the digital world, I think programs like Photoshop have fared very well in maintaining that cognitive mapping between what’s going on in your mind and what’s happening in front of you in the user interface. It’s a pretty close mapping between what’s going on physically with painting on a canvas and what’s going on with the computer screen, keyboard and mouse.

How do you see this compared to audio software?

It doesn’t feel like anything similar is possible in the world of audio. With painting, you can represent the canvas with this two-dimensional grid of pixels that you’re manipulating. With audio, it’s more abstract, as it’s essentially a timeline from one point to another, and how that is represented on the screen never really maps onto what’s in the mind. Blockhead is an attempt to get a little closer to the kind of cognitive mapping between computer and mind, which I don’t think has ever really existed in audio programs.

Do you think other people feel similarly to you? There’s a lot of enthusiasm for what you’re doing, which suggests you’ve tapped into something that others may have felt too.

I have a suspicion that people think about audio and sound in quite different ways. For many, the way that digital audio software currently works is very close to the way that they think about sound, and that’s why it works so well for them. They would look at Blockhead and think, well, what’s the point? But I suspect there’s a whole other group of people who think about audio in a slightly different way and maybe don’t even realise it, as there has never been a piece of software that represents things this way.

What would you like to achieve with Blockhead? When would you consider it complete?

Part of the reason for Blockhead is completely selfish. I want to make music again, but I don’t want to make electronic music because it pains me to use the existing software; I’ve lost patience with it. So I decided to make a piece of audio software that worked the way I wanted it to. I don’t want to use Blockhead to make music right now because it’s not done, and whenever I try to make music with Blockhead, I’m just like, no, this is not done. My brain fills with reasons why I need to be working on Blockhead rather than working with Blockhead. So the point of Blockhead is just for me to make music again.

Can you describe your approach to music?

The kind of music that I make tends to vary from the start. I rarely make music that is just layers of things. I like adding little one-off moments in the middle of these pieces. For instance, a half-second filter sweep in one part of the track. To do that in a traditional DAW, you need to add a filter plugin to the track. Then that filter plugin exists for the entire duration of the track, even if you’re just using it for one moment. It’s silly that it has to sit in bypass mode or at 0% wet for the entire track, except in this little part where I want it. The same is true of synthesizers. Sometimes I want to write just one note from a synthesizer at one point in time in the track.

Is it possible for you to complete the software yourself?

At the current rate, it’s literally never going to be finished. The original goal with Patreon was to make enough money to pay for rent and food. Now I’m in an awkward position where I’m no longer worrying about paying rent, but it’s nowhere near the point where I could hire a second developer. So I guess my second goal with funding would be to make enough money to hire a second person. I think one extra developer on the project would make a huge difference.

It is hard not to admire what Chris is doing. It is a giant project, and to have reached the stage it has with only one person working on it is impressive. Whether the project continues to grow, and whether he can hire other people, remains to be seen, but it is a testament to the importance of imagination in software design. What is perhaps most attractive of all is that Blockhead is one person’s clear and undiluted vision of what the software should be, a vision that has resonated with so many people across the world.

If you would like to find out more about Blockhead or support the project, you can visit its Patreon page.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work at Liner Notes.

Getting Started with Max – July series

Dates & Times: Wednesdays 7th / 14th / 21st / 28th July, 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

Level: Beginners curious about programming

Get started with interactive audio and MIDI, and discover the possibilities of the Max environment. In this series of recorded videos, you will learn how to manipulate audio, MIDI, and virtual instruments, and how to program your own interactive canvas.

Connect Max’s building blocks together to create unexpected results, and use them in your music productions. Through a series of exercises, you will build a basic MIDI sequencer device that features a wealth of musical manipulation options.

Learn from guided examples.

This on-demand content aims to enable you to work confidently with Max on your own.

Learning outcomes: 

  • Understand the Max environment

  • Connect building blocks together and work with data

  • Master the user interface

  • Work with your MIDI instruments

Requirements

  • A computer and internet connection

  • A good working knowledge of computer systems

  • Access to a copy of Max 8

Digging deeper into Flora for monome norns – On demand

Level: Some experience of norns required

In the second Flora workshop, you will build your own custom L-system algorithms using Flora’s UI and sequence the script using the preset (PSET) sequencer. The script’s community gardening feature will also be covered to give you a new way to share your sequences with a worldwide audience.

Flora on Vimeo
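
For anyone who has not met the term before, an L-system is a simple string-rewriting system: you start from an axiom and repeatedly replace each symbol according to a set of production rules, and the resulting strings can then be mapped onto musical material. As a rough illustration only (this is not Flora’s own implementation, which runs on norns with its own rules and note mapping), a minimal L-system in Python using the classic ‘algae’ rules looks like this:

# Minimal L-system sketch (illustrative only; not Flora's implementation).
# An L-system repeatedly rewrites a string using per-symbol production rules.

def expand(axiom, rules, generations):
    """Apply the production rules to the axiom for the given number of generations."""
    s = axiom
    for _ in range(generations):
        # Replace every symbol that has a rule; leave other symbols unchanged.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# The classic "algae" L-system: a -> ab, b -> a.
rules = {"a": "ab", "b": "a"}

for g in range(5):
    print(g, expand("a", rules, g))
# 0 a
# 1 ab
# 2 aba
# 3 abaab
# 4 abaababa

In a musical context, the symbols of each generation are typically mapped onto pitches, durations, or modulation, which is why small changes to the rules can produce very different sequences.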

By the end of the second workshop, you will be able to:

  • Design custom L-system algorithms with the Flora UI and maiden

  • Share your custom L-system algorithms using Flora’s community gardening feature

  • Sequence presets using Flora and other norns scripts

  • Use Flora as a sequencer for your external synthesizer(s) using crow and/or midi

Session study topics

  • L-system algorithm properties

  • Sharing L-system algorithms

  • Meta-sequencing Flora and other norns scripts

  • Norns integration with midi and/or crow

Requirements

  • A computer and internet connection

  • A norns device with Flora installed

  • Optional: A midi-enabled controller and/or synthesizer

We have a number of sponsorship places available. If the registration fee is a barrier to you joining the workshop, please contact laura@stagingmhs.local.

About the workshop leader

Jonathan Snyder is a Portland, Oregon-based sound explorer and educator.

Previously, he worked for 22 years as a design technologist, IT manager, and educator at Columbia University’s Media Center for Art History, at Method, and at Adobe.

Going further with cheat codes 2: A sample playground for norns – On demand

Level: Beginner

Cheat codes 2 is a sample playground built for monome norns to explore live and pre-recorded audio. It extends traditional slicing and looping workflows to create playful music-making experiences with exciting results. This workshop will help uncover extended techniques for working with the script, including incorporating external controllers, creating dynamic soundscapes with the delay, recording patterns, remixing with the arpeggiator, and using randomization to keep your sessions fresh.

Session Learning Outcomes

By the end of this session, a student will be able to:

  • Create full sample-based compositions with three parts
  • Employ methods for quickly generating exciting clock-synced or asynchronous improvisations
  • Explore multiple sampling paradigms to find new musical territories

Session Study Topics

  • Structuring compositions with both clock-synced and asynchronous looping/sampling methods
  • Creating a base from which to improvise
  • Incorporating external controllers to extend your workflow

Requirements

  • A computer and internet connection

  • A web cam and mic

  • A Zoom account

  • Access to a norns or norns shield running software 201202 or later

  • Familiarity with the basic process of connecting your norns to WiFi

  • Familiarity with the basic functions of cheat codes 2

  • Access to speakers or an audio interface to share sounds

  • Access to any of the following controllers: monome grid, monome arc, USB MIDI keyboard or sequencer (e.g. KeyStep or OP-Z), a recent Launchpad (X, Pro, or Mini mk3), USB MIDI slider bank (e.g. 16n), MIDI Fighter Twister, or Max for Live

About the workshop leader

Dan Derks is a creative technologist and improviser with a passion for community. He is the host of Sound + Process, a podcast about the artists of lines (https://llllllll.co), and he builds digital tools for monome norns and Max for Live.

Getting started with cheat codes 2: A sample playground for monome norns – On demand

Level: Beginner

Cheat codes 2 is a sample playground built for monome norns to explore live and pre-recorded audio. It extends traditional slicing and looping workflows to create playful music-making experiences with exciting results. This workshop will introduce you to the script, including recording audio and loading clips, building loops and slices, making adjustments to timbre and tonality, and exploring different methods for triggering and playback (with or without external controllers).

Session Learning Outcomes

By the end of this session, a student will be able to:

  • Navigate the cheat codes interface with ease and purpose
  • Employ and extend looping paradigms
  • Deconstruct and manipulate audio recordings
  • Create a compelling playground for further exploration

Session Study Topics

  • The anatomy of cheat codes
  • Loading samples and recording live audio into norns
  • Slicing audio into pads as recallable loops or one-shots
  • Making timbral and tonal changes to each pad

Requirements

  • A computer and internet connection

  • A web cam and mic

  • A Zoom account

  • Access to a norns or norns shield running software 201202 or later

  • Familiarity with the basic process of connecting your norns to WiFi

  • Access to speakers or an audio interface to share sounds

  • External controllers will not be required or used for most of the workshop. For the last section, recommended controllers include: monome grid, USB MIDI keyboard or sequencer (e.g. KeyStep or OP-Z), a recent Launchpad (X, Pro, or Mini mk3), or MIDI Fighter Twister

About the workshop leader

Dan Derks is a creative technologist and improviser with a passion for community. He is the host of Sound + Process, a podcast about the artists of lines (https://llllllll.co), and he builds digital tools for monome norns and Max for Live.
