An interview with Blockhead creator Chris Penrose

Dom Aversano

A screenshot from Blockhead

Blockhead is an unusual sequencer with an unlikely beginning. In early 2020, as the pandemic struck, Chris Penrose was let go from his job in the graphics industry. He combined a small settlement package with his life savings and used the money to develop a music sequencer that operates in a distinctly different way from anything else available. By October 2023, three years after starting the project, he was working on Blockhead full-time, funded through a Patreon page even though the software was still in alpha.

The sequencer has gained a cult following made up of fans as much as users, eager to approach music-making from a different angle. It is not hard to see why: in Blockhead everything is easily malleable, interactive, and modulatable. The software works in a cascade-like manner, with automation, instruments, and effects at the top of the sequencer affecting those beneath them, and all of these can be shifted, expanded, and contracted with ease.

When I speak to Chris, I encounter someone honest and self-deprecating, qualities which I imagine contribute to people’s trust in the project. After all, you don’t find many promotional videos that contain the line ‘Obviously, this is all bullshit’. There is something refreshingly DIY and brave about what he is doing, and I was curious to know more about what motivated him, so I arranged to talk with him via Zoom to discuss what set him off on this path.

What led you to approach music sequencing from this angle? There must be some quite specific thinking behind it.

I always had this feeling that if you have a canvas and you’re painting, there’s an almost direct cognitive connection between whatever you intend in your mind for this piece of art and the actual actions that you’re performing. You can imagine a line going from the top right to the bottom left of the canvas, and there is a connection between that image and the action you take with the paintbrush, pressing against the canvas and moving from the top right down to the bottom left.

Do you think that your time in the graphics industry helped shape your thinking on music?

When it comes to taking the idea of painting on a canvas and bringing it into the digital world, I think programs like Photoshop have fared very well in maintaining that cognitive mapping between what’s going on in your mind and what’s happening in front of you in the user interface. It’s a pretty close mapping between what’s going on physically with painting on a canvas and what’s going on with the computer screen, keyboard and mouse.

How do you see this compared to audio software?

It doesn’t feel like anything similar is possible in the world of audio. With painting, you can represent the canvas with this two-dimensional grid of pixels that you’re manipulating. With audio, it’s more abstract, as it’s essentially a timeline from one point to another, and how that is represented on the screen never really maps onto what is in the mind. Blockhead is an attempt to get a little closer to the kind of cognitive mapping between computer and mind which I don’t think has ever really existed in audio programs.

Do you think other people feel similarly to you? There’s a lot of enthusiasm for what you’re doing, which suggests you’ve tapped into something that might have been felt by others.

I have a suspicion that people think about audio and sound in quite different ways. For many, the way that digital audio software currently works is very close to the way that they think about sound, and that’s why it works so well for them. They would look at Blockhead and think, well, what’s the point? But I have a suspicion that there’s a whole other group of people who think about audio in a slightly different way and maybe don’t even realise it, as there has never been a piece of software that represents things this way.

What would you like to achieve with Blockhead? When would you consider it complete?

Part of the reason for Blockhead is completely selfish. I want to make music again, but I don’t want to make electronic music because it pains me to use the existing software; I’ve lost patience with it. So I decided to make a piece of audio software that worked the way I wanted it to. I don’t want to use Blockhead to make music right now because it’s not done, and whenever I try to make music with Blockhead, I’m just like, no, this is not done. My brain fills with reasons why I need to be working on Blockhead rather than working with Blockhead. So the point of Blockhead is just for me to make music again.

Can you describe your approach to music?

The kind of music that I make tends to vary from start to finish. I rarely make music that is just layers of things. I like adding little one-off moments in the middle of these pieces. For instance, a half-second filter sweep in one part of the track. To do that in a traditional DAW, you need to add a filter plugin to the track, and then that filter plugin exists for the entire duration of the track, even if you’re just using it for one moment. It’s silly that it has to sit in bypass mode or at 0% wet for the entire track, except in this little part where I want it. The same is true of synthesizers. Sometimes I want to write just one note from a synthesizer at one point in time in the track.
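To make the contrast concrete, here is a minimal sketch in Python of the idea Chris describes: an effect that exists as a time-bounded block on the timeline rather than as a track-wide insert. The names and structure are hypothetical, for illustration only, and are not Blockhead’s actual internals.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical names: a sketch of the concept, not Blockhead's code.
@dataclass
class EffectBlock:
    start: float                       # seconds where the effect begins
    end: float                         # seconds where it ends
    process: Callable[[float], float]  # per-sample transform (placeholder)

def render(dry: list, sample_rate: int, blocks: list) -> list:
    """Apply each effect only inside its own block of the timeline."""
    out = list(dry)
    for b in blocks:
        i0, i1 = int(b.start * sample_rate), int(b.end * sample_rate)
        # No track-wide plugin idling in bypass or at 0% wet: the effect
        # exists only for the samples inside [start, end).
        out[i0:i1] = [b.process(x) for x in out[i0:i1]]
    return out

# A half-second one-off "sweep" at 12.0 seconds (stand-in transform):
sweep = EffectBlock(start=12.0, end=12.5, process=lambda x: x * 0.5)
```

In a track-based DAW the filter is a property of the whole track; here it is a property of half a second of the timeline, which is closer to the shift in representation Chris is describing.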

Is it possible for you to complete the software yourself?

At the current rate, it’s literally never going to be finished. The original goal with Patreon was to make enough money to pay for rent and food. Now I’m in an awkward position where I’m no longer worrying about paying rent, but the funding is nowhere near the point where I could hire a second developer. So I guess my second goal with funding would be to make enough money to hire a second person. I think one extra developer on the project would make a huge difference.

It is hard not to admire what Chris is doing. It is a giant project, and to have reached this stage with only one person working on it is impressive. Whether the project continues to grow, and whether he can hire other people, remains to be seen, but it is already a testament to the importance of imagination in software design. What is perhaps most attractive of all is that Blockhead is one person’s clear and undiluted vision of what a piece of software should be, a vision which has resonated with so many people across the world.

If you would like to find out more about Blockhead or support the project, you can visit its Patreon page.

Dom Aversano is a British-American composer, percussionist, and writer. You can discover more of his work at Liner Notes.

How to design a music installation – an interview with Tim Murray-Browne (part 1)

Dom Aversano


I met artist and coder Tim Murray-Browne just over a decade ago, shortly after he was made artist in residence for Music Hackspace. Tall, thin, with a deep yet softly-spoken voice, he stood up and gave a presentation to an audience of programmers, academics, musicians, and builders, in a room buzzing with anticipation. The setting was a dingy studio in Hoxton, East London, prior to the full-on gentrification of that artistic neighbourhood.

Tim’s idea for a project was bold: he had no idea. Or to be more precise, his idea was to have no idea. Instead, the idea would emerge from a group. There were quizzical looks in the audience, and questions to confirm that the idea was indeed to have no idea. For such an artistically audacious proposal this was a good audience, comprised as it was of open-minded, radical, and burningly curious people. By the meeting’s end an unspoken consensus of ‘let’s give this a go’ seemed to have quietly been reached.

Tim’s faith in his concept was ultimately vindicated, since the installation that emerged from this process, Cave of Sounds, still tours to this day. Created by a core group of eight people, myself among them, it has managed to stay relevant amid a slew of socio-political and technological changes. As an artist, Tim has continued to make installations, many focusing on dance, movement, and the human body, as well as, more recently, AI.

I wanted to reflect on this last decade: to see what had been learned, what had changed, what the future might hold, and, above all else, how one goes about creating an installation.

What do you think are the most important things to consider when building an interactive installation?

First, you need some kind of development over time. I used to say narrative, though I’m not sure that is the right word anymore, but something needs to emerge within that musical experience: a pattern or structure that grows. Let’s say someone arrives by themselves, maybe alone in a room, and is confronted with something physical, material, or technological; the journey to discover what patterns emerge has begun. Even though an installation is not considered a narrative form, any interaction is always temporal.

The second has to do with agency. It’s very tempting as an artist to create a work having figured out exactly what experience you want your audience to have, and to think that it’s going to be an interactive experience even though you’ve already decided it. Then you spend all your time locking down everything that could happen in the space to make sure the experience you envisaged happens. I think if you do this you may as well have made a non-interactive artwork, as I believe the power of interactivity in art lies in the receiver having agency over what unfolds.

Therefore, I think the question of agency in music is fundamental. When we are in the audience watching music, a lot of what we get out of it is witnessing someone express themselves skillfully. Take virtuosity: it comes down to witnessing someone have agency in a space and really do something with it.

How exactly do you think about agency in relation to installations?

In an interactive installation, it’s important to consider the agency of the person coming in. You want to ask, how much freedom are we going to give this person? How broad is the span of possible outcomes? If we’re doing something with rhythm and step sequencing, are we going to quantise those rhythms so everything sounds like a techno track? Or are we going to rely on the person’s own sense of rhythm and allow them to decide whether to make it sound like a techno track or not?

It all comes down to the question of what is the point of it being interactive. While it is important to have some things be controllable, a lot of the pleasure and fun of interactive stuff is allowing for the unexpected, and therefore I find the best approach when building an installation is to get it in front of unknown people as soon as possible. Being open to the unexpected does not mean you cannot fail. An important reason for getting a work in front of fresh people is to understand how far they are getting into the work. If they don’t understand how to affect and influence the work then they don’t have any agency, and there won’t be any sense of emergence.
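Tim’s quantisation example is worth making concrete. The sketch below is purely illustrative, not code from any of his installations: snapping a visitor’s taps to a metric grid guarantees a tight, techno-like result, but it also takes away their agency over timing.

```python
def quantise(onsets_sec, bpm=120, divisions_per_beat=4):
    """Snap onset times (in seconds) to the nearest grid point."""
    grid = 60.0 / bpm / divisions_per_beat  # 0.125s: 16th notes at 120bpm
    return [round(t / grid) * grid for t in onsets_sec]

played = [0.02, 0.49, 0.77, 1.13]  # a visitor's loose, human tapping
print(quantise(played))            # [0.0, 0.5, 0.75, 1.125]
```

Dropping the `quantise` step hands the timing back to the visitor; the design question is which of the two the installation is actually about.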

Can you describe music in your childhood? You say you sang in choirs from the age of six to twelve. What was your experience of that?

At the time it burnt me out a little but I’m very thankful for it today. It was very much tied to an institution. It was very institutional music and it was obligatory. I was singing in two to three masses a week and learning piano and percussion. I stopped when I was about 13. I had a few changes in life, we moved country for a little bit and I went to a totally different kind of school and environment. It wasn’t until a few years later that I picked up the piano again, and only really in the last couple of years have I reconnected with my voice.

Your PhD seemed to be a turning point for you and a point of re-entry into music. Can you describe your PhD, and how that influenced your life?

I began doing a PhD looking at generative music, and as I was trying to figure out what the PhD would be, I had an opportunity to do a sound installation in some underground vaults at London Bridge Station with a random bunch of people from my research group. They were doing an installation there, and someone had some proximity sensors I could use. An artist had some projections going up, and I made a generative soundscape for them. Being in the space and seeing the impact of that work in a spatial context really shifted my focus. I felt quite strongly that I wanted to make installations rather than just music, and I reoriented my PhD to make it about that. I was also confronted with the gulf between expectation and reality in interactive art. I thought the interactivity was too obvious, if anything, but as I sat and watched people enter the space, most did not even realise the piece was interactive.

How do these questions sit with you today?

From an academic perspective it was a really terrible idea, because a PhD is supposed to be quite focused, and I was questioning how you can make interactive music more captivating. I had this sense in my head of what an interactive music experience could be: as immersive, durational, and gripping as a musical experience. Nearly every interactive sound work I was finding ended up being quite a brief experience – you kind of just work out all the things you can do and then you’re done.

I saw this pattern in my own work too. My experience in making interactive sound works was much more limited back then, but I saw a common pattern of taking processes from recorded music and making them interactive. My approach was to ask, ‘Well, what is music really? Why do we like it?’ All kinds of answers come up, about emerging structures, belonging, and self-expression, so the question became how we can create interactive works that embody those qualities within the interactivity itself.

What it left me with was not such a clear pathway into academia, because I hadn’t arrived at some clear and completed research finding, but what I had done was immerse myself fundamentally in trying to answer this question: how can I make captivating interactive music experiences?

What did you find?

On the question of interaction with technology, I think the most fundamental quality of technology is interaction, human-computer interaction. How is it affecting us? How are we affecting it? How does that ongoing relationship develop?

There is so much within those questions, and yet interactivity is often just tacked on to an existing artwork or introduced in a conventional way because that is how things are done. In fact, the way you do interactivity says a lot about who you are and how you see the world. How you design interaction is similar to how you make music: there are many ways, and each has a political interpretation that can be valuable in different contexts.

Who has influenced you in this respect?

The biggest influence on me at the point where I’d finished my PhD and commenced Cave of Sounds was the book Musicking by Christopher Small.

The shift in mindset goes from thinking of music as something done by musicians on a stage and received by everyone else around them, to seeing it as a collective act that everybody participates in together, such that if there weren’t an audience there to receive it, the musician couldn’t be participating in the same music.

What I found informative is to take a relativist view of different musical cultures. Whether it is a rock concert, classical concert, folk session, or jazz jam, you can think of them as different forms of the same thing, just with different parameters for where the agency lies.

For instance, if you’re jamming with friends in a circle around a table there is space for improvisation and for everybody to create sound. This has an egalitarian nature to it. Whereas with an orchestra there is little scope for the musicians to choose what notes they play, but a huge scope for them to demonstrate technical virtuosity and skill, and I don’t think there’s anything wrong with that. I love orchestral music. I think there is beauty to the coordination and power. I can see how it could be abused politically, but it’s still a thing that I feel in my body when I experience it, and I want to be able to access that feeling.

What I’m most suspicious about are stadium-level concerts: the idolisation of one individual on a stage with everyone in the crowd going emotionally out of control. It is kind of a demagogue/mob relationship. People talk about Trump rallies as if they’re like rock concerts, and it’s that kind of relationship that gets abused politically.

Cave of Sounds was created by Tim Murray-Browne, Dom Aversano, Sus Garcia, Wallace Hobbes, Daniel Lopez, Tadeo Sendon, Panagiotis Tigas, and Kacper Ziemianin, with support from Music Hackspace, Sound and Music, the Esmée Fairbairn Foundation, Arts Council England, and the British Council.

You can read more of this interview in Part 2, which will follow shortly, where we discuss the future of music as well as practical advice for building installations. To find out more about Tim Murray-Browne you can visit his website or follow him on Substack, Instagram, Mastodon, or X.

Livestream: Nestup – A Language for Musical Rhythms

Date & Time: Monday 10th May 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

In this livestreamed interview, we will speak with Sam Tarakajian and Alex Van Gils, who have built Nestup, a fantastic live-coding environment for rhythm that works inside an Ableton Live device.

The programs we use to make music have a lot of implicit decisions baked into them, especially in their graphical interfaces. Nestup began as a thought experiment: could embedding a text editor inside Live open up new creative possibilities? We think the answer is yes: text can work well alongside a piano roll and a traditional musical score, as a concise and expressive way to define complex rhythms.

With Nestup, you can define any size of rhythmic unit, any sort of rhythmic subdivision, and any scaling factor. These language features open your rhythm programming up to musical ideas such as metric modulation, nested tuplets, complex polyrhythms, and more. Rhythms that would be prohibitively difficult to program in a DAW, such as those found in Armenian folk music or ‘new complexity’ compositions, can therefore be rendered as MIDI.
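To give a flavour of the underlying idea, here is a generic sketch in Python. It is emphatically not Nestup syntax, just an illustration of how nested subdivisions flatten into the kind of onset times that would be tedious to click into a piano-roll grid.

```python
def flatten(events, start=0.0, length=4.0):
    """Flatten a nested rhythm into absolute onset times.

    `events` is a nested list; each level subdivides its parent
    span equally, so a sublist acts like a tuplet inside one slot.
    """
    onsets = []
    step = length / len(events)
    for i, e in enumerate(events):
        t = start + i * step
        if isinstance(e, list):   # a tuplet: subdivide this slot again
            onsets += flatten(e, t, step)
        elif e:                   # truthy value = a note onset
            onsets.append(t)
    return onsets

# Four beats: beat 2 carries a triplet, beat 4 a quintuplet with rests.
print(flatten([1, [1, 1, 1], 1, [1, 0, 1, 1, 0]]))
# [0.0, 1.0, 1.333..., 1.666..., 2.0, 3.0, 3.4, 3.6]
```

Entering that quintuplet-with-rests by hand in a fixed-grid piano roll would mean fighting the grid; as text it is one short line.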

Overview of speakers

Sam is a Brooklyn-based developer and creative coder; he works for Cycling ’74 and develops independent projects at Cutelab NYC. Alex is a composer, performer, and generative video artist, also based in Brooklyn.

Sam and Alex have been making art with music and code together for over ten years, beginning with a composition for double bass and Nintendo Wiimote as undergraduates, and continuing through electroacoustic compositions, live AR performance art, installation art, Max for Live devices, and now Nestup, a domain-specific language for musical rhythms.

Where to watch?

YouTube –

Livestream: TidalCycles – growing a language for algorithmic pattern

Date & Time: Thursday 20th May 6pm UK / 7pm Berlin / 10am LA / 1pm NYC

In this livestreamed interview, Alex McLean retraces the history and intent that led him to develop TidalCycles alongside ‘Algorave’ live performance events, helping to establish live coding as an art discipline.

Alex started the TidalCycles project in 2009 as a way of exploring musical patterns, and it is now a healthy free/open-source software project and among the best-known live coding environments for music.

TidalCycles represents musical patterns as functions of time, making them easy to make, combine, and transform. It is generally partnered with SuperDirt, a hybrid synthesiser/sampler created by Julian Rohrhuber using SuperCollider.
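That pattern-as-function idea can be sketched generically. The snippet below is illustrative Python, not TidalCycles’ actual Haskell API: a pattern is a function from a time span to the events inside it, which is why transforming a pattern is just wrapping one function in another.

```python
def sound(*names):
    """One cycle (0.0 to 1.0) divided equally among the named samples."""
    def pattern(begin, end):
        n = len(names)
        return [(i / n, names[i]) for i in range(n) if begin <= i / n < end]
    return pattern

def fast(factor, pat):
    """Squeeze a pattern so it repeats `factor` times per cycle."""
    def pattern(begin, end):
        events = []
        for rep in range(factor):
            for t, name in pat(0.0, 1.0):
                t2 = rep / factor + t / factor  # shift and compress in time
                if begin <= t2 < end:
                    events.append((t2, name))
        return events
    return pattern

print(fast(2, sound("bd", "sn"))(0.0, 1.0))
# [(0.0, 'bd'), (0.25, 'sn'), (0.5, 'bd'), (0.75, 'sn')]
```

Because nothing is rendered until a time span is queried, patterns stay cheap to combine and transform, which is the property the description above points to.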

Culturally, TidalCycles is tightly linked to Algorave, a movement created by Alex McLean and Nick Collins in 2011, where musicians and VJs make algorithms to dance to.

Where to watch?

Facebook –  https://www.facebook.com/musichackspace/

Overview of speaker

Alex McLean is a musician and researcher based in Sheffield, UK. As well as working on TidalCycles, he researches algorithmic patterns in ancient weaving as part of the PENELOPE project, based at the Deutsches Museum in Munich. He has organised hundreds of events in the digital arts, including the annual AlgoMech festival of Algorithmic and Mechanical Movement. Alex co-founded the international conferences on live coding and live interfaces, and co-edited the Oxford Handbook of Algorithmic Music. As a live coder he has performed worldwide, including at the Sonar, No Bounds, Ars Electronica, Bluedot and Glastonbury festivals.

Interview: Chagall

Chagall is a London-based Dutch electronic music producer, songwriter and vocalist who has been using the gestural mi.mu gloves interface since early 2015. Her performances are a physical manifestation of her electronic music productions, using the movement of her body to directly render the music live to audiences. Having recently completed a short residency with Music Hackspace, we caught up with her to discuss the residency, her productions, her inspirations and her recent showcase at London’s Rich Mix.

So to start off, do you want to explain what you’ve been doing as part of your residency with Music Hackspace?

During my Music Hackspace residency I worked on the development of my new live show, called ‘Calibration’. The performance incorporates songwriting, electronic music, mi.mu gloves, reactive visuals, choreography & lights, so I worked with a team of artists from various fields to bring all these elements together. My motivation for this project was that I had spent about a year and a half programming the mi.mu gloves as my main controller for playing my tunes, and by the time I felt like I had figured everything out technically, I was really homesick for the core of my artistic expression: music! So during the R&D we really put the meaning of the songs centre stage again, while using the technology and all the other elements to augment them. It’s been really exciting, and I think we have very successfully created a new way of performing electronic music in which technology, sound, songwriting, visual art & movement come together fluently.

In June we came to your Calibration Showcase at Rich Mix in East London. Could you explain the premise of the night? How do you think it went? To me it seemed really successful, especially given it was the night of an unexpected general election.

The Rich Mix night was really to show ‘Calibration’ to audiences in my hometown for the first time and to present the result of the R&D. I think it went really well; I was on a pink fluffy cloud all the way through it. I hadn’t expected so many people to show up, especially on election night. In hindsight, I think people enjoyed the distraction while waiting for the result, and that tension made for a very special vibe in the room. I felt like people were pretty emotional. But maybe that was just me… and my parents, haha! They were sobbing all the way through it in the front row.

How did you get involved with mi.mu Gloves? And what’s your role working with them as an organisation?

I met team members Kelly Snook and Adam Stark in 2014 when they did a mi.mu gloves workshop as part of Reverb Festival at the Roundhouse. I was so impressed with how the technology had progressed since I had first seen it on the internet that I wrote to them offering to help out with anything if they needed a ‘hand’. Lucky for me, they did! So I assisted in the very first production run of gloves and then stuck around. I now do UX on the software and do a lot of the project management together with Adam.

Something that stood out on the night was the flexibility and mobility of mi.mu Gloves, which you just wouldn’t get with other instruments or controllers. Is that the case? And is that the main benefit for you?

Yes, you’re right. The mi.mu gloves system allows you to create your own personal sound-gesture relationships, which can vary per song and even per part of a song. So the same movements can have different controls at different times, and equally you can control the same effects with different movements. Sometimes people ask me if that isn’t very confusing, and how I remember how to move, but that’s the beauty of it: if it’s hard to remember, you can just change the mappings in a way that suits you better. It’s absolutely brilliant, and in fact it has made performing electronic music the most personal way I’ve ever played music, because those sound-gesture relationships are unique to me.
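The remappable mappings Chagall describes can be pictured as plain data. The sketch below is purely illustrative, with hypothetical gesture and control names; it is not the mi.mu software, just the shape of the idea that the same gesture can drive different controls per song.

```python
# Hypothetical per-song mappings: the same gesture, different controls.
mappings = {
    "song_a": {"closed_fist": "start_loop",    "wrist_roll": "filter_cutoff"},
    "song_b": {"closed_fist": "trigger_vocal", "wrist_roll": "reverb_mix"},
}

def control_for(song: str, gesture: str) -> str:
    """Look up what a gesture should do in the current song."""
    return mappings.get(song, {}).get(gesture, "ignore")

print(control_for("song_a", "closed_fist"))  # start_loop
print(control_for("song_b", "closed_fist"))  # trigger_vocal
```

Because the mapping is data rather than hard-wired behaviour, changing it to suit the performer is an edit rather than a rebuild, which is what makes the relationships so personal.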

Another thing that stood out was how many pop bangers you’ve written. Can you name a few key influences?

Haha! Thank you. I do really believe that avant-garde pop music is highly underestimated. Sometimes it seems like you can only use technology to perform abstract & dance music, but hopefully new and more performative controllers like the mi.mu gloves will persuade more pop musicians to get into tech too. Influences? There are so many… I guess Björk is a massive one, not only her music but her whole career and way of thinking about art. But my influences are quite broad really, from newer stuff like James Blake, Tirzah and Son Lux, to Nina Simone, Joni Mitchell, Radiohead etc… I also really love African music and listen to a lot of classical too.

What’ve you got planned for the rest of the year? Any plans for an EP/Album?

I am definitely going to be releasing something this year, and I’m going on tour in October/November. Over the next few months I want to really spend some more time writing & producing new stuff for a super exciting (but still secret) project in 2018! Stay tuned…

For more from Chagall, check out her website, Facebook and Twitter.
