Introduction to SuperCollider
SuperCollider is an interpreted programming language and environment for real-time audio synthesis, developed by James McCartney and first released in 1996. It was open-sourced in 2002, when McCartney joined Apple. Since then, SuperCollider has been adopted by a growing community of musician-developers.
In April 2012, Queen Mary, University of London joined forces with Goldsmiths and City University to organise a London symposium. The event featured artistic installations, seminars, talks, concerts and even a dubstep remix competition, with coverage from the BBC.
Shelly Knotts and Les Hutchins gave us an overview of their work with SuperCollider. They share musical data over the network and perform with the Birmingham Laptop Ensemble (BiLE). This illustrates one of SuperCollider's most interesting advantages: its client/server architecture facilitates collaboration and remote jamming. Musicians send each other data and synthesise the music locally, instead of streaming audio over the network. SuperCollider code can be so compact that Dan Stowell and others made an album, sc140, whose tracks are each a SuperCollider program that fits in a 140-character tweet!
A very interesting night indeed, with pizza and Martin’s warm Stout.
Here is what a SuperCollider environment looks like:
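As a small, illustrative sketch (the SynthDef name and parameters here are made up for the example), a typical SuperCollider session boots the server and then sends it synth definitions to play:

```supercollider
// Boot the audio server (the sclang client talks to scsynth over OSC)
s.boot;

// Define a simple instrument: a sine oscillator with a percussive envelope
(
SynthDef(\ping, { |freq = 440, amp = 0.2|
    var env = EnvGen.kr(Env.perc(0.01, 0.5), doneAction: 2);
    var sig = SinOsc.ar(freq) * env * amp;
    Out.ar(0, sig ! 2); // duplicate the signal for stereo output
}).add;
)

// Play a couple of notes on the server
Synth(\ping, [\freq, 440]);
Synth(\ping, [\freq, 660]);
```

Note how the definition lives on the server while the client only sends short messages to trigger notes, which is what makes the networked-collaboration setups described above practical.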
“A Hackday” by Pauline de Zeeuw
Pauline filmed this short documentary at the last Music Hackday in London, in December 2011. It captures the atmosphere of Music Hackdays, as well as that of the Hackspace, very well. Well done to her!
A Hackday from Pauline de Zeeuw on Vimeo.
Recycled Tunes: making hits out of Rubbish
On Thursday 22nd of March, Martin Malii-Karlsson will present “Recycled Tunes: making hits out of Rubbish”, a project that promotes sustainability and recycling to young people through music and social media.
The aim is to build a sound/lyrics/visual bank and radio of recycled material, open for people to share and mash up, and to pull in data from Twitter about people’s thoughts on sustainability.
http://recycledtunes.org/?page_id=4
Short fundraising video
Longer Pilot video
PatchWerk Radio: Generative Internet Radio
On Thursday the 15th of March, Guy John will talk and answer questions about PatchWerk Radio.
PatchWerk Radio is a generative-music internet radio station that streams constantly changing audio twenty-four hours a day, seven days a week. It is built with Pure Data at its heart, wrapped in Python to make things a bit more manageable. The project lives at http://patchwerk.rumblesan.com and all the code, including the patches themselves, can be found in its GitHub repository.
The server has a bank of patches from which it randomly chooses a new one every 10 minutes; the old patch then cross-fades into the new one and the stream carries on, uninterrupted.
The talk will aim to cover the internal architecture of the radio station, some thoughts on Pure Data as a server-side process, a brief introduction to generative music and, hopefully, the basics of creating patches to run on PatchWerk.
din is noise: a Free software musical instrument
On Thursday 1st of March, Jagannathan will present din is noise: a Free software musical instrument.
From Jag’s website:
“If Puredata and Supercollider are two synths,
din is a synth of a 3rd kind.
It forgets history,
To not repeat it.
It doesn’t hide analog music hardware,
In digital music software.”
Augmented Piano: Andrew McPherson
Andrew McPherson (Queen Mary, University of London) will present his work extending and enhancing the piano keyboard. The presentation will include a live demo of a multi-touch capacitive sensor system to detect the location of fingers on the key surfaces. The system can be installed on any acoustic or electronic keyboard, giving the player multiple dimensions of continuous control over each note.
The evening will also include a short video demo of the magnetic resonator piano (MRP), an electronically-augmented acoustic grand piano that uses electromagnets to induce vibrations in the piano strings. Both of these projects aim to preserve the traditional advantages of the piano (polyphony, tactile feedback, rich acoustic sound source) while adding new dimensions of musical control.
Traditional analogue synthesis systems (Tom Webster and Peter Foreman)
The evening (9/02/2012, 7pm) will be a retrospective look at ‘traditional’ analogue synthesis systems, with demonstrations and a comparison between a pre-configured vintage hardware analogue synthesizer (probably a Roland SH101) and a modular analogue synthesizer (a Blacet Research kit): why the vintage synth is arranged as it is, and how you would emulate this arrangement with the modular.
There will be discussion of the advantages and disadvantages of pre-configured and modular systems, of the influence these systems have had on music software packages and software synthesizers, and comparisons between analogue and digital synthesis in terms of sound quality and operation. Also, if anyone has analogue synthesizer instruments (particularly +5V CV/gate systems) that they would like to bring along, they would be most welcome – we’ll have a small mixer/speaker arrangement, and I’m sure that we can figure out a way to hook everything up 😀
Bioni Samp: Hive Synthesiser
Bioni Samp demonstrates his Hive Synthesiser on 16/02/2012, 7pm.
He will talk about its making, its modular six-oscillator design and its use in creating his experimental electronic music. ***Numerology of bees and beehive habitat patterns, hive logs and cycles are used as circuit starting points to make Music For Bees*** The talk will be followed by a short performance.
Bioni Samp is an artist, producer and video maker, originally from Leeds, Yorkshire, and currently residing in London. His electronic music has been published since 1995, with releases on various labels including Aconito (UK/Italy), EMIT (UK), Harthouse (DE), Philtre/Kompakt, Instinct, Minimalizm (USA) and Noise Music (BR).
Ariel Elkin: AriVibes, a musical augmenter for iOS
AriVibes is an iOS app designed to let users musically transform the sounds of physical objects. In other words, it is a portable, self-contained augmented-reality system that allows users to shape and control the perceived timbre of objects, so that anyone can use any object as a musical instrument.
The augmentation is achieved by altering the timbre of an object with audio effects and offering real-time control over one or more augmentation parameters.
On Thursday, January 26th, I’d like to present the research and the design concepts behind its making.
And do a live demo with some beatboxing… See a teaser here:
- I forgot to mention, I make music too!
90MIN SONGWRITING CHALLENGE 19/01/2012
Tonight we had our second songwriting challenge. Two teams of music hackers who’ve never worked together before. 90 mins to write a song….
Here’s what happened!
Team 1: Ariel, Bushra, Jean-Baptist > Lullaby
Songwriting 19 Jan 2012 by Music Hackspace
Team 2: Andrew, Paul, Ziad > hsjamery.mp3 <– download
Hsjamery – 90MIN SONGWRITING CHALLENGE 19/1/2012 team 2 by musichackspace