Interactive Music Final: Noon to Night

My Interactive Music final project was called “Noon to Night”. First, a special thanks to Brandon Kader, the performer in this video. Our assignment suggested making a piece and an instrument that would be played by someone other than ourselves, and I greatly appreciate Brandon’s performance.

Noon to Night is an audiovisual performance, instrument, and proof of concept. At its heart, it is a timelapse of the ITP floor on April 24th, 2017. I programmed a Max/MSP patch to take a photo and record one second of audio every 15 seconds. By the end of the day, I had enough content to cover noon to night.
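The capture itself was a Max/MSP patch, but for anyone more comfortable in JavaScript, a rough sketch of the same capture loop in p5.js might look like this. Only the 15-second interval and one-second clips come from the project; everything else (webcam input, file naming) is an assumption.

```javascript
// Illustrative p5.js version of the capture loop described above.
// The actual capture was a Max/MSP patch; this is not the project's code.
let cam, mic, recorder, clip;
let shotCount = 0;

function setup() {
  createCanvas(640, 480);
  cam = createCapture(VIDEO);     // webcam stand-in for the floor camera
  cam.hide();

  mic = new p5.AudioIn();         // p5.sound microphone input
  mic.start();
  recorder = new p5.SoundRecorder();
  recorder.setInput(mic);

  // Every 15 seconds: grab one frame and one second of audio.
  setInterval(captureMoment, 15000);
}

function mousePressed() {
  userStartAudio();               // browsers require a gesture before audio starts
}

function captureMoment() {
  image(cam, 0, 0, width, height);
  saveCanvas('photo_' + shotCount, 'jpg');

  clip = new p5.SoundFile();      // empty buffer to record into
  recorder.record(clip, 1, () => {
    saveSound(clip, 'audio_' + shotCount + '.wav');
    shotCount++;
  });
}
```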

Once I had my media, I programmed the performance interface in JavaScript using p5.js and Tone.js. A visual clock interface is controlled by dragging the mouse right or left. Dragging to the right advances the time, while dragging to the left turns the clock back. The second hand runs on its own. For the current minute, four recordings play in sequence: the clips captured at 0 seconds, at 15 seconds, at 30 seconds, and at 45 seconds. The corresponding sound and captured photo are triggered as the second hand passes each position.
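The actual sketch is more involved, but a stripped-down version of the scrubbing and quarter-minute triggering could look roughly like this. The structure and names here are my own shorthand, not the project source.

```javascript
// Minimal sketch of the scrub-and-trigger idea (assumed structure, not the project code).
let clips = [];            // one Tone.Player per one-second recording, loaded elsewhere
let photos = [];           // the matching captured frames
let currentPhoto = null;
let currentMinute = 0;
let lastQuarter = -1;

function setup() {
  createCanvas(640, 480);
}

function mouseDragged() {
  // Dragging right advances the clock, dragging left turns it back.
  currentMinute = constrain(currentMinute + (mouseX - pmouseX) * 0.05,
                            0, max(0, clips.length / 4 - 1));
}

function draw() {
  background(0);
  // The second hand runs on its own; each quarter of the minute fires one slot.
  const quarter = floor(((millis() / 1000) % 60) / 15);
  if (quarter !== lastQuarter && clips.length > 0) {
    lastQuarter = quarter;
    const slot = floor(currentMinute) * 4 + quarter;
    clips[slot].start();   // play the one-second clip for this position
    currentPhoto = photos[slot];
  }
  if (currentPhoto) image(currentPhoto, 0, 0, width, height);
  // ...plus drawing the clock face and second hand, omitted here.
}
```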

There is a delay effect on the audio. Moving the mouse toward the top of the screen increases the delay time, while toward the bottom the delay time approaches zero. This adds an element of live performability and gesture that can be controlled in real time.
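In Tone.js terms, that vertical mapping amounts to a single FeedbackDelay and a map() call. A minimal sketch, using current Tone.js naming and a guessed ceiling of half a second:

```javascript
// Sketch of the mouse-Y -> delay-time mapping (the 0.5 s ceiling is an assumption).
const delay = new Tone.FeedbackDelay(0.2, 0.4).toDestination();
// Each clip's Tone.Player would be routed through it: clip.connect(delay);

function mouseMoved() {
  // Top of the screen = long delay, bottom = delay time near zero.
  const t = map(mouseY, 0, height, 0.5, 0.0, true);
  delay.delayTime.rampTo(t, 0.05);
}
```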

The approach from an instrument standpoint is to create something that is casual and almost “browsing” in nature. When turned on, it makes noise and visuals on its own. However, the act of scrubbing through time and picking out things of interest in the frames and audio snippets creates a different kind of performative engagement. This process of discovery isn’t precise, however. You cannot type in specific times, and the sensitivity of the mouse doesn’t lend itself to precision. This adds more discovery to the use of the instrument. While you might want to know what was happening at exactly 6:00PM, there could be things of more interest at 5:59PM or 6:02PM that you would never have typed in.

The conceptual approach is to think of an entire space, time, or group of people as an “instrument”. Taking a set chunk of time and using it as a tool to be manipulated and explored breaks our normal experience of time. But when treated like a block to be flipped, tapped, and rubbed in the manner of an instrument, this span of time shows us something we may not appreciate under normal circumstances: moments that might have gone unnoticed or been forgotten, and more general, bigger-picture sentiments of what it is like to be in this place with these people.

Made into an instrument, this span of time lets someone go through their own process of discovery, finding their own individual memories or broader impressions. The breaking of time becomes a tool for the user to gain a different perspective on a set time, place, and group of people.

Interactive Music Midterm 2: Gesture

“Keyrub”, by Dominic Barrett


A Tone.js DuoSynth with feedback delay and an 808 sampled drumkit

Playback control of a digital instrument via keyboard keys (QWERTY, not piano), with attention given to “rubbing”, sliding, or gliding over the keys.

Multiple keypresses can provide different musical control than individual keys on their own.

And certain keys can have more than one element of functionality (e.g. playback and control signals with one gesture).
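In current Tone.js naming, the building blocks boil down to something like the sketch below. The 808 sample paths are placeholders, not the actual project assets.

```javascript
// Core voices for Keyrub, sketched with current Tone.js naming.
// Sample file names are placeholders.
const delay = new Tone.FeedbackDelay("8n", 0.5).toDestination();
const synth = new Tone.DuoSynth().connect(delay);

const drums = new Tone.Players({
  kick:  "samples/808-kick.wav",
  snare: "samples/808-snare.wav",
  hat:   "samples/808-hat.wav",
  clap:  "samples/808-clap.wav",
}).toDestination();
```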

QWER section

Synth

The keyboard keys Q, W, E, and R control the note playback of the synth. A pattern is pre-loaded. Pressing Q makes the pattern play backwards, while R makes it go forwards. These are “Up” and “Down” pattern behaviors in the Tone API. W and E have a similar relationship, except they are of type “upDown” and “downUp”, a kind of conceptual “middle” since they are in the literal middle of Q and R.
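As a rough sketch, the whole thing can be a single Tone.Pattern whose type gets swapped by the key handler, driving the synth from the setup sketch above. The notes are illustrative, and which direction belongs to which key is one arbitrary choice here, which is exactly the mapping question discussed next.

```javascript
// Rough sketch of the QWER -> pattern-type mapping (note values are illustrative).
const notes = ["C3", "E3", "G3", "B3", "D4"];
const pattern = new Tone.Pattern((time, note) => {
  synth.triggerAttackRelease(note, "16n", time);
}, notes, "up");
pattern.interval = "8n";

// One possible assignment; see the mapping discussion below.
const directionForKey = { q: "down", w: "upDown", e: "downUp", r: "up" };

document.addEventListener("keydown", (e) => {
  if (e.key in directionForKey) {
    pattern.pattern = directionForKey[e.key];  // swap playback direction
    pattern.start();
    Tone.Transport.start();
  }
});
```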

Already there was much to consider in terms of mapping. The pattern is an array of notes, and usually we would conceptualize the “start to end” of an array as “left to right”. This is analogous to “beginning to end” as a concept, and “up to down” in Tone.js parlance. However, if we think of Q as “left” and R as “right”, what would the appropriate mapping for sequence direction be?

Does Q act as a “steering left”, where we think about the direction of the playhead being manipulated by our input? Or is “Q” the “left” starting sequence position, which then “goes forward” to the right? If we are “steering”, the pseudocode would be that the “left” key actually positions the current sequence position all the way to the *right* and then works its way *towards* the left.

And this is all ignoring the actual content of the sequence itself. Consider a series of notes that goes from lower-pitched notes to higher-pitched notes. By default, the pattern is played in the traditional and expected manner: “Up”. It starts at the beginning, and when it gets to the end it returns to the beginning position.

However, take the same pattern object and rearrange the composition of the notes to go from higher- to lower-pitched notes. While going “up” in playback direction, we are going down (without quotes) in scale. “Up” is down and “Down” is up.

Ultimately, I wound up playing with the variables and pattern until it “felt right”. But I do enjoy playing these word games with myself to reconsider certain paradigms. Blog UI, for example. It makes sense that you would show the most recent content first. But if there are two buttons at the bottom, where are they and what do they say? “Previous” and “Next”? “Forward” and “Back”? And which button is on the left and which is on the right? It is possible to have the “Next” page be from the past, or to go “Back” into the future after navigating “Forward” a few pages into the past.

And then on top of all of this I think about different languages, where sequences of words can go right to left, vertically from top to bottom, or both.

…where was I? Oh yeah, the synth.

Q+W at the same time sets the sequence to the high note and then random walks. E+R goes low and random walks. W+E does a random walk in triplets. QWER all together do a super fast upDown.
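A hedged sketch of how those combinations could be detected, building on the pattern, notes, and synth from the sketches above. The key-tracking approach is an assumption rather than the project's actual code.

```javascript
// Sketch of the multi-key combinations (key tracking is an assumption).
const held = new Set();

document.addEventListener("keydown", (e) => { held.add(e.key); applyCombo(); });
document.addEventListener("keyup",   (e) => { held.delete(e.key); });

function applyCombo() {
  const down = (...keys) => keys.every((k) => held.has(k));
  if (down("q", "w", "e", "r")) {       // all four: super fast upDown
    pattern.pattern = "upDown";
    pattern.interval = "32n";
  } else if (down("q", "w")) {          // hit the high note, then wander
    synth.triggerAttackRelease(notes[notes.length - 1], "8n");
    pattern.pattern = "randomWalk";
  } else if (down("e", "r")) {          // hit the low note, then wander
    synth.triggerAttackRelease(notes[0], "8n");
    pattern.pattern = "randomWalk";
  } else if (down("w", "e")) {          // random walk in triplets
    pattern.pattern = "randomWalk";
    pattern.interval = "8t";
  }
}
```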

And here is how we stop the synth:

ASDF keys

The keys underneath the QWER keys stop the sequence from playing. These have a dual use. The A key sets the feedback delay time to zero, S to 0.02, D to 0.08, and F to 0.16. After setting the delay time, they stop the sequence. This can add a dramatic “end” to the sequence instead of a simple stopping of sound, and it can introduce a point of performative design by rapidly starting and stopping the delays and sequences.
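A sketch of that dual use, reusing the delay and pattern from the earlier sketches; the delay values come straight from the list above.

```javascript
// ASDF: set the feedback delay time, then stop the sequence (sketch).
const delayTimeForKey = { a: 0, s: 0.02, d: 0.08, f: 0.16 };

document.addEventListener("keydown", (e) => {
  if (e.key in delayTimeForKey) {
    delay.delayTime.value = delayTimeForKey[e.key];  // set the tail first...
    pattern.stop();                                  // ...then cut the sequence
  }
});
```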

Drum Rubbing

YUIOP

Y, U, I, and O each have a sequence of either a snare and a high hat or a clap and a high hat. Instead of tapping them, they can be “rubbed” like a vinyl record on a turntable. All of the keys underneath YUIOP do nothing, giving the performer room to press and approach the drum keys in a back-and-forth motion. Sequences can be achieved by alternating keys or rubbing more than one at a time.

It isn’t perfect, and there can be missed keypresses or extra ones where none were expected. The P key is a single clap that can be used percussively, but it also resets the index of the YUIO drum sequences to zero.
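Underneath, the “rubbing” behavior amounts to stepping through a short sample list on every keydown. A sketch of the idea, with the sample assignments and the drums object from the setup sketch standing in as assumptions:

```javascript
// Sketch of the key-rubbing drums: each keydown steps its sequence by one.
// Which samples sit under which key is illustrative, not the project's mapping.
const rubSequences = {
  y: ["snare", "hat"],
  u: ["clap", "hat"],
  i: ["snare", "hat"],
  o: ["clap", "hat"],
};
const rubIndex = { y: 0, u: 0, i: 0, o: 0 };

document.addEventListener("keydown", (e) => {
  if (e.repeat) return;                        // ignore held-key auto-repeat
  if (e.key in rubSequences) {
    const seq = rubSequences[e.key];
    drums.player(seq[rubIndex[e.key] % seq.length]).start();
    rubIndex[e.key]++;
  } else if (e.key === "p") {
    drums.player("clap").start();              // P: a single clap...
    for (const k in rubIndex) rubIndex[k] = 0; // ...and reset all rub indexes
  }
});
```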

Anchor drums

ZX

Z and X start and stop bass kick and high hat patterns, respectively. Nothing in the program is quantized, so it is nice to provide something of an anchor for a performance that could otherwise become chaotic without a percussive backbone.
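A sketch of those start/stop toggles with Tone.Loop, again reusing the drums from the setup sketch; the loop intervals are my guesses.

```javascript
// Z and X toggle kick and hi-hat loops (sketch; intervals are assumptions).
// The loops follow the Transport, which the QWER sketch starts.
const kickLoop = new Tone.Loop((time) => drums.player("kick").start(time), "4n");
const hatLoop  = new Tone.Loop((time) => drums.player("hat").start(time), "8n");

function toggle(loop) {
  if (loop.state === "started") loop.stop();
  else loop.start(0);
}

document.addEventListener("keydown", (e) => {
  if (e.key === "z") toggle(kickLoop);
  if (e.key === "x") toggle(hatLoop);
});
```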

 

I was tempted to throw more keys into the mix, but I didn’t want to go off the deep end with my first pass at this concept. It would be easy to assign a huge array of different functions to every single key and every possible multi-key permutation. For one, there seem to be system-level limits on how many keypresses can be detected at any one time. But also, once I got to this point, I felt that I had something expressive, and what was missing was my own practice with the instrument rather than more features. I can imagine more than a few different takes on this basic concept, but for now I’m satisfied with the experiment.

I would also like to give credit to Herrmutt Lobby; much of their MIDI controller development work was an inspiration here.

Score and Performance

For Interactive Music, we have been asked to make a score and perform it. I will update this space later to document the performance. For now, this is where performers can get the appropriate links.

The Score: All of Us, Together

The instrument: https://alpha.editor.p5js.org/full/rJ_CJB8Kx

 

This is my simplistic take on granular sampling. I’m viewing the concept of “grains” of sound rapidly playing over and into one another as a kind of examination of individual vs. group qualities and characteristics.
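The linked instrument has the real code; as a hedged illustration of that “grains over grains” idea in Tone.js, a few overlapping GrainPlayers on recorded voices gets at it. The file names and grain settings below are placeholders, not the sketch’s actual values.

```javascript
// Illustrative only: layering several granular voices so individual
// recordings blur into one texture. Not the linked sketch's actual code.
const voiceFiles = ["voices/voice1.wav", "voices/voice2.wav", "voices/voice3.wav"];

const grains = voiceFiles.map((file) =>
  new Tone.GrainPlayer({
    url: file,
    grainSize: 0.1,        // tenth-of-a-second grains
    overlap: 0.05,         // grains bleed into one another
    loop: true,
  }).toDestination()
);

// Once the buffers load, start every voice at once and let them pile up.
Tone.loaded().then(() => grains.forEach((g) => g.start()));
```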

The score is pretty simple: ultimately it tells you to press a few buttons and make a couple of noises with your voice, then lets you sit back and relax. Currently the composition is set to two minutes long. While it plays out to completion, perhaps meditate on the following ideas:

Can you hear your own voice in the mix of sound sources?

Can you make out other people’s voices in the mix?

Would you notice if your voice was gone?

Would you notice if your voice was the only one?

When does it sound like many things individually, and when does it sound like a single source of noise?

How many people are you performing with?

How well do you know them?

Do we have to choose between the individual and the social?