New Work

The goal was to create a tool that adds performative options for people using a microphone. It was important to me that this device be simple to use, flexible in capability, and potentially affordable if produced at scale later on. The rubbery buttons are simple on/off switches. A performer can feel when a button is being triggered, but pressing one doesn't produce a clicking noise (that would be bad when they are right next to the microphone!). And since this is, in its current state, a fairly straightforward MIDI controller, it is relatively cheap and its function is customizable by the end user.

The form and ergonomics were important considerations for me, and not just the ergonomics of the user's hand: the microphone being slotted into the device mattered too. Typical fabrication approaches, like screws passing through a mounting panel and out the other side, weren't an option: the inside of the cuff needed to be smooth so that it wouldn't scratch the microphone inside it.

Ultimately this points towards more sophisticated methods in the future: 3D printing and/or vacuum molding. But I wanted to get more functional testing before going down that road. I made a few functional and non-functional prototypes:

And the current state:

This is made from heat-bending acrylic around a microphone. A flat section is left available for gripping and button interaction. The grips are made from cork, which I had started to really enjoy as a working material at the time. It felt good to touch, and gave me some flexibility in my prototyping. I cut a kind of "slot" system into the cork, which allowed me to partially stuff the circuit boards into the grips. Initially, this was done as a test so that I could insert a button, try its positioning, and then move it if necessary. But it wound up being pretty solid, and has lasted much longer than I thought it would. The cork sections were then attached to the acrylic via double-sided tape. Again, meant to be temporary, but it held up through dozens and dozens of public user playtesting sessions (including a couple of overzealous children).

The button circuit boards were normal perf board cut with a bandsaw, and measured with enough extra space to allow the slot-style insertion mentioned above. I modified an enclosure with standoffs in order to facilitate easy access and wire routing for the purpose of prototyping. I'm using a Teensy LC, but didn't want to make it permanent, so I made a shield of sorts that uses terminal blocks for the wiring connections.

The Teensy LC is set up to send MIDI messages as a class-compliant USB MIDI device. This decision was made early on in order to keep usage open-ended for the user. I used Max MSP to prototype an audio performance environment in order to show some of the potential capabilities, but in the future I envision being able to use the Mic Cuff Controller as a performance interface for other avenues: lighting, visuals, or other command/control protocols that a performer with a microphone might want to have access to live. I will continue to prototype new variations on the software as well as the hardware in order to explore potential performance scenarios.
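
The controller itself runs Teensy firmware, which isn't reproduced here. As a rough sketch of what a class-compliant MIDI device actually emits per button press, here is the standard three-byte channel message layout in plain JavaScript (the function names and note/controller numbers are illustrative, not taken from the project):

```javascript
// Each MIDI channel message is three bytes: a status byte (message type in
// the high nibble, channel 0-15 in the low nibble) followed by two 7-bit
// data bytes. A mapping host (Max MSP, a DAW, lighting software) can bind
// these however the user likes, which is what keeps the device open-ended.
function noteOn(channel, note, velocity) {
  return [0x90 | (channel & 0x0f), note & 0x7f, velocity & 0x7f];
}

function noteOff(channel, note) {
  return [0x80 | (channel & 0x0f), note & 0x7f, 0];
}

function controlChange(channel, controller, value) {
  return [0xb0 | (channel & 0x0f), controller & 0x7f, value & 0x7f];
}

// A button press on channel 1 (index 0) sending middle C at full velocity:
console.log(noteOn(0, 60, 127)); // → [ 144, 60, 127 ]
```

Because the messages are plain MIDI, no custom driver or companion app is required on the receiving end.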

With that being said, there is plenty to develop in the realm of audio and music performance. For my coursework, I implemented a granular sampler that was controlled by the Mic Cuff Controller. For public testing during the ITP Winter Show, I set up different modes to test out effects that the microphone was being run through. And of course, an autotune mode. Well, it was a vocoder, but that didn't stop people from doing their best T-Pain impressions.
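
The granular sampler was built as a Max MSP patch, which doesn't translate to text here. As a hedged sketch of the underlying idea only (short, windowed grains copied from a source buffer and overlap-added into an output), in plain JavaScript with illustrative parameter values:

```javascript
// Minimal granular-sampling sketch: copy short "grains" from random points
// in a source buffer, apply a Hann envelope to avoid clicks at grain
// boundaries, and overlap-add them into the output. A live version would
// do this per audio block; this offline version just shows the mechanism.
function granulate(source, { grainSize = 256, hop = 128, outLength = 1024, rand = Math.random } = {}) {
  const out = new Float32Array(outLength);
  for (let start = 0; start + grainSize <= outLength; start += hop) {
    // Pick a random read position in the source for this grain.
    const srcStart = Math.floor(rand() * (source.length - grainSize));
    for (let i = 0; i < grainSize; i++) {
      const env = 0.5 * (1 - Math.cos((2 * Math.PI * i) / (grainSize - 1))); // Hann window
      out[start + i] += source[srcStart + i] * env;
    }
  }
  return out;
}
```

In the actual instrument, the cuff's buttons would be mapped to parameters like grain size and playback position rather than hard-coded defaults.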

I'm really happy with how this initial round of ideation, design, fabrication and programming went. I've had the idea for this in my head for a long time, so to see it come to fruition in at least an initial form gives me motivation to continue. The feedback I've gotten so far has been encouraging and helpful.

Next steps will push me to expand my production toolkit, including things like 3D printing. There are many different directions that this can go, and I'm excited to not just follow one approach but perhaps execute a few different variations just to see what is possible.

“The Impulse and the Response” is a piece consisting of two interactive sound works to be played in the browser alongside explanatory text. Two artists, Édgar J. Ulloa and Daniela Benitez, provided audio performances. A slider is available to change the amount of reverb that affects the audio that is playing. The reverb is a re-creation of the sonic qualities of the inside of the Statue of Liberty. My prompt to the artists was, “If you could do a sound performance inside of the Statue of Liberty, what would it be?” The initial version of this piece can be found here: The Impulse and the Response

Traditionally, convolution reverb is used to easily recreate the reverb of sonically pleasing spaces. Even in experimental work, non-traditional uses of convolution reverb focus on innovative sound design. I was interested in its potential for political or social commentary. Both of my collaborators describe themselves as affected by the United States' immigration policy, social attitudes towards immigration policy, and the general climate of xenophobia. They wished to make pieces addressing these issues.

Giving them the opportunity to virtually inhabit this symbolic space not only opens the potential to critique the space and its multi-dimensional symbolism, but uses the act of recreating and virtualizing the space to mirror the physical, psychological, and political state of flux that many immigrants can face.

One challenge is communicating the technical aspects of the piece. The artistic metaphor relies entirely on understanding how a convolution reverb works. However, most official definitions of convolution reverb err on the technical side and, as a result, are not always helpful to the average person. With this in mind, I decided that a brief but thorough artist’s statement explaining convolution reverb was essential.

Artistically, I struggled with how to use convolution reverb in a different context. My initial idea was about the general potential of convolution reverb as an artistic message. I had thought less about recreating spaces and more about putting audio “through” another sound. Since convolution reverb works by loading other sounds, called impulse responses, to create reverberations, you can feed it any sound for unexpected results. The sound of an arctic ice shelf cracking and falling into the ocean could be used as an impulse response in a sound piece about global warming, for example.
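
At its core, convolution reverb is just the discrete convolution of the input signal with the impulse response: every input sample excites a scaled, delayed copy of the recorded space's response. The piece itself uses the browser's audio tools, but as a naive plain-JavaScript sketch of the math (real-time implementations, such as the Web Audio API's ConvolverNode, typically use much faster FFT-based methods with equivalent results):

```javascript
// Direct discrete convolution: out[n] = sum over k of input[n-k] * impulse[k].
// Loading a recording of a space (or any sound at all) as `impulse` is what
// makes every sound "ring" with that space's character.
function convolve(input, impulse) {
  const out = new Float32Array(input.length + impulse.length - 1);
  for (let n = 0; n < out.length; n++) {
    let acc = 0;
    for (let k = 0; k < impulse.length; k++) {
      const i = n - k;
      if (i >= 0 && i < input.length) acc += input[i] * impulse[k];
    }
    out[n] = acc;
  }
  return out;
}

// Convolving with a single unit impulse leaves the signal unchanged:
// convolve([1, 2, 3], [1]) → [1, 2, 3]
```

This is also why any sound can serve as an impulse response: the algorithm makes no distinction between a measured room response and, say, a recording of cracking ice.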

After discussing the general concept with my advisor Allison Parrish, it was apparent that there was the potential for very problematic usage of this approach. Not only in procuring media as appropriation, but in the general feeling of a serious and fatal issue being boiled down into a “sound toy” that you play with. Allison’s advice to hold the technology and the message in separate spaces while meditating on potential concepts was invaluable, and led me to the current incarnation of the project. I feel that what I have is a much more positive approach that still carries a strong social message, and is empowering to artists. This revolves around the second part of my project.

The piece is a single page browser-based experience. Explanatory text guides you through what is going on in the piece and introduces you to the sound artists. However, I always intended this to be a two part project. The other half is what I've called Convolution Reverb Online.

This is the prototype of a user interface that allows anyone to create their own versions of the experience offered in "The Impulse and the Response". Users are able to upload sounds or record them live. Convolution reverbs can be created simply by uploading your own sounds, with no technical knowledge needed, and then applied to live or pre-recorded performances. With further development, my vision is to give artists more autonomy from technical collaborators.

While there are things to revisit on this project in terms of design, layout, and code cleanliness, I am happy with the results of this first approach. I am passionate about trying to reach new forms of artistic expression with technology, and I feel that the technology successfully carries the core ideas. Additionally, I'm happy I could implement this as a browser experience, for both the piece and the creation tool. This allows for a wider audience and easier access for artists. And again, much thanks to my collaborators Édgar J. Ulloa and Daniela Benitez.