Temporary Expert in Junk DNA

To start off Temporary Expert, I pulled a random topic from the basket and got… Junk DNA!

I’m excited to be a four-week expert on this. I’m segmenting this first blog post into two parts: my initial, uninformed ideas about junk DNA, and then my thoughts, feelings, and follow-up after doing research.

The thing that immediately draws me to the topic is the use of the word “junk”. Why is it junk? Says who? What are the motivations for such a categorization? I assume scientific, that this might be DNA that doesn’t “do” anything. But perhaps there are financial motivations for figuring out which things are caused by which DNA, which could open up a possibility of critique.

My mind goes there because of “junk sales”, “junk bins” and so on. This term is evocative to me personally, as a bit of a hoarder, in the sense that I’m almost challenged to find something I value inside of junk. “Junk” is almost a dare. Find something, it screams. The obscure piece of electronics. The dusty but solid piece of furniture. The cloth strips or old polaroids ripe for an art project.

And of course, ITP’s famous junk shelf comes to mind. I check it at least once a week (sometimes maybe two… three… ok, four times). It is junk, but is certainly useful to everyone on the floor at one point or another. With this in mind, I made this video:

What’s In The Junk?

This was made with my (not junk!) MacBook Pro and a quick Max MSP patch:

 

These are my initial, gut reactions to junk DNA. Was I right?


First, junk DNA is known as a subcategory within “noncoding DNA“. From the wiki, the key takeaways are that noncoding DNA sequences “are components of an organism’s DNA that do not encode protein sequences,” and “When there is much non-coding DNA, a large proportion appears to have no biological function, as predicted in the 1960s. Since that time, this non-functional portion has controversially been called junk DNA.”

Oooh, controversy! Let’s look at the footnote for that.

ENCODE Project Writes Eulogy for Junk DNA

“In 2007, the pilot project’s results revealed that much of this DNA sequence was active in some way. The work called into serious question our gene-centric view of the genome, finding extensive RNA-generating activity beyond traditional gene boundaries.”

However, this article did not point me towards who even created the term “junk DNA” to begin with. A quick search led me to the charmingly old-school “junkdna.com” (Marina, what was that about the internet being an unreliable appeal to authority?), and Dr. Susumu Ohno:

http://www.junkdna.com/ohno.html

https://en.wikipedia.org/wiki/Susumu_Ohno

And even though I thought that this topic might be a departure from more of my performance and sound based interests, I found this:

From the wiki:

“The biologist, with no formal training in music, ‘decided to assign notes according to the molecular weights’ and ‘put the heavier molecules in lower positions, and the lighter molecules higher’.”

This is an amazing bit of serendipity for me, as I have been interested in isomorphisms and event scores as a way to instigate creative inspiration and connection. The above paragraph reads to me like Fluxus in a lab coat, and I love it.

I didn’t expect to find this happening in the category of junk DNA. Not sure how much more I should indulge this specific aspect, but I’ll take it as a sign that I’ve got a great topic and the next few weeks will result in fruitful and inspiring research.


Components for understanding all of this will include genomics, the differences between DNA and RNA, noncoding vs. coding vs. junk DNA, and how gene/DNA sequencing works, among many other things. (The more I read preliminarily, the more I uncover as necessary information. This list is expanding daily.)

Experts could include Dr. Ohno; the Encyclopedia of DNA Elements (ENCODE) (https://www.encodeproject.org), which is a project of the US National Human Genome Research Institute (NHGRI); and people who have challenged ENCODE’s work, like Dr. Dan Graur. His name came up in a few articles that I came across. (As one example: http://bigthink.com/paul-ratner/75-of-the-human-genome-is-junk-dna-claims-new-research)

Interactive Music Final: Noon to Night

My Interactive Music final project was called “Noon to Night”. First, a special thanks to Brandon Kader as the performer in this video. Our assignment suggested making a piece and instrument that would be played by someone other than ourselves, and I greatly appreciate Brandon’s performance.

Noon to Night is an audio visual performance, instrument, and proof of concept. At its heart, it is a timelapse of the ITP floor on April 24th, 2017. I programmed a Max MSP patch to take photos and record one second of audio every 15 seconds. At the end, I had enough content to cover noon to night.

Once I had my media, I programmed the performance interface in JavaScript using p5.js and tone.js. A visual clock interface is controlled by dragging the mouse right or left. Dragging to the right advances the time, while dragging to the left turns the clock back. The second hand advances on its own. For the current minute, four recordings play in sequence: the second of audio captured at 0 seconds, at 15 seconds, at 30 seconds, and at 45 seconds. The corresponding sound and captured photo are triggered when the second hand is in each position.

There is a delay effect on the audio. The mouse position at the top of the screen increases the delay time, while at the bottom the delay time approaches zero. This gives an aspect of live performability and gesture that can be controlled in real time.
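For a sense of how those mappings fit together, here is a minimal sketch in p5.js and Tone.js. This is not the project’s actual code; the clip-naming scheme, the preloading, and the numbers are all assumptions for illustration.

```javascript
// Hypothetical sketch of the Noon to Night mappings (not the actual project code).
// Assumes the captured one-second clips are named like "HH-MM-SS" and loaded
// into a Tone.Players instance ahead of time.
const delay = new Tone.FeedbackDelay(0.25, 0.5).toDestination();
const players = new Tone.Players().connect(delay); // would be filled with the clips

let clockMinutes = 12 * 60; // start the clock at noon, measured in minutes
let lastQuarter = -1;

function setup() {
  createCanvas(640, 480);
}

function draw() {
  // Dragging right advances the clock, dragging left turns it back.
  if (mouseIsPressed) {
    clockMinutes += (mouseX - pmouseX) * 0.05;
  }

  // Mouse height sets the delay time: top of the screen = long, bottom = near zero.
  delay.delayTime.value = map(mouseY, 0, height, 0.5, 0.001);

  // The second hand advances on its own; within the current minute, trigger the
  // clip captured at 0, 15, 30, or 45 seconds as each quarter comes around.
  const quarter = floor(second() / 15); // 0..3
  if (quarter !== lastQuarter) {
    const name = clipName(clockMinutes, quarter * 15); // e.g. "17-42-30"
    if (players.has(name)) players.player(name).start();
    // drawPhoto(name); // the matching photo from the timelapse would be drawn here
    lastQuarter = quarter;
  }
}

function clipName(minutes, seconds) {
  const h = floor(minutes / 60);
  const m = floor(minutes) % 60;
  return `${nf(h, 2)}-${nf(m, 2)}-${nf(seconds, 2)}`;
}
```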

The approach from an instrument aspect is to create something that is casual and almost “browsing” in nature. When turned on, it makes noise and visuals on its own. However, the act of scrubbing and analyzing the time and the things of interest in the frames and audio snippets creates a different kind of performative engagement. This process of discovery isn’t precise, however. You cannot type in specific times, and the sensitivity of the mouse itself doesn’t lend itself to precision. This adds more discovery to the use of the instrument. While you might want to know what was happening at 6:00PM exactly, there could be things of more interest at 5:59PM or 6:02PM that you would not otherwise have found by typing in a time.

The conceptual approach is that of thinking of an entire space, time, or group of people as an “instrument”. Giving a set chunk of time to use as a tool to be manipulated and explored breaks our normal experience of time. But when treated like a block to be flipped, tapped and rubbed in the manner of an instrument, this span of time shows us something we may not appreciate under normal circumstances: moments that might have gone unnoticed or been forgotten, and more general, bigger-picture sentiments of what it is like to be in this place with these people.

Making it into an instrument allows someone to have their own process of discovery, finding their own individual memories or broader impressions. The breaking of time is a tool for the user, a way to gain a different perspective on a set time, place, and group of people.

Readymades: Video and Final Project

My final project for Readymades was a piece called “My Phone Your Phone”. This is an interactive installation that requires two users to simultaneously connect to a local web server on their phones and navigate to a web page. This page offers a few variations on simplistic animations that feature the words: My Phone Your Phone Mine Ours.

There are two chairs on either side of the installation. At each seat there is a tablet displaying the following instructions:

Instructions screenshot

Once two users are detected, the illuminated color at the center of the installation will begin to shift. Slowly, the table space in front of each seat will be illuminated with a projected video feed. A small wired camera sits on each side of the table, and what it sees is projected onto the table.

The table itself is covered in a bedsheet. I chose a sheet textured conspicuously like a piece of bedding, as opposed to cloth that might be misconstrued as a neutral background. The tablets are cradled in matching pillow covers. The rest of the table includes: a triangle- and a circle-shaped mirror, an iridescent see-through acrylic holder around which the wired cameras are wrapped, a pile of mirrored letters containing all the letters from “MY PHONE YOUR PHONE” and their negative spaces, and a two-foot-long acrylic mirror with the phrase “MY PHONE YOUR PHONE” etched onto its face.

In the middle is a frame, made from mirrored acrylic. It is three feet high, in a “portrait” orientation, and its base is under the sheets, seemingly emerging from them. At the base of this large mirrored frame is a pillow. Under this pillow and under the sheets is the lighting system.

The technical setup involves a machine running Max MSP. This machine is connected to an Arduino that controls a strip of NeoPixel RGB LEDs. The wire cameras are USB devices, brought into the Max patch and mapped/positioned onto the table surface using the cornerpin object. Mira frame objects are placed over the animated images, which are composed of multisliders and text objects. Miraweb allows these animations to be accessed by users of the installation, and also keeps track of how many users are connected. Once two users are detected, the video feeds of the USB cameras are faded in and the lighting of the installation changes. Once fewer than two users are detected, the feeds fade out and the light returns to its ambient resting state.
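The Max patch itself isn’t something I can paste here as text, but the control flow amounts to a small state machine. Here is an illustrative JavaScript sketch of that logic; all of the names, the fade length, and the stubbed functions are stand-ins, not anything from the actual patch.

```javascript
// Illustrative sketch of the installation's control flow. The real logic lives
// in the Max MSP patch; every name and number below is a stand-in.
let connectedUsers = 0; // Miraweb reports how many phones have the page open
let feedOpacity = 0;    // 0 = camera feeds hidden, 1 = fully projected

function onUserCountChanged(count) {
  connectedUsers = count;
}

// Called once per frame with the elapsed time in seconds.
function update(dt) {
  const target = connectedUsers >= 2 ? 1 : 0; // fade in only while two users are present
  const step = dt / 5;                        // roughly a five-second fade
  if (feedOpacity < target) feedOpacity = Math.min(target, feedOpacity + step);
  else feedOpacity = Math.max(target, feedOpacity - step);

  setProjectionOpacity(feedOpacity);                     // the cornerpin'd camera feeds
  setLightState(feedOpacity > 0 ? "active" : "ambient"); // the LED strip via the Arduino
}

// Stand-ins for the parts handled by Max and the Arduino.
function setProjectionOpacity(opacity) { /* drive the projected video layer */ }
function setLightState(state) { /* send the lighting state over serial */ }
```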

 


This project is the end result of two previous approaches. Initially, I had been considering the concept of your phone as a sexual object. I word this specifically as *your* phone, because I didn’t want to simply say that “phones are sexual objects”. The theme I wanted to stick to throughout each incarnation of this project was that the readymade object is *your* phone. The mass produced object that I wish the audience to consider is the one that they call their own, which will be carried away from the piece itself.

A phone can be literally sexual in multiple ways: sexting, a repository of sexual images of ourselves and our lovers, a way to communicate with lovers, a way to find lovers, a way to consume pornography. But it has more general descriptors of sexual characteristics as well: an object that is private, an object that you may hesitate to arbitrarily hand to a stranger, the thing in your pants that has your personal flora all over it, a method of connecting you to those you love, a voyeuristic venue, an exhibitionist venue, a kind of addiction, an object of self identity and self pleasure that can involve others to varying degrees.

In play testing different iterations of the build, there were plenty of good lessons learned about technical hurdles and practical approaches to user interaction for artistic ends. In discussions and critique with the class, I found myself either being far too obvious with sexual metaphor and imagery, or over-correcting with something more subtle and perhaps losing the message. But it was this last concept that stuck with me: an object of self identity and self pleasure that can involve others to varying degrees.

I had considered installations that would hopefully guide two people to use their cellphones to act in concert with each other, while leaving a sort of ambiguity as a space for discussion between the participants as to what they should be doing. This, in my mind, might create a natural consent exercise: two people negotiating and collaborating together for mutual satisfaction, or “making something beautiful together”.

But after being prompted to keep thinking about what the piece is really about, I wondered if my take was perhaps a view of 20th-century Internet utopianism: the network as inherently collaborative, the intrinsic “togetherness” of networked machines as a deterministic march towards a progressive, interdependent future.

However, in the 21st century, as a cellphone society we aren’t all uniformly “making something beautiful together” in line with these aspirations. Why should a piece about cellphones indicate otherwise? Still, I’m reticent to throw my viewpoint into full reverse. Transcendentalist redux, a 2017 Black Mirror cyberpunk rehash, and simple pop culture contrarianism all lack the sense of complexity and subtlety I believe the situation requires. Common prescriptions for our cultural cellphone ailments and fantastic predictions from Silicon Valley’s 1990 crystal ball both share a tendency to fix in place what cellphones are used for, what they do to us, and why we want to use them.

So the meditation that resonated with me is: what is a cellphone for? What is my cellphone about? And yours?

Is a cellphone used more to view others? Or transmit images of the receiver? In either case, are the underlying reasons selfish or communitarian?

Is a cellphone a selfie machine? A monitoring device? A library card to an electronic library of Alexandria? An escapist distraction?

Is a cellphone a mirror or a frame?

The reductive answer to these questions is that each person is different and uses their cellphone for their own reasons, even if those reasons may be unconsidered. And furthermore, those reasons may oscillate for any given user multiple times throughout the day.

In abstracting this concept, I decided to make a physical interpretation of a cell phone. A frame that is also a mirror that sits between you and another person. I have been describing it as a deconstruction of a cell phone that attempts to create a metaphorical play space.

The different elements laid out in front of the participant can be categorized in themes of cell phone usage. Viewing of the self, through mirrors. Viewing of another and their activity, through a frame. A tool to monitor, through a controllable camera. A place to play, using all items to create entertaining images with the camera.

Small, charm-like letters viewed through the camera can be thought of in the context of written text communication shared via screens. But because they can also be freely rearranged, they are perhaps a consideration of online remix culture. Certain aspects such as this only occurred to me after I had fabricated certain parts, played with the piece myself, and invited others to do the same.

This is in line with a broader goal that I have with this piece. I don’t want to say explicitly: the small letters are a metaphor for online remix culture, and thus this must be properly communicated to the audience.

I had certain ideas that were fixed, like the central mirror frame and your cellphones in the middle. But my hope was to continually consider a general “cellphone-ness” and imbue all of my artistic decisions with this attitude as a method of creation. If I kept in this mental mode, I hoped to not only create the metaphors I intended, but also have other relevant metaphors emerge. For example, during fabrication I was struck by the realization: how can I talk about cell phones and not have a text element in the piece? With a pile of laser-cut letters around while I worked on other things, fellow students were happy to play with them. Names, swears, nonsense gibberish and geometric patterns came and went with delight. Remixing as a theme then occurred to me.

And in this spirit, I wanted the end piece to have a playful quality with emergent themes. If people use cell phones in their own way, they should use the piece in their own way as well. To some, there may be more themes of reflection and perhaps narcissism. To others, there may be a simple entertainment of camera manipulation; a crazy, shiny, digital nonsense world for two. I wanted to see if themes could emerge because of a commitment to the metaphor, where the topic of the piece isn’t just the subject but also the way in which the piece should be created.

The only necessary thematic anchor, the phone of the audience, is placed in the center of the piece. My hope was to make it almost altar-like, yet also intimate, two phones resting on a glowing pillow surrounded by soft sheets in the middle of everything. If this can be the literal and metaphorical center of the piece, hopefully all meditations and ruminations on the piece and what is happening can be placed in the phone context.

My blue sky scenario for this piece would be to have someone understand the cell phone context, engage in meditation on that theme, and share a metaphor that had not occurred to me but still resonates completely.

 


I’m not sure if I have fully achieved my goals in these conceptual regards, though. Play testing this piece will be very important in terms of refinements and changes. But I am hopeful, as many of the things I have noticed, or that have been called out to me, are simple changes: swapping the projection feed positions, changing the size of the table, removing certain unnecessary features, and adding more explicit cell phone related indicators to more successfully communicate the theme.

Readymades: Emotional Object Story

A story written as a prompt for this project:

It had been a long journey. A long life, really. There was plenty of time ahead, but there was no getting around the “before” and “after” that so clearly marked this plastic bag’s existence. Before the ocean and after.

Plastic bag didn’t remember much of being born or its earliest days. Who does, really? But the first memory of being pulled and fully exposed to bright light and bare air would always stick with the plastic bag. Previously stacked nearly two-dimensionally flat against its brethren, it hadn’t been used to the world or much of its three dimensions. And all of a sudden, hands, weight, swinging, knocking. It had held some cans, vegetables, snacks and a receipt. It was one of the few bags that had more than one use. It had been repurposed to transport leftovers before being thrown into the trash. In its first trash home, it shared some time next to those leftovers. There were some other bags, scraps of food, napkins, twist ties, rubber bands and coffee stirrers.

Its life space had collapsed again, however not into the orderly two-dimensional sheets of its earlier life. It was now cramped in three dimensions. And it was also handled, but not the same way. A bag within a bigger bag, in a pile of bigger bags. Moved in a bin, a box, a truck, a ship. Motion was vague, detectable but far off. Muted tones and slight jostles marked legs of the journey that the bag could not see.

Too many times to count. And it didn’t matter, really. Most of it was simply dark and small. The bag didn’t mind. There was something to like about certain spaces, certain trash friends that would come and go.

But then there was the ocean. There was brightness and darkness. A free floating three dimensions but the reassuring pressure of the water surrounding it. And the fellow bags. The tides, their weight, the complicated liquid dynamics, all conspired to bring the bags together. Physics itself almost seemed to bend to make sure that the flock of bags would be together, slowly and luxuriously swimming in the middle of a vast ocean.

“Do you remember the hands?” they would ask each other, reminiscing. Sometimes they would twist into each other and play games. Other times they would simply be silent, and sway with the undulating current. But they were never mad or sad. They enjoyed sharing the memories as much as sharing the present. And when, occasionally, a bag or two peeled off from the pack, they were always happy and would wish each other good travels. “Goodbye! Thank you for everything! Tell them all about us here in the ocean!” they would yell as the departing bags approached the horizon.

There was before the ocean and after the ocean. Plastic bag liked the ocean very, very much.

 


For my assignment I want to attempt a “serene plastic bag” (and yes, attempt to avoid the “American Beauty”-ization of the piece) with audio and visual elements that are calming. A collection of plastic bags, lit by blue light and slowly moving. The slow movements crinkle the bags, hopefully creating an ocean-like white noise.

The natural observation here is to recognize ecological damage posed by plastic bags in the ocean. I don’t want to confront this head on. In attempting to sanitize the aesthetics, I hope to create something that could be appealing on its own without any interpretation, but then can slowly reveal a grim reality upon noticing the details. Additionally, by creating a “fake” ocean, these bags avoid going into the actual ocean. Their (potential) beauty can serve an ecological purpose.

Readymades: Sound Object

For our second Readymades assignment, we were tasked with creating a “Sound Object”. Our readymade was to be given a personality using only sound as an output. Max was to be used as the platform for making the sounds.

After thinking constantly about the idea of a readymade (and seeing them everywhere), I decided I wanted to use a wicker basket that I owned. The basket had some compelling properties to me. It is stiff, glazed with some kind of plastic to make it sturdy, and somewhat sharp in spots. But it also looks natural, has a warm color, and I habitually run my hands across it to make different noises.

Lately I’ve also been thinking about “mapping” sensory inputs in different ways that could produce interesting results. For example, consider that your ears are a certain distance apart from each other. Now imagine if you placed two microphones a similar distance apart. If you increased the distance between the microphones, you might be simulating what it was like to hear when your head was that much larger. If you reduced the distance to half, or a quarter, you might be perceptually “shrinking” yourself by that amount. I decided to use these thoughts as a prompt for my sound object assignment.
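One rough way to put numbers on that idea: the arrival-time difference between two ears (or microphones) for an off-axis sound grows in proportion to their spacing. A back-of-the-envelope sketch using the common d · sin(θ) / c approximation; the spacings here are made up for illustration.

```javascript
// Back-of-the-envelope interaural time difference (ITD) for a sound source off
// to one side, using the simple d * sin(angle) / c approximation.
const SPEED_OF_SOUND = 343; // meters per second at room temperature

function itdMilliseconds(spacingMeters, angleDegrees) {
  const angle = (angleDegrees * Math.PI) / 180;
  return ((spacingMeters * Math.sin(angle)) / SPEED_OF_SOUND) * 1000;
}

console.log(itdMilliseconds(0.18, 90).toFixed(2)); // ~0.52 ms: roughly ear-width spacing
console.log(itdMilliseconds(0.72, 90).toFixed(2)); // ~2.10 ms: mics 4x farther apart,
                                                   // as if the "head" were 4x larger
```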

Multiple microphones are placed inside of the basket. These microphones feed into a multichannel audio I/O Max patch, which processes and routes the microphone input to multiple speakers positioned outside of the basket and around the viewer. A pre-recorded loop of me rubbing, tapping, knocking, and playing with the basket plays until the microphones detect noise. When noise is detected, the loop stops playing, and the microphones are positionally routed to the speakers.
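The gating behavior is essentially a level threshold. Here is a minimal Web Audio sketch of the same idea, for illustration only; the real piece is a multichannel Max patch, and the file path and threshold below are placeholders.

```javascript
// Minimal Web Audio sketch of the basket patch's logic: a pre-recorded loop
// plays until the microphone input crosses a level threshold, then the live
// mic is routed out instead. (The real piece is a multichannel Max patch.)
const ctx = new AudioContext();

async function start() {
  // Pre-recorded loop of basket sounds (path is a placeholder).
  const buffer = await fetch("basket-loop.wav")
    .then((r) => r.arrayBuffer())
    .then((b) => ctx.decodeAudioData(b));
  const loop = ctx.createBufferSource();
  loop.buffer = buffer;
  loop.loop = true;

  const loopGain = ctx.createGain(); // loop starts audible
  const liveGain = ctx.createGain(); // live mic starts muted
  liveGain.gain.value = 0;
  loop.connect(loopGain).connect(ctx.destination);
  loop.start();

  // Live microphone input and an analyser to measure its level.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const mic = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  mic.connect(analyser);
  mic.connect(liveGain).connect(ctx.destination);

  const samples = new Float32Array(analyser.fftSize);
  const THRESHOLD = 0.05; // placeholder level, tuned by ear

  setInterval(() => {
    analyser.getFloatTimeDomainData(samples);
    const rms = Math.sqrt(samples.reduce((s, x) => s + x * x, 0) / samples.length);
    const active = rms > THRESHOLD; // someone is touching the basket
    loopGain.gain.setTargetAtTime(active ? 0 : 1, ctx.currentTime, 0.1);
    liveGain.gain.setTargetAtTime(active ? 1 : 0, ctx.currentTime, 0.1);
  }, 50);
}
```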

My attempt is to prompt a meditation on a “box within a box” infinite regression. When making noise, you become aware that something inside the box hears what is going on outside. You hear these noises as they are positioned around and above you in a square configuration. While an out-of-body experience can also be a meaningful appreciation of the piece, my true attempt was to invoke the realization that the viewer is also in a box (the room). This box is also in a larger box, the building, and so on, conceptually stretching outwards into the concept of space itself.

There were some technical challenges in creating the piece with regard to sourcing the proper microphones and speakers and calibrating the noise levels in the Max patch. I am happy with this first pass, however. If an opportunity to further refine the concept presents itself, I will have a solid knowledge base to build on.

Interactive Music Midterm 2: Gesture

“Keyrub”, by Dominic Barrett


A Tone.js DuoSynth with feedback delay and an 808 sampled drumkit

Playback control of a digital instrument via keyboard keys (QWERTY, not piano), with attention given to “rubbing”, sliding, or gliding over the keys.

Multiple keypresses can provide different musical control than individual keys on their own.

And certain keys can have more than one element of functionality (e.g. playback and control signals with one gesture).

QWER section

Synth

The keyboard keys Q,W,E, and R control the note start of the synth. A pattern is pre-loaded. Pressing Q will make the pattern play backwards, while the R will make it go forwards. These are “Up” and “Down” pattern behaviors in the Tone API. W and E have a similar relationship, except they are type “upDown” and “downUp”, a kind of conceptual “middle” since they are in the literal middle of Q and R.
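In Tone.js terms, the QWER row just switches the type of a running Tone.Pattern. A rough sketch of that mapping follows; the note list and synth settings are placeholders, not the values used in Keyrub.

```javascript
// Sketch of the QWER mapping: each key switches the direction ("type") of a
// running Tone.Pattern driving the DuoSynth. Notes and settings are placeholders.
const delay = new Tone.FeedbackDelay(0.08, 0.4).toDestination();
const synth = new Tone.DuoSynth().connect(delay);

const pattern = new Tone.Pattern(
  (time, note) => synth.triggerAttackRelease(note, "16n", time),
  ["C3", "E3", "G3", "B3", "D4", "F4"],
  "up"
);

const directions = { q: "down", w: "upDown", e: "downUp", r: "up" };

document.addEventListener("keydown", (event) => {
  const dir = directions[event.key];
  if (dir) {
    pattern.pattern = dir; // Q plays backwards, R forwards, W/E bounce in the middle
    if (pattern.state !== "started") pattern.start(0);
    Tone.Transport.start();
  }
});
```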

Already there was much to consider in terms of mapping. The pattern is an array of notes, and usually we would conceptualize the “start to end” of an array as “left to right”. This is analogous to “beginning to end” as a concept, and “up to down” in Tone.js parlance. However, if we think of Q as “left” and R as “right”, what would the appropriate mapping for sequence direction be?

Does the Q act as “steering left”, where we think about the direction of the playhead being manipulated by our input? Or is “Q” the “left” starting sequence position, which then “goes forward” to the right? If we are “steering”, the pseudocode would be that the “left” key actually positions the current sequence position all the way to the *right* and then works its way *towards* the left.

And this is all ignoring the actual content of the sequence itself. Consider a series of notes that goes from lower pitched notes to higher pitched notes. The pattern is played by default, in the traditional and expected manner: “Up”. It starts at the beginning and when it gets to the end it returns to the beginning position.

However, take the same pattern object and re-arrange the composition of the notes to go from higher to lower pitched notes. While going “up” in playback direction, we are going down (without quotes) in scale. “Up” is down and “Down” is up.

Ultimately, I wound up playing with the variables and pattern until it “felt right”. But I do enjoy playing these word games with myself to re-consider certain paradigms. Blog UI for example. It makes sense that you would show the most recent content first. But if there are two buttons at the bottom, where are they and what do they say? “Previous” and “Next”? “Forward” and “Back”? And which button is on the left and which is on the right? It is possible to have the “Next” page be from the past, or go “Back” into the future after navigating “Forward” a few pages into the past.

And then on top of all of this I think about different languages, where sequences of words can go right to left, vertically from top to bottom, or both.

…where was I? Oh yeah, the synth.

Q+W at the same time sets the sequence to the high note and then random walks. E+R goes low and random walks. W+E does a random walk in triplets. QWER all together do a super fast upDown.

And here is how we stop the synth:

ASDF keys

The keys underneath the QWER keys stop the sequence from playing. These have dual use. The A key sets the feedback delay time to zero, S to 0.02, D to 0.08, and F to 0.16. After setting the delay time, they stop the sequence. This can add a dramatic “end” to the sequence instead of a simple stopping of sound, and can introduce a point of performative design by rapidly starting and stopping the delays and sequences.
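Extending the hypothetical QWER sketch above, the ASDF row would look roughly like this:

```javascript
// Continuation of the QWER sketch: the ASDF keys set the feedback delay time
// (the "tail" the stop leaves behind) and then halt the sequence.
const delayTimes = { a: 0, s: 0.02, d: 0.08, f: 0.16 };

document.addEventListener("keydown", (event) => {
  const t = delayTimes[event.key];
  if (t !== undefined) {
    delay.delayTime.value = t; // dramatic tail (or none) when the sequence ends
    pattern.stop();            // then stop the running pattern
  }
});
```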

Drum Rubbing

YUIOP

YUIO each have a sequence of a snare and a high hat or a clap and a high hat. Instead of tapping them, they can be “rubbed” like a vinyl record on a turntable. All of the keys underneath YUIOP do nothing, giving the performer a chance to press and approach the drum keys in a back and forth motion. Sequences can be achieved by alternating keys or rubbing more than one at a time.

It isn’t perfect, and there can be missed keypresses or extras where none were expected. The P button is a single clap that can be used percussively. But it also resets the index of the YUIO drum sequences to zero.
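A sketch of how the rubbing behavior might work: each press of Y/U/I/O advances its own tiny sequence by one step, so sliding across the keys plays through the sequences, and P is the one-shot clap that resets everything. Sample names and step contents below are placeholders.

```javascript
// Sketch of the drum "rubbing" idea: every press of Y/U/I/O advances a tiny
// two-step sequence by one step, so sliding a finger across the keys plays the
// sequence. P is a one-shot clap that also resets the indices.
const drums = new Tone.Players({
  kick: "808/kick.wav",
  snare: "808/snare.wav",
  hat: "808/hat.wav",
  clap: "808/clap.wav",
}).toDestination();

const rubSequences = {
  y: ["snare", "hat"],
  u: ["clap", "hat"],
  i: ["snare", "hat"],
  o: ["clap", "hat"],
};
const rubIndex = { y: 0, u: 0, i: 0, o: 0 };

document.addEventListener("keydown", (event) => {
  const seq = rubSequences[event.key];
  if (seq) {
    drums.player(seq[rubIndex[event.key] % seq.length]).start();
    rubIndex[event.key] += 1;                  // the next press plays the next step
  } else if (event.key === "p") {
    drums.player("clap").start();              // percussive one-shot...
    for (const k in rubIndex) rubIndex[k] = 0; // ...that also resets the rub
  }
});
```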

Anchor drums

ZX

Z and X start and stop the bass kick and high hat patterns, respectively. Nothing in the program is quantized, so it is sometimes nice to provide somewhat of an anchor to a performance that could otherwise be much more chaotic without a kind of percussive backbone.

 

I was tempted to throw more keys into the mix, but didn’t want to go off the deep end with my first pass at this concept. It would be easy to assign a huge array of different functions to every single key and every possible multi-key permutation. First, there seem to be certain system-level limits on how many keypresses can be detected at any one time. But also, once I got to this point, I felt that I had something expressive, and what was missing was my own practice with the instrument rather than more features. I can imagine more than a few different takes on this basic concept, but for now I’m satisfied with the experiment.

I would also like to give credit to Hermutt Lobby for inspiration with much of their midi controller development work.

Score and Performance

For Interactive Music, we have been asked to make a score and perform it. I will be updating this space later on to document the performance. For now, this will be a space where performers can get the appropriate links.

The Score: All of Us, Together

The instrument: https://alpha.editor.p5js.org/full/rJ_CJB8Kx

 

This is my simplistic take on granular sampling. I’m viewing the concept of “grains” of sound rapidly playing over and into one another as a kind of examination of individual vs. group qualities and characteristics.
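For anyone curious what I mean by grains, here is a minimal granular-playback sketch in plain Web Audio. It is not the instrument linked above, just an illustration of short overlapping grains pulled from random offsets in a shared buffer of voices; the durations and rate are arbitrary.

```javascript
// Minimal granular-playback sketch: short overlapping "grains" are pulled from
// random offsets in a recorded buffer of everyone's voices.
const ctx = new AudioContext();
let voicesBuffer; // would hold the participants' recorded voices as an AudioBuffer

function playGrain() {
  if (!voicesBuffer) return;
  const grain = ctx.createBufferSource();
  grain.buffer = voicesBuffer;

  const gain = ctx.createGain();
  grain.connect(gain).connect(ctx.destination);

  const duration = 0.08 + Math.random() * 0.12; // 80-200 ms grains
  const offset = Math.random() * (voicesBuffer.duration - duration);

  // Quick fade in/out so the grains blur into one another instead of clicking.
  const now = ctx.currentTime;
  gain.gain.setValueAtTime(0, now);
  gain.gain.linearRampToValueAtTime(0.5, now + duration / 2);
  gain.gain.linearRampToValueAtTime(0, now + duration);

  grain.start(now, offset, duration);
}

// Many grains per second, each from a different voice and moment.
setInterval(playGrain, 30);
```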

The score is pretty simple, as it is ultimately telling you to press a few buttons and make a couple noises with your voice, then letting you sit back and relax. Currently the composition is set at 2 minutes long. While it plays out to completion, perhaps meditate on the following ideas:

Can you hear your own voice in the mix of sound sources?

Can you make out other people’s voices in the mix?

Would you notice if your voice was gone?

Would you notice if your voice was the only one?

When does it sound like many things individually, and when does it sound like a single source of noise?

How many people are you performing with?

How well do you know them?

Do we have to choose between the individual and the social?

Towers of Power 3: Exploring Spectrum with SDR

We started playing with software defined radios! I’ve always been curious about them, and the in-class demonstration piqued my interest even more:

Broadcasted cats! Courtesy of Dhruv’s two minute pop-up radio station at WITP 1015.2 FM, sponsored by internet memes and cats.

We were asked to look around on the spectrum and find “something interesting”. I thought it might be good to figure out my limits to start. GQRX is the application that interfaces with the SDR USB dongle. It seemed to automatically stay within the limits of the hardware, between about 23 MHz and 1,740 MHz. Starting from the bottom and scanning upwards, the first signal I spotted was at 23.583 MHz:

The recording interface works pretty well in GQRX, but some of the signals were pretty quiet, especially compared to the static that surrounds them. I’ve posted some recordings, unedited aside from normalization processing to boost the quiet audio to audible levels:

 

Moving upwards, I spotted a larger waveform and a conspicuous spike on the signal scope. I needed to adjust the receiving bandwidth to properly hear what was going on:

I wasn’t expecting music this clear so low on the spectrum, as traditional FM broadcasting does not go as low as 24.539 MHz. This could be something like broadcast muzak for an elevator, building, or shop.

I decided to switch things up, and start from the highest frequency and then work down. There was a lot of static at first, but then I spotted a possible signal:

At first, it sounded like nothing. Not nothing like just static, but nothing like no noise tucked inside of a lump of static. I was going to move on to the next signals when I thought I heard something.

People talking? I looked around and realized that it was a briefly delayed version of what was happening in the room I was in. I had to boost it REALLY loud for you to hear it as it is now. I tried refining the SDR settings and searching nearby bands:

I thought I was going crazy, but I was sure I was hearing audio of the room I was in. Luckily Dhruv was there to consult, and he suggested that the wireless mic setup on the floor could possibly be the cause. I shared this with Grau, who may be expounding on it in his post.

While I was talking with Dhruv, police officers came onto the floor. No one was sure why they were there, so he suggested tuning into their radios to find out. He showed me the following website to find the frequency for the local police radio band: http://www.radioreference.com

It was interesting to hear; it almost seems like the signals switch back and forth between bands. I’m not sure if that was two different devices each sending on their own channel, or perhaps some kind of channel-hopping system.

It was fun going in blind on the frequencies. I had decided on purpose not to google anything at first, but RadioReference is really interesting and I’ll be searching around for fun frequencies to look at. Especially things like wireless mics or cameras!

Readymades: Sound Object Prototyping

For Readymades, we started an assignment for making a sound based readymade art piece. The goal is to give an object personality using only sound as an output. I chose this basket:

 

I’m interested in using this basket for two reasons: highlighting the unknown and overlooked properties of the basket, and then using the experience to have people think about their own body and space as a result.

The basic tech diagram:

The broader idea is to think about sound in relation to spatial awareness and understanding. The wicker basket is hard, lacquered, and kind of sharp in some spots. However, with a close enough microphone, other properties emerge.

For the initial proof of concept I used a normal microphone with a rudimentary USB audio interface. With the mic right up against the side of the basket, even the lightest touch was heard. Things that are normally inaudible become very loud. And they sound soft, fuzzy, and kind of warm.

I wound up taking the recording and making a Max patch as a rudimentary simulation. This video works best with headphones, or conspicuously placed stereo speakers.

I’ll speak more in class about some of my artistic goals. But I am interested in a kind of “mapping” or “scaling” of sensory input of the audience. Imagine the distance between your two ears. Now imagine they were microphones with nothing between them. If you moved these microphones closer or further apart, you could be simulating what it would be like to hear if your head was larger or smaller. (Rough simulation, of course). I’ve been thinking of this idea in terms of eyes and cameras for a while, but managed to realize the sonically equivalent approach for this project.

This is combined with a few other questions. Will the immersion into the basket sounds make the audience feel as if they *are* the basket, or *inside* of the basket? If they feel that they “are”, then would it feel like they are touching their own heads? If they feel they are “inside”, what kind of box (or basket) are they actually inside of?

Talking with Manning over the weekend also gave me a few ideas of playing with the sense of scale. But in general I want to experiment with a possible “out of body experience” approach and see what qualities are most compelling.

Towers of Power 2: OpenVPN

Our assignment was to get OpenVPN working on a device, connected to a pre-setup server with keys generated for us.

I installed openvpn and ssh on my VirtualBox Ubuntu machine pretty easily. Using ps aux | grep ssh allowed me to see that ssh was running. I have a little bit of experience with the bash command line, so things were coming back to me and I was starting to get comfortable.

Long story short, one of the steps in the directions threw us off. The wording was a little confusing, and a few of my classmates and I spun off in different directions trying to read documentation and figure out if we were doing things correctly. It feels like no big thing when written out in a short paragraph, but we spent the better part of two hours trying to figure out “where the client.conf file was supposed to be generated?!” Turns out a separate email had been sent telling us to download an example from a GitHub page. D’oh.

 

Don’t want to ramble about my troubles, but there are all kinds of fun hurdles when you aren’t truly experienced in the command line.

But if I needed to sum it up:

https://xkcd.com/149/

 

Though, I finally got it working:

Shout out to Mithru, who was a huge help. The command line is a cold place. Take a friend with you when possible!

 

The reading was Built to Last, chapters 3 and 4. Glad we’re coming back to this book in the assignments. “Preserve the core and stimulate progress” was the takeaway for these two chapters.

Again, I’m not coming from a business background. But I am getting some good motivation and high level ideas from this book. While there are points where I think that the author dwells a little too long on one idea when it is probably safe to move on, I wind up appreciating the direct and emphasized summary of the philosophy. These aspects seem like they should be easy to grasp, come up with, and follow. But I’m sure many companies have thought that, and failed because they may not have truly adhered to the “core and progress” model in a way that is sufficiently fanatical.

I see parallels with personal art practices and life goals as well. But even disregarding a translation into other areas I am interested in, it is nice to see these types of thoughts floating around in a business book. It is tempting to be cynical about business. I’m glad to see an earnest and potentially positive take on some of these issues.

Plenty to think about. Not sure how specific a core value can or should be, but I keep returning to thinking about what my company’s core value would be.