We’ve reached the second half of IDS and have been asked to start developing ideas for our final projects. While we have been studying Max/MSP as our interface, we are not required to use it. However, I’ve been having a great time with it so far, and I think it could be of great use for my particular interface.
For Intro to Physical Computing, I’ve been developing an idea for a physical prototype: a cuff that fits around any standard-sized microphone and has input buttons on it. These buttons would serve as user input, so that a performer could not only speak or sing through the microphone but also interact with software while performing.
I’ve had a decent amount of discussion with Benedetta and my classmates about identifying a more specific end user, and hence a more specific activity. My claim of “Anyone using a hand-held microphone with a computer!” didn’t seem to make the cut. Since I had originally conceived of this as a musical performance device before expanding my horizons, I decided it was best to bring the concept back to its original roots and stick with that for now.
However, for IDS, I would still like to explore the idea of giving users greater control over how their software and hardware work. Balancing customization, ease of use, and meaningful options for interaction seems like a substantial but worthwhile challenge.
For Physical Computing, I will try to focus on one use case for that project. For IDS, however, I would like to iterate on the functionality and create multiple use cases for my physical prototype. In order to avoid casting too wide a net, I will start by focusing on musical applications. The end goal would be multiple modes of interaction that are easy to switch between, customizable, and clearly understandable and usable by a non-technical end user.
I will need to clear my idea with both professors in order to properly section this project into pieces that satisfy the requirements for both classes. It would also be good to clarify with Luke the specifics for IDS: How many different modes would be appropriate for this objective? How far down the “non-technical end user” rabbit hole should I go? Which existing software interfaces should I be aware of when trying to implement this idea?
Looking forward to seeing this idea come to fruition.
“Don’t fall in love with the laser! DON’T fall in love with the laser!” While I will never forget Ben’s words, I will say that at the very least I had a great weekend with the laser and I will be calling it back.
Having never used acrylic before, I was curious about making something that could accentuate its shiny, slick qualities. I had some EL wire around that I hadn’t gotten to use yet, and came up with a lighting idea: an acrylic-cut shape with multiple holes in it that I could weave the EL wire in and out of. The wire seemed sturdy enough to hold itself and light amounts of weight, so I got excited enough about the idea to see how it would actually work.
I was really happy with my trip to Canal Plastics. They were a great help, there were a bunch of 24″x12″ precut sheets of laserable acrylic that fit our bed dimensions perfectly, and they offer student discounts of 10%.
I was advised by people in the shop to keep the paper on the acrylic while laser cutting, and given some friendly advice about speed, power, and frequency. I started cutting at 50/100/100.
I ran it twice just to be safe.
But it wasn’t budging, and the tape trick of trying to pick up cut pieces was not working at all. After further discussion, I dropped the speed to 30.
A few flareups on the surface convinced me thoroughly why the fan needs to be on. However, after two extra times on 30/100/100… still not budging…
Take advice, but cautiously. And double check everything. After 4 laser sessions, I went to the source and checked the cutting guide on the ITP website. Suggested speed? 15. Change the settings and here we go.
Woo! Really glad for that fan now. But it was working. The pieces were starting to fall on their own.
And the tape trick was working like a charm. I moved everything to a table and started popping out the little pieces. Using my hands was fun, like popping bubble wrap, but towards the lower right side I needed a screwdriver for a little extra help.
And then I started removing the tape.
I think that, in the end, I am happy I kept the tape on. Being able to see the laser cuts (or lack thereof) was a real help when I was having trouble. However, it was a bit finicky to take all of it off cleanly when there were so many intricate cuts. Worth keeping in mind for future jobs.
Next up was the EL wire.
I think there should be honest subtitles for classes at ITP. This should be, “Intro to Fabrication: Why did I think that one part would be easy?” Feeling like I was done, I started “just” threading the EL wire through the holes in my desired pattern. Getting to know the right amount of flex and force, trying to keep things straight, double checking my work, all while trying to be gentle with everything was more difficult than I thought it would be. Essentially, I’m trying to do a kind of EL wire cross stitching. It is possible, but just for the record, not nearly as quick as with needle, thread, and cloth.
But in the end the idea works, perhaps with some managed expectations. I think I might try to iterate a bit on this to help manage the wire more, but I like the effect overall in a dark room.
My concept for the ICM final project is an installation piece that uses the sounds of important, controversial, political, and/or emotional issues to create reverberations (echoes). A user would be prompted to read about these issues into a microphone, and then listen via headphones to their own words run through the reverberations of those issues.
The idea is to use charged or confrontational news stories in an attempt to create a different kind of empathy, using emotionally charged impulse responses in a convolution reverb system. The user will be affected, aurally, by the topic. Being forced to hear oneself through the effect of the topic may create a different kind of connection. However, a sense of distance is still maintained: the user speaks with their own voice, accompanied by the echoes of something serious.
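As a rough sketch of the signal chain (assuming p5.sound, a microphone input, and a placeholder impulse-response file name that I haven’t actually recorded yet):

// Minimal sketch of the chain in p5.sound: live mic -> convolution reverb -> headphones.
// 'issue_recording.mp3' is a placeholder impulse response, not a real asset.
let mic;
let convolver;

function preload() {
  // the impulse response is a recording tied to the issue in question
  convolver = createConvolver('assets/issue_recording.mp3');
}

function setup() {
  noCanvas();
  userStartAudio();          // browsers need a user gesture before audio will start
  mic = new p5.AudioIn();
  mic.start();
  convolver.process(mic);    // the user's own voice, heard through the issue's echoes
}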
“Political tools”?
Is a reverb supposed to be accurate? Is a tool not supposed to have an agenda? Is there a place for an opinionated tool?
Sound as data, data as agenda.
I am interested in data auralization, as opposed to data visualization, as a method of conveying data. While there are more utilitarian and straightforward aspects of this technique, I see no reason not to make a statement with the sound data and present it in a manner that unapologetically conveys a message.
“Hard to talk”
Depending on the impulse response in question, it could be difficult to hear or read the story given. However, these are difficult stories to hear or read without any audio effects at all.
Issues
Education
The conceptual core of appreciating the work is as much about technically communicating what a convolution reverb is as it is about the metaphor of using it. Educating the user about convolution reverb before the artistic confrontation is necessary, but presents its own UX challenge.
Copyright
I am not entirely knowledgeable about how sampled, remixed, or referenced media is ethically and legally used in media art at a professional level. Time needs to be allocated towards getting permission to use works and researching publicly licensed media.
Topics
Are there topics that would be inappropriate to use? Would certain topics allow for easier media collection? Are there topics that lessen the potential for harmful artistic appropriation? (Climate change, for example, could be less appropriative when focused on animals or objects.)
I had some great feedback after presenting in class, and will definitely try to refine the messaging and specific UX flow over the next few days in order to get started.
Our task for this week in intro to fabrication was to make multiples of a design (at least 5). I have had an idea for what I’ve called a “percussive wand”, a tool that allows a DC motor and attachments to be used as a musical instrument. At first I thought about the multiples challenge in terms of something that would fit as a set of things. However, it occurred to me that if I wanted to do user testing of a prototype, it might be good to have multiple prototypes to give out.
Credit to Joe Mango for help thinking through how to screw in the handle.
Starting Material
I went to Midcity Lumber and picked up 8 ft of 2×4 pine. I figured that each wand would be approximately 1 ft long, so to achieve 5 solid pieces I bought more than I needed.
Process
Midcity charges $2 a cut. For the sake of my fellow MTA riders, I had the 8 ft length cut into two pieces. When I returned to ITP, I started measuring my initial cuts, taking the 4 ft lengths down to 1 ft pieces.
But the Midcity cuts weren’t perfect. I had initially thought I would be clever and use the mitre saw to cut both pieces at the same time. However, after marking the middle point of both pieces, I realized that they weren’t *exactly* the same length. It wasn’t that much more work to individually cut each piece, so I opted for that.
A little improvisation when I needed a straight edge to draw the cut…
Will definitely need to sand these! I knew cuts were rough, but I was surprised at the amount of splintering.
Moving onto the 45 degree cuts. This creates the handle for each wand, allowing a sloped end that meets up with the handle. Additionally, it saves material.
Weird… didn’t notice those little dots before. Wait! I forgot a shim when clamping!
I was measuring out and positioning the mitre saw when I realized… “Hey… this top one moves!”
I cut 8 pieces and hit my first Bad Pancake™! When sawing into one of the pieces, it seems I hit or dislodged a gnarly knot inside the 2×4.
I had my first pieces, and then did an initial sanding.
Can I save a pancake? I used some wood glue and clamps to see if it could be salvaged. Had to go with the speed clamp because the heavier clamps kept tipping everything over.
Did it work? Not like this! I waited 30 minutes as directed and the piece came apart pretty easily. I proceeded to totally overcompensate, making a bit of a gluey mess in the process. I let the clamping sit longer and managed to save the pancake! Maybe not the best choice for final production, but perhaps useful in my jig process later.
Making a jig
I was trying to find a good way to position my pieces and keep them stable as I did my first drilling. This is where you can really see that each piece is certainly not “exactly” the same. This ain’t Photoshop! No copy and paste here.
This made designing a good jig a little tough. I needed a tight fit, but that’s hard to do when things aren’t exactly the same. I had to mess with it a bit… and then it failed on me. Turns out I wasn’t putting enough screws in it, and it started moving on its own. Not helpful, jig. I decided to re-think my approach.
Back to the drawing board. I never lost faith in you, bad pancake. Your second life in service to the jig will never be forgotten.
The first (full) pancake
Before duplicating my process, I wanted to test it. I used one small drill bit for a pilot hole, then used a 1/4″ bit to just hit the top and make room for a screw head. Then I drove the screw, positioned the handle, applied glue between the handle and the body, and repeated the drill-and-screw routine on the other side. Then I clamped it as the glue dried. (And slowly learned my lesson about using too much glue.)
Assembly line
At the end I had a decent piece that felt sturdy in the hand. Then I started prepping everything for the assembly line. Marking out drill spots, arranging my clamps and double checking the workflow.
Stain?
A brief intermission followed, as I had to go home for the day. I took a piece of wood with me and tested some stain that I bought, since we aren’t allowed to use stain in the shop. After drying overnight, it was interesting to see the results. I kind of liked the color, but I feel like it would do well with a waxing. Ultimately, with some other issues outstanding (more on this later), I decided against staining. If I needed to make changes and alterations, it would just mess up my stain job, I’d need to re-apply, and things would get even more messy and complicated. I’m interested in using it in the future, though.
Glue n’ Screw
With that tangent aside, I set up the workflow for the rest of the pieces. Going to the shop when no one is there is *very* useful, as you can take as many drills as you want. I laid them out in order: pilot drill, space drill, glue, screw.
I even made a little makeshift shelf for my small clamp so it would be elevated and within arms reach.
Then I used cardboard to protect all the pieces and clamped them up for a while.
Sand again
I can almost hear Ben’s laughter now. None of these were “exactly” the same at all. Handles weren’t perfectly lined up, and gaps between the handles and the wand were visible and varied from piece to piece. This didn’t interfere with the base function, however. I decided to sand them down to make them as flat and uniform as possible, and additionally soften sharp edges that would be gripped by hands.
The heartbreak
Why Amazon? Why? Whyyyyyyyy? One of my sets of components had been damaged in transit and was being re-sent… for delivery on Wednesday. Additionally, there were some issues with delivery of the other pieces. I now know more about the ITP, NYU and USPS mail system than any sane person needs to. But I was sweating a bit as well. This is what I was keeping in mind when I didn’t stain: if I had to change things around, I might need to make more cuts or change the design in order to get something done by Tuesday.
Time to improvise
As of this writing, I am waiting on a few other pieces of gear. Depending on when I get what I get, I’ll need to adjust my final project. Will I get my materials on time? Will I have to radically change my final design? Will Ana up front ever forgive me for pestering her incessantly about when packages are getting in? Find out next time in part two of… The Multiples!
(During the intermission, I will solder the components that I *do* have)
Thank your shop staff: Dhruv
I have been blessed by the friendly shop staff! Dhruv helped me figure out an alternative mounting solution that would replace my broken/missing delivery.
Essentially, we found a threaded bolt that was the perfect size for the clips that I have. Dhruv helped me get the right drill bit, and I went at testing the theory.
Yes! After the proof of concept, I was ready to continue: first marking, then drilling, and of course going back to the sander yet again.
I found the right tool for the job (ratchet, and then wrench for more delicate scenarios). Then, I laid out everything again to do my assembly line round 2.
Not without a brief heart attack thinking that I had permanently attached a ratchet head to one of my pieces. (It was really… really stuck on there.)
And I got it working! Except…♫one of these things is not like the other, one of these things does not belong♫
(Yes, it annoyed me enough that I fixed it. Luckily my design made it simple to unscrew and re-screw everything back together the right way).
As of this writing, I had gotten this far but still hadn’t received my packages yet. Desperate, defeated, and considering an expensive run to Tinkersphere. But just when I was getting ready to give up, I received a call… from the Campus Mail Services!
I’m not sure I will be able to post everything completed before 6pm tonight, so I wanted to get this blogged to at least show how far I have gotten. I’m confident I will finish tonight and have something to show tomorrow morning. Updates to follow, stay tuned!
And now, for the thrilling conclusion of… The Multiples!
Motors! Battery cases! And just when I thought it couldn’t get any better, Ana hand-delivered my batteries to the shop. Let’s get these guys on!
Of course, it would have been better not to have the screws in already. But instead of undoing and redoing those, I found some wood scraps to elevate the piece as I cut.
With my first test set, I was ready to duplicate.
Lots and lots of soldering for the wiring
The final product(s)
I had some material I was trying to attach to the motor so it could hold different percussive media (brush, wire, etc.). However, that did not work out very well. I think I’m going to set these aside for now and take more time approaching that specific feature in detail later.
*Phew!* I got something to show tomorrow. Certainly learned a lot. Can’t wait to discuss more and see what the rest of the class has cooked up.
And that concludes the thrilling tale of The Multiples!
This week’s homework was about loading and manipulating media. There were bonus points for not doing a video mirror or a sound board, so I decided I was going to avoid that.
To completely remove the temptation, I wanted to load video clips instead of relying on webcam footage. Using the techniques Allison outlined in class, I wanted to address each pixel in these loaded videos, either to display them or to use their information to inform functionality.
I spent a lot, a LOT of time figuring out which videos would play nicely with p5.js. Many, many trips back and forth to Windows Movie Maker: cutting the length, shrinking the size, reducing the bitrate. I had a feeling that this might have been an issue with the online editor specifically, but in any case I was ready to deal with that limitation. After getting the video to play normally, I was ready to start analyzing each frame and drawing.
I was getting some inconsistent crashes with errors like this: “Exiting potential infinite loop at line 36”. It seemed to take issue with the for loops going through each frame’s pixels, but its behavior made me wonder if p5 was simply stalling out when faced with too large a task. And now that I type this out, I realize that I haven’t had these errors on my desktop PC, only my laptop.
Being able to analyze the contents of the pixels and make decisions accordingly was of great interest. In playing around with what was possible, I wound up making somewhat of a ‘bonus’ sketch just to see if I could:
It isn’t pretty, it isn’t smooth, but it is technically a functional green screen.
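For the record, the core of it looks roughly like this (a simplified sketch; the file names, canvas size, and the “green enough” threshold are placeholders I tuned by eye):

// Simplified green-screen sketch: draw the background video, then copy over only the
// foreground pixels that are not "green enough" to be the screen.
let fg;
let bg;

function setup() {
  createCanvas(320, 240);    // assumes both clips are 320x240
  pixelDensity(1);           // keeps the pixels array 1:1 with the canvas
  fg = createVideo('foreground.mp4');
  bg = createVideo('background.mp4');
  fg.loop();
  bg.loop();
  fg.hide();
  bg.hide();
}

function draw() {
  image(bg, 0, 0);           // the background video fills the canvas
  fg.loadPixels();
  loadPixels();
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = 4 * (x + y * width);
      const r = fg.pixels[i];
      const g = fg.pixels[i + 1];
      const b = fg.pixels[i + 2];
      // keep the foreground pixel only if green doesn't dominate
      if (!(g > 100 && g > r * 1.4 && g > b * 1.4)) {
        pixels[i] = r;
        pixels[i + 1] = g;
        pixels[i + 2] = b;
        pixels[i + 3] = 255;
      }
    }
  }
  updatePixels();
}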
With my meme break out of the way, I used my newfound pixel hunting abilities towards a more serious piece.
I have some NASA footage of a spaceship-level view of the Earth. It needed to be downsampled a great amount in order to find its way into p5, but it wound up being fine for my purposes. This sketch loads the video, and then displays it as a lower-res field of dots. If you move your mouse, you can adjust the shape. As all sketches start with the mouse at 0,0, it can look like nothing at first and then dramatically stretch into view upon interaction.
Pixels in the video are analyzed and placed as circles. If the program detects that a given pixel is above a certain threshold of whiteness (in this case, the clouds), those white circles are drawn larger than the ones that are not white. Then there is a “scan line”: an area defined in the sketch that looks for these larger white dots. If a larger white dot is detected in this area, it is highlighted with a green circle, and a droning note is played with a different pitch depending on its location on the X axis.
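In rough outline, the logic looks something like this (a simplified sketch; the file names, thresholds, grid spacing, and scan-line position are all placeholders):

// Simplified outline of the dots sketch: sample the video on a grid, draw larger circles
// for "white" (cloud) pixels, and highlight/sonify the white dots inside the scan line.
let video;
let drone;
const step = 10;       // grid spacing
const scanX = 200;     // left edge of the scan line
const scanW = 20;      // width of the scan line

function preload() {
  drone = loadSound('drone.mp3');     // placeholder sound file
}

function setup() {
  createCanvas(640, 360);
  pixelDensity(1);
  video = createVideo('earth.mp4');   // placeholder NASA clip
  video.loop();
  video.hide();
}

function draw() {
  background(0);
  video.loadPixels();
  for (let y = 0; y < video.height; y += step) {
    for (let x = 0; x < video.width; x += step) {
      const i = 4 * (x + y * video.width);
      const bright = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
      const isWhite = bright > 200;               // the cloud threshold
      const d = isWhite ? step : step * 0.4;      // white pixels get bigger dots
      // the mouse stretches the grid into view (everything starts at 0,0)
      const px = map(x, 0, video.width, 0, mouseX);
      const py = map(y, 0, video.height, 0, mouseY);
      noStroke();
      fill(bright);
      ellipse(px, py, d, d);
      // the scan line: highlight and sonify big white dots that fall inside it
      if (isWhite && px > scanX && px < scanX + scanW) {
        noFill();
        stroke(0, 255, 0);
        ellipse(px, py, d + 4, d + 4);
        drone.rate(map(px, 0, width, 0.5, 2));    // pitch follows the x position
        if (!drone.isPlaying()) drone.play();
      }
    }
  }
}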
There are still some bugs. Since I moved development to my desktop, it seems I can no longer run the sketches when I return to the laptop. This is why I have added videos of the sketches running, just in case they cannot be shown on the presentation laptop. Also, even when running smoothly, the NASA video sketch doesn’t always play the notes at the correct pitches. It appears that if you call rate() and play() too rapidly in a row, sometimes the rate() command is ignored and the sound file is played at its original speed.
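One workaround I want to try (untested as of this writing) is passing the rate directly into play(), since p5.SoundFile.play() takes a rate as its second argument, instead of calling rate() separately right before it:

// instead of this, which sometimes ignores the rate:
drone.rate(newRate);
drone.play();

// pass the rate into play() itself (the first argument is the start delay in seconds):
drone.play(0, newRate);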
However, despite these setbacks, I am happy with the conceptual end result. I am interested in different ways to communicate data, and the idea of data “sonification” as opposed to visualization seems like an intriguing pursuit. Something like this could measure weather patterns, the health of a forest, or air quality conditions. The stylized output, visual and aural, might create new opportunities to motivate people to digest these findings.
After completing our midterms, we came away with a lot of really good experience in terms of feeling what is and isn’t possible within a certain time frame. With these lessons in mind, we are now given more time to complete our final projects and more specifically plot out our goals, intentions, and options. The first part is clear ideation of what we might want to build, so we can discuss feasibility and research solutions.
For a while now, I’ve had an idea for a device that I have wanted to create. Essentially, it is a “cuff” that would fit around a microphone and have different inputs on it. This would allow the person using the microphone to control actions of a computer during their performances. Seeing as there are all kinds of “digital accompaniment,” ranging from PowerPoint slide shows to full-on theatrical lighting rigs, I thought that giving the speaker or singer more direct control could help create more of a direct feedback loop. One that isn’t broken by trips to a laptop keyboard, mediated by a tech person behind a big board, or missing entirely because a performer has to keep up with a pre-made track.
The basic approach is that of enhancing the basic mic clip. If there were inputs on it, a performer could use them. If it were built like a standard mic clip, it could fit all standard-sized microphones. With properly matched software, it could adapt itself to all different kinds of applications.
Part of our first step is research, and seeing what already exists. I didn’t know of it beforehand, but I wasn’t surprised to find something like this:
I’m intrigued by it, but it strikes me as too proprietary, task specific, and expensive. I think that the capability of this should be able to be achieved by what I’m proposing through paired software, but not *only* fixed to the realm of manipulating the audio of that microphone.
I kept looking, not finding much more than that, until I found this.
There are always those moments where you say to yourself, “Damn… but I really *did* think of it on my own!” The overall approach here seems to be the direction I want to go in: a mic that sends data. However, as I kept watching and researching, I became a little more optimistic. First, I don’t think these controls should be built into the mic itself. And second, this appears to have been shown as a prototype, but I haven’t seen it released anywhere (as of this writing).
So I think the spirit is the same, but there are some important distinctions I am opinionated about. I think having a proprietary mic that can do this is a mistake; letting people choose their own preferred mic and then temporarily enhancing its function is more flexible. Also, building only the controller greatly reduces cost, which would make performers and presenters much more likely to try it. Additionally, with these aspects in mind, there could be a series of different kinds of layouts. If the costs were low enough, maybe there could be a variety of approaches: one with only buttons, another with sliders, another mimicking a piano-key layout, etc. This might cater to many use cases.
And this isn’t even bringing in more complicated input and output, like Bluetooth or accelerometers. But again, perhaps that approach is more appropriate for some performers than for others. At the end of the day, my user is a vocal performer (yes, even the bland PowerPoint speech) who may need more interaction with their digital counterparts. Even though there are things that have touched on this realm, I haven’t yet come across anything truly close to what I’m proposing when it comes to the microphone specifically.
Looking forward to bringing this idea to the class and seeing what people think.
Excited to go into a new class for the second half of the semester! Ben started off Intro to Fabrication with a really wonderful story about a flashlight he made for his grandmother when he was a child. Our first assignment was to build a flashlight of our own. The definition has been generously outlined as something that is 1) portable and 2) creates light.
Inspiration
I like the idea of things that are dual use, to save on resources, space and money. Maybe not the best example, but I’ve always been drawn to the glowing umbrellas in Blade Runner. A light and an umbrella in one! (Not sure how often that is needed, though…)
As for this assignment, I thought it would be nice to create a flashlight that wouldn’t just sit in a drawer for most of its life. A more practical approach would be something like a lantern: it can be portable light when you need it, but can also sit stationary as a lamp.
A simple paper lantern
Drawings
Maybe there could be a specially shaped lamp that could also function as a flashlight: a lamp-type use case when needed, and directed light when desired.
What I came up with was essentially a “standing” flashlight. Hold it like a normal flashlight in one use case, but when you place it light-down there are affordances to let certain amounts of light through.
Raw Material
I enjoy having clever ideas, but I am skeptical of “clever” ideas. What if there is a good reason I haven’t seen this kind of design somewhere? I thought the best initial incarnation of this concept would be cardboard, to test whether the general form is even a good idea to begin with.
The cardboard is held together with duct tape. The inside of the piece has tin foil on the sides to increase reflectivity, held in place by duct tape and normal clear tape. There is also a little patch of velcro for functionality I will outline later. At one point I used styrofoam to test out some functionality, but it did not stay in the final design.
The light source is a breadboard with 4 yellow LEDs, appropriate resistors, a 9-volt battery with the appropriate connection terminal, and a “soft touch” switch. These are connected via stiff on-board jumper wires, a looser jumper cable for the switch, and a screw terminal to connect the switch to the wires. I used some alligator clips while testing the circuit, but they did not stay in the final design.
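For reference, and as an assumption rather than a measurement: a yellow LED drops roughly 2 V at about 20 mA, so a series resistor of about (9 V − 2 V) / 0.02 A ≈ 350 Ω per LED, i.e. a standard 330 Ω or 390 Ω part, is in the right ballpark for “appropriate.”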
Mid-Process
The questions started almost immediately, as I was unsure what would be a good size for the frame. I wanted something small enough so that I could hold it in my hand but large enough to justify as a stationary light source. I wound up going with 4.5 inches.
I wired up my breadboard with 4 LEDs and a 9 volt battery and shone the light through my cardboard tunnel. Not too bad.
Then the tin foil was added to increase light reflection. I’m not really sure it made that much of a difference. To my eye, it was more than these photos might convey, but certainly minimal. However, the light was coming out and being forced in a direction, so I decided there was enough functionality for now.
Cuts were added to the bottom third of the cardboard to create the lamp legs, and measurements were made in order to cut a piece of styrofoam. This was a quick way to punch out a hole and rest my light on top of the structure to see how the lamp functionality worked.
Getting there. But the legs need to allow more light out. I measured and drew my window holes, and cut with my box cutter. Not without some issues…
But eventually they were all clear. This was the moment of truth: would the legs hold the weight of the entire light source?
It stayed up! This also gave me a moment to turn off the lights and test the LEDs. I’m glad I went with the yellow ones, which provide a much warmer glow.
The switch was tested with alligator clips, then made more permanent with a male-to-female jumper wire, a block terminal, and the soft touch switch. I cut another piece of cardboard to make a more permanent top, and attached the breadboard to it via a patch of velcro.
A cross-pattern cut was made, which allowed me to push the push-button base through while keeping as much tension as possible to hold the button in place.
Then I taped one side of this ‘lid’ to the top of the structure. Instead of taping the entire piece down, I created a kind of flap out of duct tape on the other side. This attaches to the side of the light with velcro. This allows access to the inside, for electrical troubleshooting, replacing batteries, or adding/removing LEDs.
List of tools used
The star tool of the show was a box cutter, for going through cardboard and styrofoam. A pencil and measuring tape were used for marking things out, along with a hard ruler for marking and for bending cardboard around in straight lines. A very small flat-head screwdriver was used to secure the block terminals that held the wiring, and alligator clips were used for testing the circuit.
Final Images
Despite its cardboard prototype appearance, I’m happy with the result. The variations on this concept are many; differing sizes, leg/body ratios, and patterns for letting light through. Getting something together, even if only in cardboard, gives a good opportunity to really feel what design choices can resonate the most.
Our assignment for this week was to work with external data: loading JSON files with data sets and using them to visualize the information within. Getting familiar with using APIs lets you pull JSON files from popular websites and web services.
The example videos from Shiffman seemed to make sense, but following along wasn’t entirely smooth. It seems like the New York Times API documentation and services work slightly differently than they did when he used them, which meant that certain code didn’t quite work.
Instead of providing a solid example URL, you are given the following code:
var url = "https://api.nytimes.com/svc/search/v2/articlesearch.json";
url += '?' + $.param({
  'api-key': "a74472e8eca541b2b2577c690b887abe",
  'begin_date': "20161025",
  'end_date': "20161025"
});
$.ajax({
  url: url,
  method: 'GET',
}).done(function(result) {
  console.log(result);
}).fail(function(err) {
  throw err;
});
Using the $ didn’t seem to play nicely with p5, so I needed to piece the puzzle together and write the proper URL “by hand” to see if I could get a proper JSON response. Fun fact: it seems that queries which are malformed in certain ways don’t just refuse to work. When I was looking for article headlines by date, my malformed query would respond with today’s headlines. Of course, I used today’s date as my initial test to see if I could get information from the API, so it wasn’t until later, when I was playing around with requests for different days, that I noticed none of the dates I put in were actually being used.
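For reference, my hand-built version ended up looking roughly like this (a sketch using p5’s loadJSON() instead of jQuery; the API key is a placeholder, and the headline path reflects the Article Search v2 response as I understand it):

// Build the Article Search URL by hand and fetch it with p5's loadJSON(),
// avoiding the jQuery ($) helpers from the NYT example. 'YOUR-API-KEY' is a placeholder.
const apiKey = 'YOUR-API-KEY';

function setup() {
  noCanvas();
  const url = 'https://api.nytimes.com/svc/search/v2/articlesearch.json' +
    '?api-key=' + apiKey +
    '&begin_date=20161025' +
    '&end_date=20161025';
  loadJSON(url, gotData);
}

function gotData(data) {
  // headlines live at response.docs[i].headline.main
  for (const doc of data.response.docs) {
    console.log(doc.headline.main);
  }
}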
I was a little distracted by that because I had spent a good chunk of time trying to figure out the spacing of my project. And because of all the GIFs. What GIFs, you say?
My idea was to use not one, but two APIs. The user can enter a date to get a headline from the New York Times for that day, and then each word is run through the Giphy API. Each search returns an image that is added to the DOM, creating a chain of GIFs “translated” from English. Perhaps this technique can help revitalize the print media industry.
The main issue was the timing of the different API calls. There is still a bug where the GIFs aren’t necessarily placed in the proper order. Clicking the button again can remedy this, and the bug doesn’t always show up. It seems that the asynchronous calls, one per word, don’t always come back in the order they were sent, which messes with the sequential nature of the loading. I spent lots and lots of time trying to make sure that the images would always be in the right order. I still haven’t managed it, but in general the proof of concept is functional enough and fun to play around with.
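One fix I want to try (sketched below, not yet tested) is to create a placeholder element for each word up front, in order, and then drop each GIF into its own placeholder whenever its Giphy call happens to come back, so the response order stops mattering. The key name and image field below are assumptions based on the Giphy search endpoint as I understand it:

// Keep the GIFs in headline order: create one placeholder div per word first,
// then fill each one in when its own API call resolves. 'YOUR-GIPHY-KEY' is a placeholder.
const giphyKey = 'YOUR-GIPHY-KEY';

function showHeadlineAsGifs(headline) {
  const words = headline.split(' ');
  const slots = words.map(() => createDiv(''));   // placeholders, already in the right order

  words.forEach((word, i) => {
    const url = 'https://api.giphy.com/v1/gifs/search' +
      '?api_key=' + giphyKey +
      '&limit=1&q=' + encodeURIComponent(word);
    loadJSON(url, (result) => {
      if (result.data && result.data.length > 0) {
        // the closure remembers i, so the GIF always lands in its own slot
        createImg(result.data[0].images.fixed_height.url, word).parent(slots[i]);
      }
    });
  });
}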
Our lab this week continued work with serial communication, exploring a “call and response” system for both sides of the Arduino/computer relationship. Admittedly, I had gotten used to simply spamming the serial ports on all sides. Cleaning things up and creating better habits around this will help with the more intensive applications I may be creating in the future.
After working through the lab examples and tutorials, I came up with an idea for serial communication. My concept was to think of the Arduino as its own kind of artifact. What if an Arduino didn’t have any input or output besides serial messages? I did some research, and it seems that the source code of programs written to the Arduino’s memory can’t be retrieved off of the Arduino itself. What if the Arduino could hold certain secret information, and you could only access it by speaking the correct words through the serial port? If you deleted the original source files on your computer, perhaps this could be a kind of information security.
This sprang from my idea of expanding into different kinds of “call and response” messages. Instead of the suggested “hello” and “x” sends and receives, you need to answer a riddle. I used the famous Gollum/Bilbo exchange of riddles from The Hobbit as my set.
Honestly, this is where things seemed to fall apart. I can send to and from p5, and I can use DOM elements to type serial messages to the Arduino and create text on a webpage. An initializing “Start Game” gets things going and moves on to the next step. But after that initial volley of call and response, it seems to get stuck on the first question. I tried re-structuring the conditional statements and playing with switch cases, but haven’t broken through just yet. I’m going to keep working on the general idea, though, because I think there could be some promise in it.
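For what it’s worth, here is roughly how I am restructuring the p5 side, with an explicit state variable instead of nested conditionals, since my hunch is that the bug lives in how I advance from one riddle to the next. This is a sketch of the approach, not my finished code; it assumes the p5.serialport library, a placeholder port name, and whatever confirmation string the Arduino sends back:

// p5 side of the riddle exchange, using p5.serialport. A single state variable tracks
// where we are, so each incoming serial message only ever advances the game one step.
// The port name and the 'correct' confirmation string are placeholders.
let serial;
let state = 'waiting';   // 'waiting' -> 'riddle1' -> 'riddle2' -> 'revealed'

function setup() {
  noCanvas();
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem1411');   // placeholder port name
  serial.on('data', serialEvent);

  const startButton = createButton('Start Game');
  startButton.mousePressed(() => {
    serial.write('start\n');
    state = 'riddle1';
  });

  const answerInput = createInput('');
  const answerButton = createButton('Answer');
  answerButton.mousePressed(() => {
    serial.write(answerInput.value() + '\n');
  });
}

function serialEvent() {
  const msg = serial.readStringUntil('\n');
  if (!msg || msg.length === 0) return;

  createP(msg);   // show whatever the Arduino sent back

  // advance only when the Arduino confirms a correct answer
  if (state === 'riddle1' && msg.includes('correct')) {
    state = 'riddle2';
  } else if (state === 'riddle2' && msg.includes('correct')) {
    state = 'revealed';
  }
}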