Week 3: Meaningful Output


Building off our discussions and assignments around creating opportunities that invite a range of participant expression, we continued this week with an exploration of how to architect circumstances for meaningful expression. We pinpointed the conditions and the explicit/implicit rules of interaction in our in-class activities and digital sketches to understand how they shaped the experience. Some of the ways we can think about this impact include the types and ranges of expression asked of participants, what outcome(s) people expect from the activity, and how they feel about it—in other words, what do they find meaningful and why? Watching a clip from the improvisational comedy show Whose Line Is It Anyway? drove home the point that the most fruitful moments of meaning (at least for us in the audience) arose when the actors were interacting with one another, as opposed to the solo performances.

Working in groups of four this week, we were tasked with remixing one of our expressive projects within a semantic framework of our own design. Our group—Yang, Aidan, Maria, and myself—had fun playing with everyone’s expressive output sketches, especially Maria’s voice-reactive example, and agreed that vocal sounds offered the greatest range of expression to explore. We wondered how we might combine her sketch with my expressive drawing sketch and, after a little research into the p5 sound library, eventually arrived at a goal: a collaborative game in which two people draw together with their combined voices. In our sketch, one player controls the y-coordinate of the drawing tool with their volume (soft to loud), while the other player controls the x-coordinate with the pitch of their voice (low to high). Instead of leaving the drawing open-ended, we decided to give players a canvas of dots to connect. Our game prioritizes meaningful interaction between two people (or groups of folks) as they figure out how to work together, guided by a visualization of their combined vocal inputs, to achieve a defined goal. If it sounds ridiculous, it is, but it's just as fun as it sounds.

So go ahead, and whistle the dots! Or hum them, or sing them, or yell them. 

To play, each player should navigate here on their own computer.

Remix on Glitch
Code also on GitHub 

Read more about our process here:

Part 1
Step 1: First, we set up two bare-bones JavaScript files: a sketch.js and a server.js. We decided that we did not require namespaces (separate input or output screens), since we wanted both the volume and pitch players to see the same thing. In addition, at least for this initial version, we did not see a need to keep track of multiple users in a user registry: though multiple people may participate, their vocal inputs combine to become one user that attempts to connect the dots. So our server.js file listens for browser clients to connect and disconnect, and, most importantly, listens for volume and pitch data. That data is emitted back to all connected clients (again, so everyone can see what they are collectively drawing together). We knew that our sketch.js file would need to emit volume and pitch data (in functions called repeatedly from draw()) and also listen for that data from the server to move a drawing tool (currently an ellipse) around the canvas, also rendered in draw().
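Here is roughly what that server logic looks like. This is a simplified sketch rather than our exact code: it assumes Express plus socket.io, and the event names ('volumeData', 'pitchData') are placeholders.

```javascript
// server.js -- a minimal sketch, assuming Express + socket.io
const express = require('express');
const app = express();
const http = require('http').createServer(app);
const io = require('socket.io')(http);

app.use(express.static('public')); // serve index.html, sketch.js, etc.

io.on('connection', (socket) => {
  console.log('client connected: ' + socket.id);

  // rebroadcast vocal input to every connected client,
  // so all players see the same combined drawing
  socket.on('volumeData', (vol) => {
    io.emit('volumeData', vol);
  });

  socket.on('pitchData', (pitch) => {
    io.emit('pitchData', pitch);
  });

  socket.on('disconnect', () => {
    console.log('client disconnected: ' + socket.id);
  });
});

http.listen(3000, () => console.log('listening on port 3000'));
```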

Step 2: With basic scaffolding in place, we declared and initialized a mic variable for the volume. In our initial tests, our ellipse successfully moved up and down, but it was quite jumpy. Fortunately Aidan remembered how to smooth mic input from a project last fall. We also included a variable for max volume with the intention of building out a calibration feature for users to set their own level. Finally, we mapped the emitted volume data with the idea that we might need to account for different screen sizes down the road. Right now, we’re working within a fixed canvas size. 
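The volume side of sketch.js ended up looking something like the sketch below. It's a rough reconstruction rather than a copy-paste: the variable names are illustrative, and I'm using lerp() here as one common way to smooth the jumpiness.

```javascript
// sketch.js (volume side) -- a rough reconstruction, not our exact code
// assumes the p5.sound addon and the socket.io client script are loaded in index.html
let socket;
let mic;
let smoothedVol = 0;
let maxVolume = 0.3; // placeholder ceiling; the calibration step below replaces this

function setup() {
  createCanvas(800, 600);
  socket = io(); // connect to server.js
  mic = new p5.AudioIn();
  mic.start();
}

function draw() {
  // ease toward the raw mic level instead of jumping straight to it
  smoothedVol = lerp(smoothedVol, mic.getLevel(), 0.1);

  // normalize to 0-1 before emitting so the server and other clients
  // never need to know anything about our canvas size
  let volOut = map(smoothedVol, 0, maxVolume, 0, 1);
  socket.emit('volumeData', volOut);
}
```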

Step 3: Then, we turned to capturing pitch input. During some preliminary work, I played with a p5.FFT object and the FFT.analyze() function to access an array of amplitudes across frequencies. I calculated the average of the array to assign to the pitch variable, and while it did give us access to a different analog range of vocal input (compared to volume), we weren’t sure we were actually getting pitch. A project from some second-year students led us to pitchdetect.js, a library of sorts that we incorporated into our project as an alternative. After cobbling it together with our code, we also smoothed the pitch data and mapped its output before emitting it to the server.
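For reference, the FFT-average experiment looked something like this (simplified). As noted, averaging the spectrum really measures how energy is spread across frequencies rather than true pitch, which is why we switched to pitchdetect.js.

```javascript
// the first FFT-based attempt, roughly -- averaging the spectrum gave us a second
// analog value to play with, though it isn't a true pitch measurement
let fft;

function setupPitchAnalysis() {
  fft = new p5.FFT(0.8, 1024); // smoothing, number of frequency bins
  fft.setInput(mic);           // the same mic from the volume setup
}

function getSpectrumAverage() {
  let spectrum = fft.analyze(); // 1024 amplitude values, 0-255
  let sum = 0;
  for (let i = 0; i < spectrum.length; i++) {
    sum += spectrum[i];
  }
  return sum / spectrum.length;
}
```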

Step 4: We also envisioned users setting their own input levels. So next, Yang worked his magic and created features for users to set their min and max levels for both pitch and volume. He also slapped on a UI so we could display instructions for participants.
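I don't have Yang's calibration code in front of me, but the basic idea is to sample the player's quietest and loudest input and then map everything within that personal range. Something like this (illustrative only):

```javascript
// illustrative only -- not Yang's actual implementation
let minVol = 0;
let maxVol = 1;

function setupCalibration() {
  createButton('set quiet level').mousePressed(() => { minVol = mic.getLevel(); });
  createButton('set loud level').mousePressed(() => { maxVol = mic.getLevel(); });
}

function normalizedVolume() {
  // map the live level into the player's own range, clamped to 0-1
  return constrain(map(mic.getLevel(), minVol, maxVol, 0, 1), 0, 1);
}
```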

Part 2
Steps 5 - Million: We had another power group meeting to finish building out the functionality. Let's see if I can remember the highlights:

To begin we decided to provide the player with a canvas of ten dots to connect, but only "light up" one dot at a time to "hit" with the moving ellipse. Hitting that dot causes the next dot to light up and so on. Connect each lit dot in order and a line appears between them.

Because we originally envisioned many levels to our game, we created a Dot class to describe how the dots-to-connect should look and behave, and a Level class to handle the array of dots for any given level. The Level class takes care of updating a dot when it's "hit" by the player, lighting up the next dot, and drawing the connecting lines.
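A stripped-down version of the Dot and Level idea looks something like this (names and details are approximate, not our exact classes):

```javascript
class Dot {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.hit = false;
  }
  display(isLit) {
    noStroke();
    fill(isLit ? 'yellow' : 200); // the lit dot is the one to aim for
    ellipse(this.x, this.y, 20, 20);
  }
}

class Level {
  constructor(dots) {
    this.dots = dots;   // array of Dot objects (positions come from the server)
    this.current = 0;   // index of the currently lit dot
  }
  update(playerX, playerY) {
    let target = this.dots[this.current];
    // getting close enough to the lit dot counts as a hit and lights the next one
    if (target && dist(playerX, playerY, target.x, target.y) < 15) {
      target.hit = true;
      this.current++;
    }
  }
  display() {
    stroke(0);
    // draw a line between each pair of consecutive dots that have been hit
    for (let i = 1; i < this.dots.length; i++) {
      if (this.dots[i].hit) {
        line(this.dots[i - 1].x, this.dots[i - 1].y, this.dots[i].x, this.dots[i].y);
      }
    }
    for (let i = 0; i < this.dots.length; i++) {
      this.dots[i].display(i === this.current);
    }
  }
}
```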

Working locally on our machines, we originally generated randomly-placed dots on every page refresh. Once we moved the project to Glitch, we realized that the locations of the dots would need to be emitted from the server via a socket. So now, each time a client connects, the server sends it an array of dots for the canvas.
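In server terms, that amounts to generating the dot positions once and handing the same array to every client that connects. Something like this, as an addition to the earlier server sketch (canvas dimensions and event name are placeholders):

```javascript
// generate one shared set of dot positions when the server starts
const dots = [];
for (let i = 0; i < 10; i++) {
  dots.push({
    x: Math.floor(Math.random() * 800),
    y: Math.floor(Math.random() * 600),
  });
}

io.on('connection', (socket) => {
  // every client gets the same array, so everyone sees the same level
  socket.emit('levelDots', dots); // hypothetical event name
  // ...volume/pitch handlers as before
});
```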

Our project on Glitch is...a little glitchy. The moving, drawing ellipse was much jumpier than when we ran the program locally. We slowed the emit rate of the volume and pitch data from the client to the server from every frame to every third frame. That seemed to help a bit, but we'll see how it tests in class.
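The throttling itself is just a frameCount check in draw(), roughly:

```javascript
function draw() {
  // ...analyze the mic input every frame as before...
  if (frameCount % 3 === 0) {
    // only send vocal data on every third frame to cut down socket traffic
    socket.emit('volumeData', volOut);
    socket.emit('pitchData', pitchOut);
  }
  // ...still render the dots and drawing tool every frame...
}
```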

We also discovered that we needed to constrain our mapped volume and pitch values before emitting them; we were getting values well above our intended upper limit of 1.
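In code that's a one-line fix per value, clamping before the emit (and, if I remember the p5 reference right, map() also takes an optional withinBounds flag that does the same thing):

```javascript
// clamp the normalized values so nothing above 1 ever goes out over the socket
let volOut = constrain(map(smoothedVol, minVol, maxVol, 0, 1), 0, 1);
let pitchOut = constrain(map(smoothedPitch, minPitch, maxPitch, 0, 1), 0, 1);
```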

Go team! And way to not get kicked out of the library for screaming at our computers!!

[animated GIF of the sketch in action]

(Cheating here with my mouse to document the sketch's animation.)