Week 14: Painting Mirror

Move the pieces and assemble your reflections in this interactive Cubist mirror.
How many ways can you put yourself together?


Project Description
In real life and in digital spaces we offer multiple versions of ourselves for numerous reasons. Painting Mirror plays with this idea by proposing continuous ways of seeing and presenting yourself. Using dual camera angles and the open-source software Processing, it fragments, resizes, duplicates, and scatters video reflections across the surface. The pieces blend together, and moving them paints the screen anew. The impulse might be to reassemble your image into a whole, but can you?

This is a continuation of a recent project, which I wanted to push further to see what I might uncover. Though there's more to explore, I learned a lot along the way. Here's a recap of the highlights.

Part 1: Leaving the Land of Rects
My first instinct was to move beyond the land of rectangles and draw irregular quadrilaterals. I learned that I could use the beginShape() and endShape() functions, with vertex() calls of X and Y coordinates in between, to draw more complicated polygons (see Shiffman's PShape tutorial). Vertices are drawn in a counterclockwise direction starting with the leftmost corner.
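
In sketch form, the idea looks something like this (the coordinates are made up for illustration):

void setup() {
  size(400, 400);
}

void draw() {
  background(0);
  stroke(255);
  noFill();
  beginShape();
  vertex(60, 200);   // leftmost corner first
  vertex(180, 320);  // then counterclockwise around the shape
  vertex(340, 240);
  vertex(200, 80);
  endShape(CLOSE);   // CLOSE connects the last vertex back to the first
}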

Using this Danny Rozin example from class as a guide, I adapted my original sketch in a similar fashion. The polygon constructor still receives random values for an X and a Y, but instead of using these as the location for the leftmost corner of the shape, they become its centroid. Random numbers are then generated to calculate each X and each Y of every new vertex. Those values are stored in a two-dimensional array or matrix, and each corner of the shape is created (again in counterclockwise fashion) by subtracting them from or adding them to the X and Y values of the centroid. Though we've been working with image matrices all semester, this exercise strengthened my understanding of how I might use them in other contexts.
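
A rough sketch of that structure (the class and variable names here are placeholders, not my actual code):

Poly p;

void setup() {
  size(640, 480);
  p = new Poly();
}

void draw() {
  background(0);
  stroke(255);
  noFill();
  p.display();
}

class Poly {
  float cx, cy;        // the centroid, placed at random
  float[][] offsets;   // the two-dimensional array: 4 corners x {dx, dy}

  Poly() {
    cx = random(width);
    cy = random(height);
    offsets = new float[4][2];
    for (int i = 0; i < 4; i++) {
      offsets[i][0] = random(20, 80);  // random x distance from the centroid
      offsets[i][1] = random(20, 80);  // random y distance from the centroid
    }
  }

  void display() {
    beginShape();
    // corners built counterclockwise by subtracting the stored values
    // from, or adding them to, the centroid, as described above
    vertex(cx - offsets[0][0], cy - offsets[0][1]);  // upper left
    vertex(cx - offsets[1][0], cy + offsets[1][1]);  // lower left
    vertex(cx + offsets[2][0], cy + offsets[2][1]);  // lower right
    vertex(cx + offsets[3][0], cy - offsets[3][1]);  // upper right
    endShape(CLOSE);
  }
}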

In the previous version of this project, I used the copy() function to grab and duplicate pixels from the canvas into my rects. Danny’s sketch and this tutorial showed me how to load and map parts of the camera feed into my irregular quadrilaterals using texture(). As in the earlier version, I’m still playing with the scale of the newly-mapped areas when doing so. 
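
A minimal sketch of that texture mapping, assuming the Processing video library; the u/v values in vertex() are where the rescaling happens:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480, P3D);  // texture() requires the P2D or P3D renderer
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  background(0);
  noStroke();
  beginShape();
  texture(cam);  // the camera feed becomes the fill of the shape
  // vertex(x, y, u, v): screen position plus texture coordinates;
  // sampling only the middle half of the feed scales that region up,
  // and widening the u/v range would shrink it instead
  vertex(100, 100, cam.width * 0.25, cam.height * 0.25);
  vertex(100, 380, cam.width * 0.25, cam.height * 0.75);
  vertex(540, 380, cam.width * 0.75, cam.height * 0.75);
  vertex(540, 100, cam.width * 0.75, cam.height * 0.25);
  endShape(CLOSE);
}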

In order to employ texture(), I changed the default rendering mode to P3D, which uses OpenGL and increases the overall speed of my sketch.

Part 2: Oh Yeah, Mouse Dragging
Once I converted the shapes, it was time to update the mouse dragging. It was no longer a simple process of checking whether the mouse was within the bounds of a plain old rect. After I fell down several rabbit holes looking for a solution, Shiffman recommended two options: draw all the polygons offscreen as PGraphics objects, each with its own unique color to check, or try the Toxiclibs.js library. And then I found a third: Karsten Schmidt, the author of Toxiclibs, wrote an example on the Processing forum taking advantage of this point-in-polygon algorithm. It's possible that I came across it before, but it was too early in the quest for me to understand what I was seeing and what I needed. In any event, I incorporated it into my sketch, using another array to store all of the vertices of each shape, and it worked!
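
The algorithm is the classic even-odd (ray casting) test. Paraphrased as a sketch rather than quoted from the forum post, with illustrative names, it looks something like this:

boolean containsPoint(PVector[] verts, float px, float py) {
  boolean inside = false;
  // walk each edge (j, i); every time a horizontal ray cast from
  // (px, py) crosses an edge, toggle "inside" (the even-odd rule)
  for (int i = 0, j = verts.length - 1; i < verts.length; j = i++) {
    boolean crosses = (verts[i].y > py) != (verts[j].y > py);
    if (crosses) {
      float xAtY = (verts[j].x - verts[i].x) * (py - verts[i].y)
                   / (verts[j].y - verts[i].y) + verts[i].x;
      if (px < xAtY) inside = !inside;
    }
  }
  return inside;
}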

(I remember adjusting the offset calculations, too. In my original version, the offsets for X and Y were based on the center of each rect (width/2 and height/2), but now that I had a centroid, it was easy to replace those with the X and Y centroid values.)
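
Put together, the drag logic looks roughly like this (again with placeholder names; containsPoint() is the test from the sketch above):

PVector centroid = new PVector(200, 150);  // placeholder shape center
PVector offset = new PVector();
boolean dragging = false;

void setup() {
  size(400, 300);
}

void draw() {
  background(0);
  // the polygon would be drawn around "centroid" here
  ellipse(centroid.x, centroid.y, 10, 10);
}

void mousePressed() {
  // in the full sketch, containsPoint() decides whether this shape
  // was hit; the offset is measured from the centroid, not from
  // width/2 and height/2 as with the old rects
  offset.set(mouseX - centroid.x, mouseY - centroid.y);
  dragging = true;
}

void mouseDragged() {
  if (dragging) centroid.set(mouseX - offset.x, mouseY - offset.y);
}

void mouseReleased() {
  dragging = false;
}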

Part 3: Adding Another Camera
One camera is great, but with two webcams I can show viewers dual angles of themselves simultaneously. So I created a new class in my Processing sketch to feed the second camera into a new batch of polygons. The code is nearly identical except for which camera is declared in the texture() function. Gosh, I'm repeating so much code! There MUST be a way to rewrite it more efficiently. I haven't wrapped my head around how to do that just yet...it awaits as a very fun problem to solve. (I'm also still wondering if I might want to treat the camera feeds differently, and at least for right now, keeping them separated gives me the space to think about the possibilities.)
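
If the two classes truly differ only in which camera they sample, one possible refactor is to pass the Capture into the constructor. A sketch with assumed names, not my actual code:

import processing.video.*;

Capture frontCam, backCam;
CamPoly a, b;

void setup() {
  size(640, 480, P3D);
  // assumes at least two cameras are attached
  String[] cams = Capture.list();
  frontCam = new Capture(this, 640, 480, cams[0]);
  backCam  = new Capture(this, 640, 480, cams[1]);
  frontCam.start();
  backCam.start();
  a = new CamPoly(frontCam);
  b = new CamPoly(backCam);
}

void draw() {
  if (frontCam.available()) frontCam.read();
  if (backCam.available())  backCam.read();
  background(0);
  a.display();
  b.display();
}

class CamPoly {
  Capture cam;   // whichever feed this polygon samples
  float cx, cy;

  CamPoly(Capture cam_) {
    cam = cam_;
    cx = random(width);
    cy = random(height);
  }

  void display() {
    noStroke();
    beginShape();
    texture(cam);  // the only line that differed between the two classes
    vertex(cx - 60, cy - 60, 0, 0);
    vertex(cx - 60, cy + 60, 0, cam.height);
    vertex(cx + 60, cy + 60, cam.width, cam.height);
    vertex(cx + 60, cy - 60, cam.width, 0);
    endShape(CLOSE);
  }
}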

Of note, I took heed of Danny's suggestion to use different camera models to avoid conflicts and headaches: I'm using a Logitech C920 and a C922.

Part 4: Taking Advantage of ITP's Touch Screens
It's one thing to develop within the comfort of your own laptop environment, but I envisioned this on a touch screen for users to play with their own images. First I messed with the screens along the hallway to the lounge, quickly realizing that any new piece of hardware in the mix requires its own special attention with regard to the logistics of wire connections, as well as finding the right resolution to agree with my Processing canvas size. But those screens are out of reach for kids, and so I turned my focus to the large vertically hung Planar monitor near the elevators in ITP's lobby. It took some wire wrangling, HDMI port-finding on the back, driver installing, and help from Marlon, but we got it to work on one of the ER's MSI laptops, which was waaay faster than my Mac.

Instead of changing the laptop display to portrait mode to fit the new screen, as I began to do, Danny offered the BRILLIANT suggestion of simply rotating the actual cameras and attaching them (with tape for now) to the sides of the monitor. For my final Pixel by Pixel iteration, I attached one camera to the side of the screen and mounted the other on a tripod behind the user. Both camera feeds streamed onto the display, but to be honest, the "back" version wasn't as noticeable. For the Spring Show version, I plan to mount both cameras on opposite sides of the screen; folks want to find their faces, so give them more of those, I will!

Part 5: Finishing Up & Conclusions - Is it a Mirror, Painting, or Portrait?
Standing before the screen, one's image is elusive: the polygons act like jumbled lenses, and it becomes a puzzle to assemble yourself. Distorting the face and figure is endless entertainment for me, but it's possible the shapes are too sharp. Their jaggedness, however, is softened by the fading painterly effect of the combined blend modes.

Shortly after I began this project, I started saving screenshots to document my process, and not long after, considered it a type of photo booth. Without this feature, it's simply a funny mirror, albeit one with plenty more options to alter your appearance. Since its migration to the big monitor, I incorporated a double-tap feature to save the screen sans keyboard, using this old forum post as a reference. Processing tells me that the MouseEvent method getClickCount() is deprecated, but hey, it still works.
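
For what it's worth, Processing 3's MouseEvent also offers getCount(), which avoids the deprecation warning. A minimal sketch of the double-tap save:

void setup() {
  size(400, 400);
}

void draw() {
  background(0);
}

void mouseClicked(MouseEvent e) {
  // getCount() reports consecutive clicks, so 2 means a double tap;
  // the ##### pads the frame number, e.g. screenshot-00042.png
  if (e.getCount() == 2) {
    saveFrame("screenshot-#####.png");
  }
}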

Thanks!
Thank you so much, Marlon! I'm very grateful for your time with the Planar. Also to Danny Rozin for the inspiration and to Dan Shiffman for checking in on his way down the hall. And to my many wonderful and talented classmates who stopped by to watch, play, chat, and give advice: I am in awe of you every day.

Code on GitHub
Portraits created at the ITP Spring Show 2018 @paintingmirror

Week 13: American Landscapes

Trash. We toss it into the bin, to the bag, to the curb, and then to who knows where? But we do it every day, and I was curious to see the dumping grounds. What do they look like? Where are they? How much do they contain?

So I built a tool to help me visualize some of the country's solid waste landfills, around 2,450 of them. Using the Google Maps API and data sourced from the Landfill Methane Outreach Program (a voluntary EPA program that identifies potential sites for the recovery of methane emissions as a renewable energy source), I launched an Instagram bot on Earth Day, @americadumps, to share satellite images of each site along with its state, latitude and longitude, whether it's open or closed, and its trash by the tonnage. More on the why and the how (including the code) in last week's post here.

Instead of trying to view all the images at once in an image editor like Lightroom, I chose Instagram because the platform is designed specifically for pictures with corresponding captions. Setting my own delivery pace allows me to spend a deliberate amount of time with each image and share them with others in the process.

Distortion accompanies any type of representation, especially visual ones, and certainly we've discussed in this class how satellite imagery is processed and stitched together. In spending considered time with these landfill views (as of this writing there are 416 posts) in conjunction with their stats, I appreciate how utterly the landscape is flattened and collapsed into the picture space. After the flattening, I noticed the numbers. What does it mean to cover 500,000 tons of waste? 1,000,000 tons? Or even 100,000,000 tons? These are awesome figures: how does one begin to understand solid waste management at this colossal scale? The views from Google Maps isolate the sites from the very communities they serve, and so my focus became: how are the landfills situated in the landscape in relation to their communities?

From the records with available data, I decided to make some pictures of the ten largest landfills from this dataset, again as a tool to continue my learning process.* After trying Google Earth Pro and Bing, I visited Google Maps through my browser and employed the tilt feature to reveal the horizon and to try to suggest some depth. Again it's a great distortion, but I do appreciate the atmospheric perspective of Google's rendering, which helps me attempt these points of view. I aimed to locate each landfill with a sightline to the nearest metropolitan area. If lucky, I found a baseball diamond or tennis court in the foreground to provide a sense of scale.

Following up on each site through YouTube videos and actual photographs impresses upon me the enormity of these structures, and I'm finding ever more questions to explore, including how to scale the data so that it's human-relatable. For example, how many trash bags fit into a garbage truck? How many tons do trucks transport? How many tons are deposited in a day, a week, a year?

Six of the locations are closed or soon-to-be, and my next questions include: What happens next? Where does the trash go now? How are the old sites maintained, and what about those methane emissions and the prevention of groundwater contamination from leachate? Of the closed sites posted so far on Instagram, I looked up a few to find conversions into parks for humans, environmental restorations for local flora and fauna, and developments into golf courses, resorts, and even an airport. The story continues...

*A reminder for future Ellen that you made this decision after revisiting the splendid works of the celebrated American landscape painter, Thomas Cole, at The Met. Some of his paintings preserve an idyllic version of the wilderness and others foretell environmental damage due to encroaching development. 

@americadumps
Code on Github
View Slides

Related References
How Landfills Work
Land of Waste: American Landfills and Waste Production
Garbology: Our Dirty Love Affair with Trash by Edward Humes

...added during summer:
Invisible Menace, PBS NewsHour, 7/11/18 (touching on methane & landfills) 
Designing Waste: Strategies for a Zero Waste City, Exhibition at the Center for Architecture
Zero Waste Design Guidelines (for NYC)
 

Week 12: Word Ninja

Playtesting was a blast, and playing the game, which went quite fast for some groups, nodded toward familiar themes from the class, such as: who leads and who follows? There's an awkwardness to negotiate as players try to authentically contribute to the conversation while sneaking in their words. Successful strategies included talking fast, first, and often, as well as responding to everything. Overall it forced us to consider what makes for good conversation.

Problem to Solve 1: Increase the Challenge
As always, we received useful feedback to consider for our next iteration and final presentation. A key takeaway from in-class playtesting was that it was too easy to cheat. As Hadar mentioned, we needed to develop an “element of danger” around saying the words.

In our post-class meetings, we considered many options, but in the end, incorporated the following constraints: 

Converse naturally but keep the conversation focused on a randomly provided topic, now displayed once the game starts. Options include: career, politics, money, food, religion, music, hobbies, family and relationships, travel, school, and environment. We feel the topics are balanced between easy-going and possibly more sensitive areas of discussion. Word lists, unrelated to the topics, were randomly generated and hard-coded into the JSON file.

Players can catch others sneaking words, but at the risk of a penalty if they’re wrong. If one player challenges another, the challenged player must reveal their previous word, now programmed to appear in gray at the top of their screen (they can cover their current word with their hand). If the challenger is right, then they tap to their next word. If wrong however, then the other player taps to their next word.

Problem to Solve 2: Duplicate Words on Mobile Devices
Fortunately a few fellow ITPers were ready for a game break after we implemented the new conditions. It was quite successful, in that instead of racing through each round, participants took their time conversing. On several occasions, however, Dan reported that after a word tap, his new word would appear twice on his phone: once at the bottom AND also at the top, now reserved for the previous word. After nearly an hour and a half of troubleshooting, we confirmed that the issue was on the input side; the server was indeed sending out one word at a time as intended. Not only that, but we were only able to reproduce the problem on mobile devices. We eventually determined (thanks to this recall from ICM on debugging mobile Safari) that screen locking was forcing a disconnect from the server, and upon reconnect, any new words were duplicated in the input device's word array. The solution? For now, just tell players to disable auto-lock on their phones before the game starts.

For Future Development: Consider implementing a dictionary API to call for random words. Currently our game can be played as many times as we have separate word lists, which right now is eleven.

Go ahead, be a Word Ninja!
Play and remix on Glitch

A quick demo: output screen above with two inputs below. Imagine each input is a mobile device, hidden from the other players. Try to sneak the word in white into the conversation. Once accomplished, tap for your next word.


Feedback from guests and classmates during final critique: 
The game reminded our visitors of Taboo and other activities in the party-game genre. Jellyvision Games was also mentioned.

The instructions to get up and running (especially for our class visitors) could be clearer. It was suggested to put them on the "Play" screen.

There is confusion if a word challenge is issued before the challengee taps to their next word. Players wanted to know how that person could "go back" or pick up a penalty word. We assumed that users would have already tapped to their next word and did not foresee this situation.

The conversation felt forced and awkward (oooh, but we like this aspect!), even with a topic. More constraints were suggested to structure the discussion. Maybe everyone tells a story together, each person adding on a sentence? Maybe the game is to say a sentence and have everyone guess the word.

The content of the conversation was inconsequential. It was also suggested to pose more specific, perhaps polarizing topics to encourage folks to express their opinions. 

Consider increasing the difficulty of the words as the game continues. 

Users reported feeling no pressure to say their words. Perhaps include a timer or add words after a certain amount of time. Perhaps provide feedback on the output screen indicating when a player (maybe anonymously) has one word remaining. Also, refresh the screen every 10-15 seconds to publish the already-spoken words.

Anthony mentioned that calling out someone on their word disrupted the flow of the conversation. Maybe there is a way to do this through the mobile interface instead?

Thank you, everyone!