Week 14: Painting Mirror

Move the pieces and assemble your reflections in this interactive Cubist mirror.
How many ways can you put yourself together?


Project Description
In real life and in digital spaces, we offer multiple versions of ourselves for numerous reasons. Painting Mirror plays with this idea and proposes continuous ways of seeing and presenting yourself. Using dual camera angles and the open-source software Processing, video reflections are fragmented, resized, duplicated, and scattered across the surface. The pieces blend together, and moving them paints the screen anew. The impulse might be to reassemble your image into a whole, but can you?

This is a continuation of a recent project, which I wanted to push further to see what I might uncover. Though there's more to explore, I learned a lot along the way. Here's a recap of the highlights.

Part 1: Leaving the Land of Rects
My first instinct was to move beyond the land of rectangles and draw irregular quadrilaterals. I learned that I could use the beginShape() and endShape() functions, with vertex() calls for the X and Y coordinates in between, to draw more complicated polygons (see Shiffman's PShape tutorial). Vertices are drawn in a counterclockwise direction starting with the leftmost corner.
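For reference, a minimal sketch of that idea in Processing (the coordinates below are arbitrary placeholders, not values from my project):

void setup() {
  size(400, 300);
}

void draw() {
  background(0);
  // One irregular quadrilateral: four vertex() calls between beginShape()
  // and endShape(CLOSE), listed in order around the shape.
  beginShape();
  vertex(60, 150);    // leftmost corner
  vertex(190, 220);   // bottom
  vertex(330, 120);   // right
  vertex(150, 60);    // top
  endShape(CLOSE);
}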

Using this Danny Rozin example from class as a guide, I adapted my original sketch in a similar fashion. The polygon constructor still receives random values for an X and a Y, but instead of using these as the location for the leftmost corner of the shape, they become its centroid. Random numbers are then generated for the calculation of each X and each Y of every new vertex. Those values are stored in a two-dimensional array, or matrix, and each corner of the shape is created (again in counterclockwise fashion) by subtracting them from or adding them to the X and Y values of the centroid. Though we've been working with image matrices all semester, this exercise strengthened my understanding of how I might use them in other contexts.
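A hedged sketch of that construction, using my own placeholder names (cx, cy, offsets) rather than the ones in the actual sketch:

class Poly {
  float cx, cy;        // centroid handed to the constructor
  float[][] offsets;   // offsets[corner][0] = X offset, offsets[corner][1] = Y offset

  Poly(float cx_, float cy_) {
    cx = cx_;
    cy = cy_;
    offsets = new float[4][2];
    for (int i = 0; i < 4; i++) {
      offsets[i][0] = random(20, 60);   // random distances from the centroid
      offsets[i][1] = random(20, 60);
    }
  }

  void display() {
    // Corners built counterclockwise by adding the stored offsets to, or
    // subtracting them from, the centroid.
    beginShape();
    vertex(cx - offsets[0][0], cy - offsets[0][1]);  // upper left
    vertex(cx - offsets[1][0], cy + offsets[1][1]);  // lower left
    vertex(cx + offsets[2][0], cy + offsets[2][1]);  // lower right
    vertex(cx + offsets[3][0], cy - offsets[3][1]);  // upper right
    endShape(CLOSE);
  }
}

Each polygon is then scattered across the canvas by constructing it with something like new Poly(random(width), random(height)).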

In the previous version of this project, I used the copy() function to grab and duplicate pixels from the canvas into my rects. Danny’s sketch and this tutorial showed me how to load and map parts of the camera feed into my irregular quadrilaterals using texture(). As in the earlier version, I’m still playing with the scale of the newly-mapped areas when doing so. 

In order to employ texture(), I changed the default rendering mode to P3D, which uses OpenGL and increases the overall speed of my sketch.
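Put together, the texture mapping looks roughly like this (assuming a single 640x480 camera; the vertex and uv numbers are placeholders):

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480, P3D);               // texture() requires the P2D or P3D renderer
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  background(0);
  noStroke();
  beginShape();
  texture(cam);                      // fill the polygon with the live feed
  // Each vertex is x, y, u, v: the uv pair picks which camera pixels land on
  // that corner, so shrinking or shifting the uv region rescales the mapped slice.
  vertex(100, 150,   0,   0);
  vertex(120, 320,   0, 240);
  vertex(300, 340, 320, 240);
  vertex(280, 120, 320,   0);
  endShape(CLOSE);
}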

Part 2: Oh Yeah, Mouse Dragging
Once I converted the shapes, it was time to update the mouse dragging. It was no longer a simple process of checking whether the mouse was within the bounds of a plain old rect. After falling down several rabbit holes for a solution, Shiffman recommended two options: draw all the polygons offscreen as PGraphics objects, each with its own unique color to check against, or try the Toxiclibs library. And then I found a third: Karsten Schmidt, the author of Toxiclibs, wrote an example on the Processing forum taking advantage of this point-in-polygon algorithm. It's possible that I came across this before, but it was too early in the quest for me to understand what I was seeing and what I needed. In any event, I incorporated it into my sketch, using another array to store all of the vertices of each shape, and it worked!
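The heart of that test is the classic ray-casting check. Paraphrased (this is not the forum code verbatim), it looks something like this:

// Cast a horizontal ray from (px, py) and count how many polygon edges it
// crosses; an odd count means the point lies inside the polygon.
boolean containsPoint(float[] xs, float[] ys, float px, float py) {
  boolean inside = false;
  for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
    if ((ys[i] > py) != (ys[j] > py) &&
        px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
      inside = !inside;
    }
  }
  return inside;
}

Calling it with mouseX and mouseY inside mousePressed() is enough to decide which polygon, if any, should start dragging.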

(I remember adjusting the offSet calculations, too. In my original version, offSets for X and for Y were based on the center of each rect (width/2 and height/2), but now that I had a centroid, it was easy to just replace those with the X and Y centroid values.)

Part 3: Adding Another Camera
One camera is great, but with two webcams I can show the viewer dual angles of themselves simultaneously. So I created a new class in my Processing sketch to feed the second camera into a new batch of polygons. The code is nearly identical except for the part declaring a different camera in the texture() call. Gosh, I'm repeating so much code! Surely there MUST be a way to rewrite it more efficiently. I haven't wrapped my head around how to do that just yet...it awaits as a very fun problem to solve. (I'm also still wondering if I might want to treat the camera feeds differently, and at least for right now, keeping them separated gives me the space to think about the possibilities.)
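One possible refactor, purely speculative on my part: a single class that is handed whichever Capture object it should sample, so both feeds share one code path (camFront and camSide below are hypothetical names):

class CamPoly {
  Capture cam;     // the feed this polygon textures itself with
  float cx, cy;    // centroid

  CamPoly(Capture cam_, float cx_, float cy_) {
    cam = cam_;
    cx = cx_;
    cy = cy_;
  }

  void display() {
    beginShape();
    texture(cam);  // the only line that currently differs between the two classes
    vertex(cx - 40, cy - 40,   0,   0);
    vertex(cx - 40, cy + 40,   0, 120);
    vertex(cx + 40, cy + 40, 160, 120);
    vertex(cx + 40, cy - 40, 160,   0);
    endShape(CLOSE);
  }
}

// In setup(), polygons for either feed would then come from the same class:
// polys.add(new CamPoly(camFront, random(width), random(height)));
// polys.add(new CamPoly(camSide,  random(width), random(height)));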

Of note, I took heed of Danny's suggestion to use different camera models to avoid conflicts and headaches: I'm using a Logitech C920 and a C922.

Part 4: Taking Advantage of ITP's Touch Screens
It's one thing to develop within the comfort of your own laptop environment, but I envisioned this displayed on a touch screen for users to play with their own images. First I messed with the screens along the hallway to the lounge, quickly realizing that any new piece of hardware in the mix requires its own special attention with regard to the logistics of wire connections as well as finding the right resolution to agree with my Processing canvas size. But those screens are out of reach for kids, and so I turned my focus to the large vertically hanging Planar monitor near the elevators in ITP's lobby. It took some wire wrangling, HDMI port-finding on the back, installing of drivers, and help from Marlon, but we got it to work on one of the ER's MSI laptops, which was waaay faster than my Mac.

Instead of changing the laptop display to portrait mode to fit the new screen, as I began to do, Danny offered the BRILLIANT suggestion of simply rotating the actual cameras and attaching them (with tape for now) to the sides of the monitor. For my final Pixel by Pixel iteration, I attached one camera to the side of the screen and mounted the other on a tripod behind the user. Both camera feeds streamed onto the display, but to be honest, the "back" version wasn't as noticeable. For the Spring Show version, I plan to mount both cameras on opposite sides of the screen; folks want to find their faces, so give them more of those, I will!

Part 5: Finishing Up & Conclusions - Is it a Mirror, Painting, or Portrait?
Standing before the screen, one's image is elusive: the polygons act like jumbled lenses, and it becomes a puzzle to assemble yourself. Distorting the face and figure is endless entertainment for me, but it's possible the shapes are too sharp. Their jaggedness, however, is softened by the fading painterly effect of the combined blend modes.

Shortly after I began this project, I started saving screenshots to document my process and, not long after, considered it a type of photo booth. Without this feature, it's simply a funny mirror, albeit one with plenty more options to alter your appearance. Since its migration to the big monitor, I incorporated a double-tap feature to save the screen sans keyboard, using this old forum post as a reference. Processing tells me that the MouseEvent method getClickCount() is deprecated, but hey, it still works.
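For reference, the non-deprecated route appears to be the getCount() method on Processing's own MouseEvent; a minimal sketch (the filename pattern is just an example):

void mouseClicked(MouseEvent event) {
  // getCount() reports how many clicks or taps arrived in quick succession.
  if (event.getCount() == 2) {
    saveFrame("portraits/mirror-####.png");   // #### becomes the frame number
  }
}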

Thanks!
Thank you so much, Marlon! I'm very grateful for your time with the Planar. Also to Danny Rozin for the inspiration and to Dan Shiffman for checking in on his way down the hall. And to my many wonderful and talented classmates who stopped by to watch, play, chat, and give advice: I am in awe of you every day.

Code on GitHub
Portraits created at the ITP Spring Show 2018 @paintingmirror

Week 13: American Landscapes

Trash. We toss it into the bin, into the bag, to the curb, and then to who knows where? But we do it every day, and I was curious to see the dumping grounds. What do they look like? Where are they? How much do they contain?

So I built a tool to help me visualize some of the country's solid waste landfills, around 2,450 of them. Using the Google Maps API and data sourced from the Landfill Methane Outreach Program (a voluntary EPA program that identifies potential sites for the recovery of methane emissions as a renewable energy source), I launched an Instagram bot on Earth Day, @americadumps, to share satellite images of each site along with its state, latitude and longitude, whether it's open or closed, and its trash by the tonnage. More on the why and the how (including the code) in last week's post here.

Instead of trying to read all the images at once in an image editor like Lightroom, I chose Instagram because the platform is designed specifically for pictures with corresponding captions. Setting my own delivery pace allows me to spend a deliberate amount of time with each image and share them with others in the process.

Distortion accompanies any type of representation, especially visual ones, and certainly we've discussed in this class how satellite imagery is processed and stitched together. In spending considered time with these landfill views (as of this writing there are 416 posts) in conjunction with their stats, I appreciate how utterly the landscape is flattened and collapsed into the picture space. After the flattening, I noticed the numbers. What does it mean to cover 500,000 tons of waste? 1,000,000 tons? Or even 100,000,000 tons? These are awesome figures: how does one begin to understand solid waste management at this colossal scale? The views from Google Maps isolate the sites from the very communities they serve, and so my focus became: how are the landfills situated in the landscape in relation to their communities?

From the records with available data, I decided to make some pictures of America's ten largest landfills, again as a tool to continue my learning process.* After trying Google Earth Pro and Bing, I visited Google Maps through my browser and employed the tilt feature to reveal the horizon and try to suggest some depth. Again it's a great distortion, but I do appreciate the atmospheric perspective in Google's rendering, which helps me attempt these points of view. I aimed to locate each landfill with a sightline to the nearest metropolitan area. If lucky, I found a baseball diamond or tennis court in the foreground to provide a sense of scale.

Following up on each site through YouTube videos and actual photographs impresses upon me the enormity of these structures, and I'm finding ever more questions to explore, including how to scale the data so that it's human-relatable. For example, how many trash bags fit into a garbage truck? How many tons do trucks transport? How many tons are deposited in a day, a week, a year?

Six of the locations are closed or soon-to-be, and my next questions include: What happens next? Where does the trash go now? How are the old sites maintained--what about those methane emissions and the prevention of groundwater contamination from leachate? Of the closed sites posted so far on Instagram, I looked up a few to find conversions into parks for humans, environmental restorations for local flora and fauna, and developments into golf courses, resorts, and even an airport. The story continues...

*A reminder for future Ellen that you made this decision after revisiting the splendid works of the celebrated American landscape painter, Thomas Cole, at The Met. Some of his paintings preserve an idyllic version of the wilderness and others foretell environmental damage due to encroaching development. 

@americadumps
Code on Github
View Slides

Related References:
Mapping America's Mountains of Garbage
Land of Waste: American Landfills and Waste Production

Week 12: Word Ninja

Playtesting was a blast, and playing the game, which went quite fast for some groups, nodded toward familiar themes from the class, such as: who leads and who follows? There's an awkwardness to negotiate as players try to authentically contribute to the conversation while sneaking in their words. Successful strategies included talking fast, first, and often, as well as responding to everything. Overall it forced us to consider what makes for good conversation.

Problem to Solve 1: Increase the Challenge
As always we received useful feedback to consider for our next iteration and final presentation. A key takeaway from in-class playtesting was that it was too easy to cheat. As Hadar mentioned, we needed to develop an “element of danger” around saying the words.

In our post-class meetings, we considered many options, but in the end, incorporated the following constraints: 

Converse naturally but keep the conversation focused on a randomly provided topic, now displayed once the game starts. Options include: career, politics, money, food, religion, music, hobbies, family and relationships, travel, school, and environment. We feel the topics are balanced between easy-going and possibly more sensitive areas of discussion. Word lists, unrelated to the topics, were randomly generated and hard-coded into the JSON file.

Players can catch others sneaking words, but at the risk of a penalty if they're wrong. If one player challenges another, the challenged player must reveal their previous word, now programmed to appear in gray at the top of their screen (they can cover their current word with their hand). If the challenger is right, then they tap to their next word. If wrong, however, then the other player taps to their next word.

Problem to Solve 2: Duplicate Words on Mobile Devices
Fortunately a few fellow ITPers were ready for a game break after we implemented the new conditions. It was quite successful, in that instead of racing through each round, participants took their time conversing. On several occasions, however, Dan reported that after a word tap, his new word would appear twice on his phone: once at the bottom AND also at the top, now reserved for the previous word. After nearly an hour and a half of troubleshooting, we confirmed that the issue was on the input side; the server was indeed sending out one word at a time as intended. Not only that, but we were only able to reproduce the problem on mobile devices. We eventually determined (thanks to this recall from ICM on debugging mobile Safari) that screen locking was forcing a disconnect from the server, and upon reconnect, any new words were duplicated in the input device's word array. The solution? For now, just tell players to disable auto-lock on their phones before the game starts.

For Future Development: Consider implementing a dictionary API to call for random words. Currently our game can be played as many times as we have separate word lists, which right now is eleven.

Go ahead, be a Word Ninja!
Play and remix on Glitch

A quick demo: output screen above with two inputs below. Imagine each input is a mobile device and hidden from other players. Try to sneak the word in white into the conversation. Once accomplished, tap for your next word.

Feedback from guests and classmates during final critique: 
The game reminded our visitors of Taboo and other activities in the party game genre. Jellyvision Games was also mentioned.

The instructions to get up and running (especially for our class visitors) could be more clear. It was suggested to put them on the "Play" screen.

There is confusion if a word challenge is issued before the challengee taps to their next word. Players wanted to know how that person could "go back" or pick up a penalty word. We assumed that users would have already tapped to their next word and did not foresee this situation.

The conversation felt forced and awkward (oooh, but we like this aspect!), even with a topic. More constraints were suggested to structure the discussion. Maybe everyone tells a story together, each person adding on a sentence? Maybe the game is to say a sentence and have everyone guess the word.

The content of the conversation was inconsequential. It was also suggested to pose more specific, perhaps polarizing topics to encourage folks to express their opinions. 

Consider increasing the difficulty of the words as the game continues. 

Users reported feeling no pressure to say their words. Perhaps include a timer or add words after a certain amount of time. Perhaps provide feedback on the output screen indicating when a player (maybe anonymously) has one word remaining. Also, refresh the screen every 10-15 seconds to publish the already-spoken words.

Anthony mentioned that calling out someone on their word disrupted the flow of the conversation. Maybe there is a way to do this through the mobile interface instead?

Thank you, everyone!

Week 12: America Dumps

During my Vintage Fountains project, I enjoyed the anticipation and reveal of each new fountain. Sure I could visit the image results page of any search engine, but with that project I found myself spending time with each individual picture, considering the life of the object(s) pictured and the photographer’s decisions. I enjoyed the extremely slowed-down, one image-at-a-time pace. But that was on Twitter, and right now Instagram rules the photo sharing scene.

I’ve been on Instagram for two weeks now learning how to scrape images from public hashtag pages, only to remix and throw them right back from whence they came (see @autoechoes). In the process of playing, I observed content from some of the more popular hashtags. There’s a tag for nearly everything and plenty of skin, faces, food, and camera-beautiful landscapes and lifestyles. I found the quantity of posts astonishing: at the time of this writing, over 341 million in #selfie, 435 million in #happy, 520 million in #photooftheday, 746 million in #instagood, and 1.2 billion in #love. It’s a positive place, this Instagramland. (By comparison only 870,000 in #unhappy and 24 million posts in #sad.) I found the likes and followers an alluring distraction (apparently for some the temptation is too great). I asked friends and colleagues about their Insta experiences. Many shared pics to connect with friends and family and/or to participate in threads related to interests and hobbies. Some commented on self-branders and corporate marketing strategies.

After a week it all started to look about the same (smiley, centered, saturated, squared), and I started to wonder about what I was not seeing. If so many people are using this platform (are you up to one billion yet, Instagram?), then could it be used to bring to light places far removed from folks' like-radars? Places rarely sought out in real life, much less shared online for followers. Like landfills, for example. Waste of all kinds is universal. Humans have been burying (and sometimes building on top of) their trash for thousands of years. It’s one of the hallmarks of civilization. Why don’t we discuss it more, specifically how it allows society to function…or in the emergence of the Anthropocene, maybe eventually not so well? Is there an unsustainable cost to coveted #lifestyles?

Launched in honor of Earth Day, @americandumps posts satellite views of some 2,450 solid waste landfills in the United States. Included with each image is its state, latitude and longitude, whether it's open or closed, and the amount of waste in place* in tons. All data was sourced from Google Maps and the February 2018 Data Files from the Landfill Methane Outreach Program, a voluntary EPA program. LMOP “works cooperatively with industry stakeholders and waste officials to reduce or avoid methane emissions from landfills” by “[encouraging] the recovery and beneficial use of biogas generated from organic municipal solid waste.” According to their database, the total waste in place for sites where that data is available currently exceeds 11 billion tons of trash.

My project uses two scripts: one to retrieve the satellite image of each site and the other to upload it to Instagram. A bit about my process (all code linked below): 

  1. After retrieving the LMOP data, I added my own ID field, changed state abbreviations to full names, removed spaces from those full names to prep them for the hashtags, and duplicated the latitude and longitude columns, inserting “Data Missing” into the empty fields (also for Instagram caption display). Afterwards, I formatted the file as CSV and then converted it to JSON.
  2. Next, I wrote get_images.py to iterate through each landfill record in the JSON file and call the Google API with the latitude and longitude coordinates of each site (see the rough sketch just after this list).
  3. With that working, I downloaded all of the images at two different zoom levels, 15 and 16, to compare. Though I prefer the detail at zoom 16, sites are less likely to get cropped at 15. In addition, 15 provides greater context, showing how each landfill is situated within the landscape and its size compared to any surrounding community.**
  4. Then, I built upload_images.py to retrieve each satellite image and post it to Instagram along with its corresponding information. Hashtags were chosen because of their relevance and popularity: combined, their total posts sum to over one billion.
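The real pipeline lives in the Python scripts linked below; purely to illustrate the request itself, here is a Google Static Maps call of the kind the script presumably makes, sketched in Processing (the coordinates and key are placeholders, not values from the LMOP data):

String lat = "40.6928";            // example coordinates
String lon = "-73.9866";
String apiKey = "YOUR_API_KEY";    // placeholder

String url = "https://maps.googleapis.com/maps/api/staticmap"
           + "?center=" + lat + "," + lon
           + "&zoom=15"            // 15 keeps more surrounding context than 16
           + "&size=640x640"
           + "&maptype=satellite"
           + "&key=" + apiKey;

// loadImage() accepts a URL; the second argument supplies the image format
// since the URL has no file extension.
PImage tile = loadImage(url, "png");
tile.save("landfill_example.png");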

Of note, I came across this error early during upload testing:

Request return 400 error!
{u'status': u'fail', u'message': u"Uploaded image isn't in the right format"}

Turns out that Instagram refused photos straight out of Google Maps. Somehow it occurred to me to try opening and saving the images as new files using the Pillow library in get_images.py, and that did the trick.

*This report defines waste in place “as all waste that was landfilled in the thirty-year period before the inventory year, calculated as a function of population and per capita waste generation.” 
**Unfortunately I forgot to switch the zoom back to 15 until 157 images were posted.

@americandumps
Code on Github

Week 11: Final Project Progress

Sometimes you work really hard only to discover that you need to take a hard left and veer from your current course. That’s what happened this week for our Collective Play final project.

Immediately after our previous class we got to work. We realized that it was not an intimate relationship that we were necessarily interested in cultivating through our human-only interaction tests. Instead we were curious as to how the uninterrupted gaze (usually reserved for significant people in our lives) seemed to evoke stronger feelings of connection. This felt like an important point to articulate.

We then playtested with ourselves and folks on the floor to work out some questions. We compared looking at one another in person versus over FaceTime in different rooms. The over-the-computer version never fully delivered in the same way. Not a surprise after the fact, but it was useful to run the experiment and dissect why. (Sometimes when you're so zoomed into a problem to solve, you miss the obvious.) First, it was a struggle to line up the camera angles to align your gaze with the other person's, and we never quite achieved an exact match. Second, aside from the missing sensory sensations, you’re not fully there for the other person when most of your body is hidden. Overall it lacked the nervousness or excitement of the IRL activity. Of the two folks from the floor who had never spoken in person, one participant reported that she felt like she was looking at a picture. The other mentioned that he saw eye contact as an invitation to converse if you don’t know the other person well. But if you do know them, then extended contact carries other meanings. Also, it just wasn’t…well, fun.

All of this helped us to remember our core ideas: we’re interested in cultivating connection between people, and to this end, we deemed it useful to design an activity that included a goal and the inability to hide over an extended amount of time.

And then we took a break for a few days to reevaluate our next steps.

Totally stuck and uninspired by our current direction, I decided to focus on the missing verbal conversation. For our next iteration, I somehow stumbled upon and/or remembered The Tonight Show’s Word Sneak game. The idea is to sneak your words (unknown to the other players) into the conversation “as casually and seamlessly as possible.” We played it ourselves (with words from a random generator) during our next group meeting, and then riffed off of it, giving it our own twists. In our current version, the first person to use their words wins. I’ve only seen the game played between two people on the show. We wondered what might happen to the conversation dynamics if more people played. Would people be civil and take turns or dominate the conversations with their interjections? Would they draw from personal experience and/or just make it up as they went along? The television show version makes use of random words, but what if we provided participants with words from a charged theme that might push against people's boundaries--we thought of many topics: money, religion, politics, job/career, family relationships, stereotypes, death, and dreams/hopes/disappointments.

With its open-ended nature, you never know where the conversation will take you. After we built the underlying socket framework to play on our phones (once the game starts you tap for your word), we played several times with themed and random word lists and learned more about each other each time. In a way it relates to our previous work in that it requires players to be fully present and engaged, attentively looking and listening to one another to keep up with the conversation. When we played with a topic-themed list, family relationships, the conversation got personal very quickly. As I became invested in what my group member shared, I got concerned that my contributions might appear disingenuous because of the incentive to use my next words.

During our playtesting session this week, we hope groups get through two rounds: the random list and then a "serious" topic list. We expect quite different feedback from both.

Week 11: Impossible Representations

Screenshot from Esri's Satellite Map

D’Ignazio’s 2015 article, "What Would Feminist Data Visualization Look Like?", gets right to the heart of why maps are so darn impossible. Her design proposals for “more responsible representation” in data visualization ask us to consider how to make known the uncertainties, the missing, as well as the flawed methods and tools employed in their construction. She calls for a display of the motivations and decision-making process, as well. Finally, she asks how representations might allow for interrogation—in the context of this class: how can we make maps that are fluid and capable of presenting “alternative views and realities?”

She describes interactive data visualizations as “currently limited to selecting some filters, sliding some sliders, and viewing how the picture shifts and changes from one stable image to another as a result.” Again considering this class, how can we imagine maps to be more interactive than providing similar cosmetic choices? (Perhaps a better word to use is reactive.) I’ve enjoyed considering interactivity during my studies at ITP, and for me it goes beyond pressing a button to light an LED or walking back and forth in front of a responsive projection of myself. Interactivity is a medium of expression, and its meaning arrives out of connecting and engaging with others. So then how can we imagine maps to be interactive? (Wait, does my Waze app count? Maybe. I would call it more useful than meaningful or expressive, though.)

Speaking of challenging the representation of data, this first reading was a useful primer for the next: part of an introduction to Kurgan’s book, Close Up at a Distance: Mapping, Technology, and Politics, in which she illuminates some historical motivations and work behind the production of satellite imagery, noting that such images “come to us as already interpreted images, and in a way that obscures the data that has built them.” Though we’re now used to seemingly seamless high-resolution imagery of our Google Earth globe, it’s really a collage of heavily rendered composite photographs from a variety of sources and rendering methods. Renderings that are as much scientific as they are artistic. There is always a distortion in any representation, and certainly in any image (ah, the impossibility of photographs). The question is how to somehow deliver the metadata and decisions with the imagery to help viewers understand their own interpretations in the context of those distortions.

For my final project, I’m interested in working more with satellite imagery, but in what capacity I’m currently unsure. Curious, I found a map of the satellites orbiting the planet. There are A LOT, nearly 17,000 with more on the way. I also wondered about the breakdown between the artistic and scientific processing of satellite imagery. I read the mentioned Charlie Loyd article, “Processing Landsat 8 Using Open-Source Tools,” which describes many decisions for processing satellite imagery (brightness, contrast, adjusting midtones and individual color channels, and sharpening), similar to developing digital photos in Photoshop or Lightroom and hardly objective. (What exactly are the scientific aspects of this type of processing anyway?) Making my own map tiles seems ambitious to tackle, but it's fun to consider covering the world in imagery of my own design. A conversation with a classmate sparked ideas about collecting and comparing imagery from different periods in time over the Rio Grande (ooh, representations of nation-state borders). Finally, a random but connected thought: how long until we get to see live global satellite video? What happens (has it already?) when you can’t commune with nature alone because of the eyeballs above? You can already watch a livestream from the International Space Station here (and screenshots from 4/13/18 at 1:30am below). I'm looking forward to considering all of this further!

Week 10: Human-Only Playtesting (Part 2)

In our playful but intense in-class improv activities this week with a local Tisch experimental theater professor, we engaged in lengthy non-verbal eye-gazing sessions with three different people. Lin and I paired for one of those, and I can certainly report having a much better sense of her now than before the exercise. It’s difficult to pinpoint how exactly, but there is a difference. I feel I “know” her better. This puzzling experience inspired a lengthy conversation after class with both Lin and another classmate, Maria, and we decided to team up to further explore eye communication as a possible entryway into our final project.

Lin and I designed a human-only interaction (Maria ran a separate one due to her schedule) that turned out to address some of my lingering questions from last week and present some new ones. We structured the new interaction with time limits and specified goals. Unlike last week, however, it included (a couple) more participants and directed them differently depending on their role.

Here’s how it went. First, we found four people. One person was seated across from the other three and given a description of a game with two parts. In the first part, they could choose to look at any of the folks across from them for a certain amount of time that we, the observers, did not disclose. A ringing bell would mark the end of that time and a transition into the second part of the game, during which they could choose to do, and look at, whatever or whomever they decided—whatever made them feel comfortable, again for an undisclosed amount of time. The other three players, clueless to what the fourth person was told, were instructed to compete for the attention of that individual during part one. In part two, they were instructed to not look at anyone at all. All participants were asked to refrain from talking. When the “game” started, Lin set a timer for 20 seconds only after the two people (in this case, Lu and Roland) commenced their extended gaze; the second part lasted for around three minutes. We were curious about several areas: What happens when the attention you just received is denied? What happens when you do not receive any attention? And of course, what happens to you during a longer-than-normal non-verbal eye chat?

We specifically chose classmates from the floor who we thought might be open to trying a vaguely-explained staring contest experience. While we recognize that different players might have provided very different accounts, our post-game discussion with and without the participants nevertheless yielded useful questions for further consideration. Roland reported that though it’s not socially acceptable to stare at people, especially women, and though the beginning of his stare with Lu felt long, at a certain point it was quite pleasant. Lu agreed and noted that eye contact acquires different meaning when accompanying verbal conversation. Shreiya and Steven both missed out on Lu’s attention and mentioned feeling confused and defeated…although Steven felt a brief sense of comfort when he and Roland momentarily glanced at each other during part two. I honestly did not expect our non-Collective Play classmates to accept the task we gave them so well, and their amiable reactions post-play impressed on us the significance of making connections with others.

I’m simplifying a lot here, but it led to a much longer discussion between Lin and myself about how intimacy is forged between strangers. Aside from staring into someone’s eyes, what other norms might we disrupt/reverse/challenge? What if people fed one another during dinner? What if we investigated prolonged invasions into personal spaces? What if we encouraged lengthy periods of touch (hand-to-hand, hand-to-shoulder, etc.)? Keys to all of these scenarios are extended durations of time and the inability to hide.

We ultimately wondered if it was possible to design a digital space to foster sustained connections or intimacy between those who are otherwise unfamiliar with one another. Keeping it simple, we envisioned a scenario in which two people in different physical spaces each sit in front of a computer screen that initially displays a crop of the other's eyes. Continuous looking at the screen causes small sections to fade away and uncover each person's face. If either turns away, all that was revealed is hidden again for both participants. Would curiosity to fully see the other keep folks seated, attentive, and engaged in "slow looking" until both faces are fully unveiled? It's a case that potentially rewards being with someone in the moment, something that we usually do not equate with technology-mediated spaces.
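We haven't built any of this yet, but the core reveal loop might look something like the sketch below: a grid of opaque tiles over the other person's face that fades while a stand-in "looking" flag is true and snaps back to opaque the moment it isn't. The image file and the attention signal are placeholders for whatever video feed and gaze detection we might end up using.

PImage face;             // stand-in for the other participant's video frame
float[][] cover;         // per-tile opacity: 255 = fully hidden, 0 = revealed
int cols = 16, rows = 16;
boolean looking = true;  // placeholder for a real gaze/face-tracking signal

void setup() {
  size(640, 480);
  face = loadImage("face.jpg");   // placeholder asset
  cover = new float[cols][rows];
  for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
      cover[i][j] = 255;
    }
  }
}

void draw() {
  image(face, 0, 0, width, height);
  float tw = width / float(cols);
  float th = height / float(rows);
  noStroke();
  for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
      if (looking) {
        cover[i][j] = max(0, cover[i][j] - random(0.5, 2));  // slow, uneven reveal
      } else {
        cover[i][j] = 255;                                   // hide everything again
      }
      fill(0, cover[i][j]);
      rect(i * tw, j * th, tw, th);
    }
  }
}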