Week 11: Final Project Progress

Sometimes you work really hard only to discover that you need to take a hard left and veer from your current course. That’s what happened this week for our Collective Play final project.

Immediately after our previous class we got to work. We realized that it was not an intimate relationship that we were necessarily interested in cultivating through our human-only interaction tests. Instead we were curious as to how the uninterrupted gaze (usually reserved for significant people in our lives) seemed to evoke stronger feelings of connection. This felt like an important point to articulate.

We then playtested with ourselves and folks on the floor to work out some questions. We compared looking at one another in person versus over FaceTime in different rooms. The computer version never fully delivered in the same way. Not a surprise after the fact, but it was useful to run the experiment and dissect why. (Sometimes when you're so zoomed in on a problem, you miss the obvious.) First, it was a struggle to line up the camera angles to align your gaze with the other person's, and we never quite achieved an exact match. Second, aside from the missing sensory cues, you’re not fully there for the other person when most of your body is hidden. Overall it lacked the nervousness and excitement of the IRL activity. Of the two folks from the floor who had never spoken in person, one reported that she felt like she was looking at a picture. The other mentioned that he saw eye contact as an invitation to converse if you don’t know the other person well; if you do know them, then extended contact carries other meanings. Also, it just wasn’t…well, fun.

All of this helped us to remember our core ideas: we’re interested in cultivating connection between people, and to this end, we deemed it useful to design an activity that included a goal and the inability to hide over an extended amount of time.

And then we took a break for a few days to reevaluate our next steps.

Totally stuck and uninspired by our current direction, I decided to focus on the missing verbal conversation. For our next iteration, I somehow stumbled upon and/or remembered The Tonight Show’s Word Sneak game. The idea is to sneak your words (unknown to the other players) into the conversation “as casually and seamlessly as possible.” We played it ourselves (with words from a random generator) during our next group meeting, and then riffed off of it, giving it our own twists. In our current version, the first person to use all of their words wins. I’ve only seen the game played between two people on the show. We wondered what might happen to the conversation dynamics if more people played. Would people be civil and take turns, or dominate the conversation with their interjections? Would they draw from personal experience and/or just make it up as they went along? The television show version uses random words, but what if we provided participants with words from a charged theme that might push against people's boundaries? We thought of many topics: money, religion, politics, job/career, family relationships, stereotypes, death, and dreams/hopes/disappointments.

With its open-ended nature, you never know where the conversation will take you. After we built the underlying socket framework to play on our phones (once the game starts, you tap for your word), we played several times with themed and random word lists and learned more about each other each time. In a way it relates to our previous work in that it requires players to be fully present and engaged, attentively looking and listening to one another to keep up with the conversation. When we played with a topic-themed list, family relationships, the conversation got personal very quickly. As I became invested in what my group member shared, I grew concerned that my contributions might appear disingenuous because of the incentive to use my next words.
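
For the curious, here's a minimal sketch of the tap-for-your-word logic, written in Python with the python-socketio package. It's an illustration of the idea, not our actual build; the event names, word pool, and win condition are all placeholders.

```python
import random
import socketio

# Sketch only: each connected phone receives a private word list, tapping
# fires a "use_word" event to advance to the next word, and the first
# player to exhaust their list wins.
sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

WORD_POOL = ["inheritance", "allowance", "estranged", "heirloom", "curfew"]
WORDS_PER_PLAYER = 3
players = {}  # sid -> that player's remaining secret words

@sio.event
def connect(sid, environ):
    players[sid] = random.sample(WORD_POOL, WORDS_PER_PLAYER)
    sio.emit("your_word", players[sid][0], room=sid)  # private to this player

@sio.event
def use_word(sid, data):
    players[sid].pop(0)
    if not players[sid]:
        sio.emit("game_over", {"winner": sid})  # broadcast to everyone
    else:
        sio.emit("your_word", players[sid][0], room=sid)

if __name__ == "__main__":
    import eventlet
    import eventlet.wsgi
    eventlet.wsgi.server(eventlet.listen(("", 5000)), app)
```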

During our playtesting session this week, we hope groups get through two rounds: the random list and then a "serious" topic list. We expect quite different feedback from each.

Week 11: Impossible Representations

Screenshot from Esri's Satellite Map

D’Ignazio’s 2015 article, "What Would Feminist Data Visualization Look Like?", gets right to the heart of why maps are so darn impossible. Her design proposals for “more responsible representation” in data visualization ask us to consider how to make known the uncertainties, the missing, as well as the flawed methods and tools employed in their construction. She calls for a display of the motivations and decision-making process as well. Finally, she asks how representations might allow for interrogation—in the context of this class: how can we make maps that are fluid and capable of presenting “alternative views and realities?”

She describes interactive data visualizations as “currently limited to selecting some filters, sliding some sliders, and viewing how the picture shifts and changes from one stable image to another as a result.” Again considering this class, how can we imagine maps to be more interactive than providing similar cosmetic choices (perhaps a better word is reactive)? I’ve enjoyed considering interactivity during my studies at ITP, and for me it goes beyond pressing a button to light an LED or walking back and forth in front of a responsive projection of myself. Interactivity is a medium of expression, and its meaning arrives out of connecting and engaging with others. So then how can we imagine maps to be interactive? (Wait, does my Waze app count? Maybe. I would call it more useful than meaningful or expressive, though.)

Speaking of challenging the representation of data, this first reading was a useful primer for the next: part of an introduction to Kurgan’s book, Close Up at a Distance: Mapping, Technology, and Politics, in which she illuminates some of the historical motivations and work behind the production of satellite imagery, noting that they “come to us as already interpreted images,” in a way that obscures the data that built them. Though we’re now used to the seemingly seamless high-resolution imagery of our Google Earth globe, it’s really a collage of heavily rendered composite photographs from a variety of sources and rendering methods. Renderings that are as much scientific as they are artistic. There is always distortion in any representation, and certainly in any image (ah, the impossibility of photographs). The question is how to deliver the metadata and decisions along with the imagery to help viewers understand their own interpretations in the context of those distortions.

For my final project, I’m interested in working more with satellite imagery, but in what capacity I’m currently unsure. Curious, I found a map of the satellites orbiting the planet. There are A LOT, nearly 17,000 with more on the way. I also wondered about the breakdown between the artistic and scientific processing of satellite imagery. I read the mentioned Charlie Loyd article, “Processing Landsat 8 Using Open-Source Tools,” which describes many of the decisions that go into processing satellite imagery (brightness, contrast, adjusting midtones and individual color channels, and sharpening), similar to developing digital photos in Photoshop or Lightroom, and hardly objective. (What exactly are the scientific aspects of this type of processing anyway?) Making my own map tiles seems ambitious to tackle, but it's fun to consider covering the world in imagery of my own design. A conversation with a classmate sparked ideas about collecting and comparing imagery from different periods in time over the Rio Grande (ooh, representations of nation-state borders). Finally, a random but connected thought: how long until we get to see live global satellite video? What happens (has it already?) when you can’t commune with nature alone because of the eyeballs above? You can already watch a livestream from the International Space Station here (screenshots from 4/13/18 at 1:30am below). I'm looking forward to considering all of this further!
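
To make Loyd's point concrete for myself, here's a toy version of two of those processing moves in Python, assuming the satellite bands are already loaded as float arrays scaled 0 to 1 (say, via rasterio). Nothing here comes from his article's actual pipeline; it's just the shape of the judgment calls involved.

```python
import numpy as np

def adjust_gamma(band: np.ndarray, gamma: float) -> np.ndarray:
    """Lift or crush the midtones; gamma < 1 brightens, gamma > 1 darkens."""
    return np.clip(band, 0.0, 1.0) ** gamma

def stretch_contrast(band: np.ndarray, low_pct=2, high_pct=98) -> np.ndarray:
    """Clip the histogram tails and stretch what's left to the full range."""
    low, high = np.percentile(band, [low_pct, high_pct])
    return np.clip((band - low) / (high - low), 0.0, 1.0)

# Why gamma 0.8 on the red channel and not 0.9? There's no "correct" answer,
# which is exactly what makes the final image an interpretation, not a record.
red = stretch_contrast(adjust_gamma(np.random.rand(512, 512), 0.8))
```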

Week 10: Human-Only Playtesting (Part 2)

In our playful but intense in-class improv activities this week with a local Tisch experimental theater professor, we engaged in lengthy non-verbal eye-gazing sessions with three different people. Lin and I paired for one of those, and I can certainly report having a much better sense of her now than before the exercise. It’s difficult to pinpoint how exactly, but there is a difference. I feel I “know” her better. This puzzling experience inspired a lengthy conversation after class with both Lin and another classmate, Maria, and we decided to team up to further explore eye communication as a possible entryway into our final project.

Lin and I designed a human-only interaction (Maria ran a separate one due to her schedule) that turned out to address some of my lingering questions from last week and present some new ones. We structured the new interaction with time limits and specified goals. Unlike last week's, however, it included a couple more participants and directed them differently depending on their roles.

Here’s how it went. First, we found four people. One person was seated across from the other three and given a description of a game with two parts. In the first part, they could choose to look at any of the folks across from them for a certain amount of time that we, the observers, did not disclose. A ringing bell would mark the end of that time and a transition into the second part of the game, during which they could do and look at whatever or whomever they decided—whatever made them feel comfortable, again for an undisclosed amount of time. The other three players, clueless as to what the fourth person was told, were instructed to compete for the attention of that individual during part one. In part two, they were instructed not to look at anyone at all. All participants were asked to refrain from talking. When the “game” started, Lin set a timer for 20 seconds, but only after two of the people (in this case, Lu and Roland) had settled into their extended gaze; the second part lasted around three minutes. We were curious about several areas: What happens when the attention you just received is denied? What happens when you do not receive any attention? And of course, what happens to you during a longer-than-normal non-verbal eye chat?

We specifically chose classmates from the floor who we thought might be open to trying a vaguely explained staring-contest experience. While we recognize that different players might have provided very different accounts, our post-game discussions with and without the participants nevertheless yielded useful questions for further consideration. Roland reported that though it’s not socially acceptable to stare at people, especially women, and though the beginning of his stare with Lu felt long, at a certain point it became quite pleasant. Lu agreed and noted that eye contact acquires different meaning when accompanying verbal conversation. Shreiya and Steven both missed out on Lu’s attention and mentioned feeling confused and defeated…although Steven felt a brief sense of comfort when he and Roland momentarily glanced at each other during part two. I honestly did not expect our non-Collective Play classmates to accept the task we gave them so readily, and their amiable reactions post-play impressed on us the significance of making connections with others.

I’m simplifying a lot here, but it led to a much longer discussion between Lin and myself about how intimacy is forged between strangers. Aside from staring into someone’s eyes, what other norms might we disrupt/reverse/challenge? What if people fed one another during dinner? What if we investigated prolonged invasions of personal space? What if we encouraged lengthy periods of touch (hand-to-hand, hand-to-shoulder, etc.)? Key to all of these scenarios are extended durations of time and the inability to hide.

We ultimately wondered if it was possible to design a digital space to foster sustained connection or intimacy between those who are otherwise unfamiliar with one another. Keeping it simple, we envisioned a scenario in which two people in different physical spaces each sit in front of a computer screen that initially displays a crop of the other's eyes. Continuous looking at the screen causes small sections to fade away and uncover each person's face. If either turns away, all that was revealed is hidden again for both participants. Would curiosity to fully see the other keep folks seated, attentive, and engaged in "slow looking" until both faces are fully unveiled? It's a case that potentially rewards being with someone in the moment, something that we usually do not equate with technology-mediated spaces.
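
A rough Python sketch of the reveal logic we imagined, assuming each face is masked by a grid of tiles and that some gaze- or face-detection signal reports each tick whether both participants are looking (the detection itself is the hard part and isn't shown):

```python
import random

GRID = 8          # each face masked by an 8x8 grid of tiles (arbitrary)
FADE_STEP = 0.05  # how much one tile fades per tick of mutual looking

# opacity 1.0 = fully masked, 0.0 = fully revealed
tiles = [[1.0] * GRID for _ in range(GRID)]

def update(both_looking: bool) -> None:
    """Advance the reveal state by one tick."""
    if not both_looking:
        # either participant turned away: re-mask everything for both
        for row in tiles:
            row[:] = [1.0] * GRID
        return
    # otherwise fade a random still-masked tile a little further
    masked = [(r, c) for r in range(GRID) for c in range(GRID) if tiles[r][c] > 0]
    if masked:
        r, c = random.choice(masked)
        tiles[r][c] = max(0.0, tiles[r][c] - FADE_STEP)

def fully_revealed() -> bool:
    return all(t == 0.0 for row in tiles for t in row)
```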

Week 10: Not-So-Super Scenery Along the Mississippi

Pleased with the disorienting outcome of my Mississippi-only map last week, I was challenged to consider how I might develop it this week. Our reading had me thinking about how “non-human Others” map their version of the world, and spending time with the referenced interactive web documentary, Bear 71, keyed me into how human-made features obstruct (often fatally) animals' passage through their environments. That experience highlighted the physical structures to which animals must awkwardly adapt, and I started wondering about toxic chemical deposits in land areas that might, at least from the outside and to human eyes, look relatively safe. Curious whether and how many of these sites exist along the banks of the Mississippi River, I embarked on a scavenger hunt for a dataset of national Superfund sites, which, according to the U.S. Environmental Protection Agency, are contaminated sites that exist in the thousands “due to hazardous waste being dumped, left out in the open, or otherwise improperly managed.” The “EPA’s Superfund program is responsible for cleaning up some of the nation’s most contaminated land and responding to environmental emergencies, oil spills and natural disasters.” (Why oh why is it informally called Super?!)

I also wanted to get my hands dirty wrangling with data not of my own making. The EPA provides static and dynamic lists of Superfund sites online (check if there are any close to where you live here) but no JSON-formatted versions that I could find at data.gov. Next, I came across a relatively recent dataset on Kaggle scraped from the EPA’s National Priorities List (NPL), which includes Superfund site names along with their latitude and longitude coordinates. The JSON format of the file, while valid, did not look familiar. I attempted to convert it to CSV with the intention of then converting to GeoJSON, but as the file was 10MB, it kept hanging in online converter tools, even those claiming to handle large files. Fortunately, although not optimal, I found this map, created in 2014 also from the NPL data, which allowed me to download the data points in GeoJSON form.
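
For the record, the conversion I wanted from those online tools is only a few lines of Python run locally. The field names here (site_name, latitude, longitude) are guesses at the Kaggle file's keys, not the actual ones:

```python
import json

# Turn a list of records with lat/lon fields into a GeoJSON FeatureCollection.
with open("superfund_sites.json") as f:
    records = json.load(f)

features = []
for rec in records:
    features.append({
        "type": "Feature",
        "geometry": {
            "type": "Point",
            # GeoJSON wants [longitude, latitude], in that order
            "coordinates": [float(rec["longitude"]), float(rec["latitude"])],
        },
        "properties": {"name": rec["site_name"]},
    })

with open("superfund_sites.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f)
```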

From there it was a matter of figuring out how to incorporate another (in fact, a third) source and layer of data into my map. While this data covers all of the United States, I decided to maintain the minimalistic design and only reveal the contaminated areas while zoomed in to the river. (Okay okay, you can zoom out slightly with your mouse to survey more if you want.) Our early in-class earthquakes example helped me draw and paint circles for each Superfund site, and this Mapbox GL JS tutorial provided another way to add popups upon mouse clicks to those points. So now, cobbled together with my version from last week, my map deals with data in two ways—embedded in the HTML itself and also loaded from external files. It also handles the popup styling differently—one through CSS and the other directly in the HTML code. Probably not ideal, EXCEPT if you’re in the beginning stages of learning how it all works! And now it all makes sense to me.

I’ll add that installing the pretty-json Atom package was essential in helping me identify the fields to pull into the Superfund popups. Of note, sites fall into three categories: currently on the National Priorities List, proposed for the list, or removed from the list. I color-coded their circles accordingly and embedded that information into the popup along with a link to learn more about the site and EPA/community cleanup efforts. Ideally, I would prefer to include this information within each marker’s popup (and that Kaggle file provided lengthy site descriptions) instead of shooting users offsite and out of my map world.
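
Since the popups and circle colors both hinge on each site's NPL status, one approach (sketched below in Python, with an assumed status field name and my own color picks, not my actual code) is to bake a color property into the GeoJSON during preprocessing, so the map style can read it straight off each feature:

```python
import json

# The "npl_status" key and its values are assumptions about the dataset's
# actual field names; swap in whatever the file really uses.
STATUS_COLORS = {
    "Currently on the Final NPL": "#e55e5e",
    "Proposed for NPL": "#f4a261",
    "Deleted from the Final NPL": "#3bb2d0",
}

with open("superfund_sites.geojson") as f:
    collection = json.load(f)

for feature in collection["features"]:
    status = feature["properties"].get("npl_status", "")
    feature["properties"]["color"] = STATUS_COLORS.get(status, "#888888")

with open("superfund_sites_colored.geojson", "w") as f:
    json.dump(collection, f)
```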

While the Superfund site names sometimes disclose the current state's name and so reveal the user's position along the river, this additional information opens up all sorts of questions about what we can't see about the land from satellite photographs or from an IRL leisurely riverboat cruise.

Not-So-Super Scenery Along the Mississippi

Week 9: Human-Only Playtesting (Part 1)

I'm inclined to incorporate imagery or video of human faces and/or figures into a final project for Collective Play, although at this point I have no idea as to how or for what outcome. Still, this kind of imagery is meaningful (especially if it's of yourself), recognizable, and full of expressive potential. So for this week's preparatory exercise to playtest a human-only interaction, I attempted to move in this general direction and prepared an activity requiring participants to intentionally observe one another and move their bodies the entire time.

Inspiration for my game came from Augusto Boal's Games for Actors and Non-Actors, the first three parts of his mirrors sequence in particular. I ran the event on four separate occasions with different pairs of peers. Each time, I asked partners to start by facing one another and explained that one person would be the mirror image of the other, imitating with as much accuracy as possible any facial expressions or movements of their partner, all without talking. Following Boal, I suggested that anyone looking in on the activity should not be able to tell who was leading and who was following. The goal was not to trip each other up, but to see if they could move in sync. That was part 1. In part 2, partners swapped roles such that the mirrors were now leading. Finally, in part 3, I instructed participants to perform both roles simultaneously: they were free to move in any which way, but they were also to follow their partner. (This last round reminded me of the speaking-one-line-at-the-same-time activity in class a few weeks ago.) After each session, I asked my peers to jot down their feelings and what they noticed during the different stages.

With this event, I was curious to note how long it took for folks to express boredom, whether they maintained eye contact the entire time, and, by leaving it open-ended, what choreography they discovered together, especially during the third round when the leader/follower roles were ambiguous. I hoped to learn more about the emotional dynamics at play from the feedback I collected at the end. Here's what I found: A desire to move on to the next stage, which I interpreted as an expression of boredom, was always expressed by the leader somewhere between the 45-second and 2-minute mark. Overall, my unscientific takeaway from observing and reading the comments was that it was easier for the mirrors to follow along than to deal with the pressure of continually coming up with new moves (some expressed anxiety about this). But having each person take turns in the roles was useful practice for the final syncing stage, which was perhaps the most challenging (confusing to know who to follow) but also the most rewarding, even if they repeated movements from the prior rounds (they could fall back on a previously created and shared vocabulary). Though I expected partners to continually face each other and stay planted in their positions throughout, in two of the sessions I was surprised to see bodies turn and start moving in all directions through the space. Because of this, eye contact broke (I had expected maintaining it to be a must for syncing success), and I noticed folks intentionally trying to make it hard for the other person to follow (which was of course hilarious for all of us). Despite any confusion expressed during or after the game, there were generally smiles, laughter, and a good time shared by all. A HUGE thank you to all who playtested!

A few keywords:
Leaders - happy, excited, uncertain, nervous, manipulative, bored
Followers - engaged, fun, relaxed, confused (sometimes about how to flip the movements)
Simultaneous - uncertain, confused, rewarded

For next time, perhaps I'll give players a specific goal or several tasks to accomplish. I'd also like to play around with the timing and pace. What impact might giving a time limit to achieve a particular outcome have on the gameplay?

Week 9: Up the Mighty Mississippi

Discussing the Inuit maps in class last week ignited all sorts of ideas and questions about the forms of maps and why we use them. Including: is every map simply an extension of the self, radiating outwards and, in the process, locating the individual in the context of that projected space? If so, then all map marks are relational to the map holder: I am here and those points are there, but if I am over here, then they are now there. It's a constant conversation. And if (one of) a map's purposes is to navigate one's way through all of the relational data, then is it possible to create an experience that disorients the user? How would that work?

Recently, before this class started, I zoomed into a map of the United States and followed the Mississippi River along its entire route (approximately 2,350 miles) using only satellite imagery, no labels. The river's color and width changed all along its winding, northern meander. I tried to identify major cities and the states along the way, but ultimately I lost all sense of context, losing any relation to my starting point and my distance from the headwaters.

I decided to recreate that event and kept it in mind as I perused all of Mapbox's GL JS features, noting which ones I responded to the most. For this project they were: satellite imagery (of course; I'm also obsessed with the Earth View from Google Earth browser extension) and the various camera view options, specifically flying to a location and centering the map upon each symbol click. I decided to combine my own markers with this latter functionality so that the user might advance along the river at their own pace.

First, I began building my own GeoJSON dataset using this tool, and then integrated it into my map to test and set the zoom level and pitch just right at the southernmost end of Louisiana, where the mouth of the river meets the Gulf of Mexico. I settled on a zoom level of 12, when the satellite imagery becomes much sharper, more detailed, and clearly stitched together. Pitch introduced perspective, and it took me a second to figure out about how far apart to place my markers so that at least one new one would appear in view with each click. As I worked through the process of checking and re-checking my marker placements, it helped to display the GPS coordinates of my mouse pointer along with navigation controls, both of which I removed for the final version.
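
Here's roughly how I think about the spacing problem, as a Python sketch: thin an ordered list of waypoints traced up the river so that consecutive kept markers sit far enough apart to pull the viewer forward, but close enough that one is always in view. The coordinates and threshold below are illustrative, not my actual dataset:

```python
import json

# Keep only points whose combined lon/lat offset from the last kept marker
# exceeds a rough threshold, so a fresh marker appears in view per click.
waypoints = [
    (-89.25, 28.98),  # near the mouth of the river, south of Venice, LA
    (-89.40, 29.05),
    (-89.60, 29.15),
    # ... continuing upriver toward the headwaters
]

MIN_SPACING = 0.05  # rough degrees between kept markers at zoom 12

def thin(points, spacing):
    kept = [points[0]]
    for lon, lat in points[1:]:
        last_lon, last_lat = kept[-1]
        if abs(lon - last_lon) + abs(lat - last_lat) >= spacing:
            kept.append((lon, lat))
    return kept

features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {"order": i},  # click order up the river
    }
    for i, (lon, lat) in enumerate(thin(waypoints, MIN_SPACING))
]

print(json.dumps({"type": "FeatureCollection", "features": features}, indent=2))
```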

So take a journey up the Mighty Mississippi! Like last week's example, I'm offering a type of navigation through imagery. (How might this look with highways? What about animal migration routes?) Depending on your screen's size, it might work best in fullscreen. And in the event that you don't see the next symbol to click, simply scroll yourself up or around the river a bit to find it.

Up the Mighty Mississippi

 

Week 9: Vintage Fountains

Learning how to automate Twitter status updates this week inspired me to do so with images. Still a reserved social media user, I had an unused email account lying around that presented the perfect opportunity to practice my scraping skills and poke my head into the Twitterverse. 

To some the background story is familiar: in the spring of 1917, Marcel Duchamp anonymously submitted an artwork titled Fountain, signed R. Mutt, to the inaugural exhibition of the Society of Independent Artists, of which he himself was a board member. According to the show's rules, all submissions would be shown, and all were, except for Fountain, which was deemed not art by the exhibition committee on account of its being a urinal. This was not Duchamp’s first readymade, but it is perhaps one of his better-known pieces and a hallmark of an emerging conceptual art movement.

My project tweets vintage urinals as R. Mutt with the hashtags #fountain and #arthistory. The images are randomly selected from DuckDuckGo’s image search results. At first I found some success using Selenium and the method I used for Top Rep$ (finding and moving through elements using XPath), but this returned only the first 50 results--most likely a scrolling issue. Indeed, from scrolling down the page and inspecting the last image, I knew that there were ~330 possibilities. Sam reminded me to check the Ajax calls through the browser’s developer console (Network > XHR), and from there I pulled a link within which I found another link that gave me the image sources as JSON-formatted data. I quickly discovered how I could iterate through all of the results by manipulating a value in this URL.
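
For anyone retracing this, the flow looks something like the following Python sketch. Fair warning: this rides on DuckDuckGo's unofficial, undocumented endpoints, so the parameter names (including the mysterious vqd token I mention below) reflect what I saw in the XHR calls and could change at any time:

```python
import re
import requests

QUERY = "vintage urinals"

# 1. Load the search page and pull the vqd token out of the HTML.
page = requests.get("https://duckduckgo.com/", params={"q": QUERY})
match = re.search(r"vqd=['\"]?([\d-]+)", page.text)
vqd = match.group(1)

# 2. Hit the JSON endpoint the page itself calls, paging with the "s" offset.
image_urls = []
offset = 0
while True:
    resp = requests.get(
        "https://duckduckgo.com/i.js",
        params={"q": QUERY, "o": "json", "vqd": vqd, "s": offset},
        headers={"Referer": "https://duckduckgo.com/"},
    )
    results = resp.json().get("results", [])
    if not results:
        break
    image_urls.extend(r["image"] for r in results)
    offset += len(results)

print(len(image_urls), "image URLs collected")
```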

From there I wrote two scripts: one to search, scan, and download a picture of a vintage urinal, and another to authenticate into and post the photo to @iamrmutt’s Twitter account. The first script stores all image URLs in an array from which a random one is selected, and the associated file is subsequently downloaded to disk. (Update! In retrospect, after running this for a week, I should also store which links are randomly chosen in a text file and check against it to prevent repeat posts.) The second script imports from the first the variable containing the filename of the saved photo and then uses the Tweepy library to post it via Twitter’s API. So though I have two scripts, I only have to call one to complete the entire process. (Of note, since I download the photo with the same filename every time, and because my first script will not download an image if one with the same name already exists, my status-update script deletes the file from my local disk after sending it to Twitter to prevent the same image from posting each time.)
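
Condensed, the two-script flow looks something like this (Tweepy 3.x; the keys, filenames, and helper names are placeholders rather than my actual code):

```python
import os
import random

import requests
import tweepy

FILENAME = "urinal.jpg"  # fixed name, matching the "same filename" behavior

def download_random_image(image_urls, filename=FILENAME):
    """Pick a random search result and save it to disk under a fixed name."""
    if os.path.exists(filename):
        return filename  # mirror the "don't re-download if it exists" guard
    resp = requests.get(random.choice(image_urls))
    with open(filename, "wb") as f:
        f.write(resp.content)
    return filename

def post_to_twitter(filename):
    """Authenticate, tweet the saved image, then clean up the local copy."""
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
    api = tweepy.API(auth)
    api.update_with_media(filename, status="#fountain #arthistory")
    os.remove(filename)  # delete so the next run downloads a fresh image
```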

Troubleshooting update! Since I drafted this post, two issues arose. The first was that my requests to the original DuckDuckGo URL stopped working with the keywords vintage urinal. Plopping it into my browser returned a blank page except for, "If this error persists, please let us know: ops@duckduckgo.com." However, after making the keyword plural, I was back in business...for a while, until that broke too, and I changed it to urinals vintage... I also received this Tweepy error twice in a row: "tweepy.error.TweepError: [{u'message': u'Error creating status.', u'code': 189}]." Though I was able to post text-only updates at the time, the issue eventually resolved itself after a couple of hours. (Update on the update: Adding the vqd number to the request URL allows me to search with the original keywords, but I have yet to uncover what this value is exactly. Also, I noticed that the Tweepy error occurs when the downloaded image is zero bytes. Could this be because the image no longer exists online?) All good to know for future projects.
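
Given that last observation, a guard like this (continuing the sketch above) would skip the tweet whenever the download comes back empty:

```python
import os

# Skip the tweet when the downloaded file is empty -- likely the source
# image has since disappeared online.
if os.path.getsize(FILENAME) == 0:
    os.remove(FILENAME)
    raise SystemExit("Downloaded image was zero bytes; skipping this run.")
```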

Code on GitHub