Week 7: Top Rep$


Introducing Top Rep$, 115th Congress Edition: an educational card game inspired by Top Trumps to promote awareness of the members of the United States Congress and to provoke dialogue about the current state of campaign finance and federal lobbying, with a specific focus on funds received or spent in support of representatives who favor gun rights legislation. With the continued congressional gridlock on gun control, Top Rep$ is dedicated to the surviving families of our country's most recent mass shooting in Parkland, Florida.

For the assignment to automatically manipulate scraped images or video, I built a tool to help me learn more about Congress and what monies flow into members' campaign offices. I was initially motivated to understand who favors gun rights in a country with more mass shootings than anywhere else. I know that the United States' relationship with guns is complicated, and this data is simply a slice of a much larger story. Nevertheless, I'm curious, and it's a place to start. Though I began at the Federal Election Commission, it wasn't long before I discovered the Center for Responsive Politics, an organization that not only analyzes and creates contribution profiles from the FEC's campaign finance data but also publishes reports on a variety of issues, including gun rights, on their website OpenSecrets.org.

Similar to the original game, each card in Top Rep$ depicts one person, five categories of numerical values, and some brief additional information. Each card, 531 in all at the moment, contains thirteen pieces of data. Some of the data was scraped and some copied and pasted; I organized it all into one Google spreadsheet to populate an InDesign template and generate individual cards for each representative. While the total deck includes both U.S. Senators and House Representatives, it does not include non-voting House members, nor does it account for current House vacancies. I collected all information from March 11-13, 2018.

 My representative in the House.


Here's a breakdown of each data point and the process by which I retrieved it. All code is posted on GitHub, along with some of the raw data, my final spreadsheet, and the digital card files.

  1. Portraits - Thumbnail images for members of the 115th Congress were scraped from Congress.gov and converted to grayscale with a subprocess call to ImageMagick. My Beautiful Soup start quickly earned a 429 error, so I switched to Selenium, located each image URL with XPath syntax, and, because of inconsistent file naming, retitled each image with the representative's name (a rough sketch of the process appears after this list). A number of members (~40, mostly recently elected officials) lacked images, so I used this blank person clip art, adding my own neutral gray background.
  2. Party Affiliation - Republican, Democratic, and Independent converted from the abbreviations, R, D and I, on the OpenSecrets' Gun Rights vs. Gun Control spreadsheet.
  3. Office - Senator and Representative converted from the abbreviations, S and H, on the OpenSecrets' Gun Rights vs. Gun Control spreadsheet.
  4. State - State abbreviations extracted from the Distid field on the OpenSecrets' Gun Rights vs. Gun Control spreadsheet. I also learned how to add a custom function to Google Sheets to convert those state abbreviations into their full names using code from Dave Gaeddert (see the example after this list).
  5. Names - Collected and compared against several sources: the congress.gov portrait scrape, the OpenSecrets.org directory and their Gun Rights vs. Gun Control data tabulation, and committee assignment information for both chambers (see below).
  6. Committees - Already in table format, this data was copied and pasted from the House of Representatives Committee Information and Senate Committee Assignments. Of note, the Office of the Clerk for the House of Representatives does not list joint committee assignments, but the Senate does. This information was added to House representatives where appropriate using committee websites as source material. Since historical data is provided on the playing cards, it's worth mentioning that brand-new committees as of 2018 include the Joint Select Committee on Solvency of Multiemployer Pension Plans and the Joint Select Committee on Budget and Appropriations Process Reform.
  7. Years in this Chamber - Again using XPath syntax, I scraped "Years in Service" from the directory at Congress.gov. For some reason I'm still unclear about, I was unable to pull data from both chambers at once, so I wrote two scripts, filtering the chamber in the initial URL request. To calculate the total number of years in the spreadsheet, I extracted the base year from the string and subtracted it from 2018, calculating by hand for those officials serving nonconsecutive terms in the House. It was not until I started working with the data that I realized that, again, the most recently elected officials (within the last year) were missing, so their service years were also computed by hand. Of note, Top Rep$ lists the number of years served in the current chamber; however, some senators sat in the House before the Senate. Career totals for gun rights support and campaign fundraising, though, account for all of their years as an elected federal representative.
  8. Gun Rights - From the OpenSecrets.org Gun Rights vs. Gun Control document, specifically the "Career Gun $ to 115th Congress" sheet, I added amounts from three columns: Total from Gun Rights (Pink), Gun Control Opposed (Blue), and Gun Rights Support (Blue). Based on FEC data from February 1, 2018, the "money in the 'pink' columns is the total money given to the member's campaign or leadership PAC from gun rights or gun control PACs or individuals in all of CRP's data (back to 1989 for members for whom that is relevant)", and "money in the 'blue' columns is money spent by outside groups supporting and opposing these candidates."
  9. NRA Grand Total - From the same document, specifically the "NRA spending (115th Congress)" sheet, I pulled data from the field, NRA Grand Total. Notes from the Center for Responsive Politics regarding this data: "These are career totals, and so therefore can go as far back as 1989. NRA direct support includes contributions from the NRA PAC and employees to candidates. Indirect support includes independent expenditures (and electioneering communications) supporting the candidate; opposition is IEs and ECs opposing the candidate. 'Independent expenditures for opponent' is spending by the NRA supporting a candidate OTHER than the member listed (note that could be someone of the same party, if they supported someone else in the primary), and "Indep Expend against opponent" is spending by the NRA opposing a candidate OTHER than the member listed. For the grand total, we summed the direct support + indirect support + indep expend against opponent, and then subtracted indirect opposition and indirect expenditures for the opponent. This produces a grand total, which can be, and often is, negative. A negative value indicates that the NRA tends to oppose this member."
  10. Campaign Committee Fundraising Top Industry - After spending some time in the OpenSecrets congressional directory, I decided to maintain consistency with the gun rights data and pull career totals for each member of the 115th Congress, in this case the leading industry and the total amount contributed. I was curious to see this landscape over time (how do totals compare for varying lengths of service?) and in comparison to members' committee work (do the top industries align with their assignments?). I tried and failed to scrape this information: except for the representative's name, the top industry name and total amounts always returned empty strings (even though at one point I figured out how to extract the elements' attribute data). While I located a number of related issues on Stack Overflow, nothing panned out. Fully committed to the project at this point and working to meet a printing deadline, I visited each page individually and copied the information I needed. (My thoughts on this non-automated process are below.)
  11. Campaign Committee Fundraising Top Industry Amount - See above.
  12. Campaign Contributor Top Contributor - Same as above but for top contributors.
  13. Campaign Contributor Top Contributor Amount -  Same as above but for top contribution amounts.
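A note on the portrait scrape mentioned in item 1: my actual scripts used Selenium in Python along with a subprocess call to ImageMagick, but the shape of the process translates to roughly the Node sketch below, using the selenium-webdriver package and a shell call to ImageMagick's convert. The XPath expression, URLs, and file names here are placeholders rather than the real ones.

const { Builder, By } = require('selenium-webdriver');
const { execFile } = require('child_process');
const https = require('https');
const fs = require('fs');

// download an image URL to disk
function download(url, dest) {
  return new Promise((resolve, reject) => {
    const file = fs.createWriteStream(dest);
    https.get(url, (res) => {
      res.pipe(file);
      file.on('finish', () => file.close(resolve));
    }).on('error', reject);
  });
}

async function scrapePortrait(memberUrl, memberName) {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get(memberUrl);
    // find the portrait by XPath (placeholder expression) and grab its src
    const img = await driver.findElement(By.xpath('//img[contains(@class, "member-image")]'));
    const src = await img.getAttribute('src');
    // rename the file after the representative rather than the site's inconsistent names
    const outFile = memberName.replace(/\s+/g, '_') + '.jpg';
    await download(src, outFile);
    // shell out to ImageMagick to convert the thumbnail to grayscale
    execFile('convert', [outFile, '-colorspace', 'Gray', 'gray_' + outFile], (err) => {
      if (err) console.error(err);
    });
  } finally {
    await driver.quit();
  }
}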
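And the custom Google Sheets function from item 4: the version below isn't Dave Gaeddert's exact code, but it shows the general idea of a Google Apps Script custom function (the function name STATENAME is my own). Once saved in the Sheets script editor, it can be called from a cell like any built-in formula, e.g. =STATENAME(D2).

var STATE_NAMES = {
  AL: 'Alabama', AK: 'Alaska', AZ: 'Arizona', AR: 'Arkansas', CA: 'California',
  CO: 'Colorado', CT: 'Connecticut', DE: 'Delaware', FL: 'Florida', GA: 'Georgia',
  HI: 'Hawaii', ID: 'Idaho', IL: 'Illinois', IN: 'Indiana', IA: 'Iowa',
  KS: 'Kansas', KY: 'Kentucky', LA: 'Louisiana', ME: 'Maine', MD: 'Maryland',
  MA: 'Massachusetts', MI: 'Michigan', MN: 'Minnesota', MS: 'Mississippi', MO: 'Missouri',
  MT: 'Montana', NE: 'Nebraska', NV: 'Nevada', NH: 'New Hampshire', NJ: 'New Jersey',
  NM: 'New Mexico', NY: 'New York', NC: 'North Carolina', ND: 'North Dakota', OH: 'Ohio',
  OK: 'Oklahoma', OR: 'Oregon', PA: 'Pennsylvania', RI: 'Rhode Island', SC: 'South Carolina',
  SD: 'South Dakota', TN: 'Tennessee', TX: 'Texas', UT: 'Utah', VT: 'Vermont',
  VA: 'Virginia', WA: 'Washington', WV: 'West Virginia', WI: 'Wisconsin', WY: 'Wyoming'
};

// custom function: converts a two-letter abbreviation to the full state name
function STATENAME(abbrev) {
  if (!abbrev) return '';
  return STATE_NAMES[String(abbrev).toUpperCase().trim()] || abbrev;
}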

Additional credits: It's one thing to look at data across a spreadsheet and another to see it take form as individual playing cards in InDesign. To spiffify them I added the Great Seal of the United States 1904 Coat of Arms to the backs and used the fonts Gregory Packaging and Proxima Nova Light.

So, yeah: I visited each representative's profile at OpenSecrets.org to copy and paste the campaign fundraising data. I did check the API beforehand, but it did not look helpful for what I intended. Next time I'll investigate requesting a custom data report or troubleshoot my initial scraping hurdle in this area. That being said, retrieving the data became a meditation, and I started to visualize the geography of the country through this one lens of industry in a way that might have eluded me had I not been so thoroughly steeped in the data (or until I actually played the game a million times). Receive large contributions from Microsoft? You most likely represent the state of Washington. Walmart? Arkansas. Funded by oil and gas companies? You probably hail from large states west of the Mississippi. And so on. In addition, although not surprising, committee assignments often matched with top industries and contributors. Serve on the Committee on Financial Services? There's likely a large bank (or several) on your list. The Senate Committee on Health, Education, Labor, and Pensions? Look for the health professionals or pharmaceuticals/health products industries. As for gun rights support recipients, aside from the expected partisan divide, I have more studying to do. But it does make me wonder about other tangible ways of working with this and similar data to see relationships I hadn't otherwise considered. (And for future Ellen: I'm quite intrigued by some of the anomalies tracked on OpenSecrets.org, such as the percentage of contributions politicians receive from outside their states.)

Week 6: Photo Booth Prototype


Add yourself to the gallery here (while it lasts): Photo Booth Prototype
Code on GitHub

It all clicked into place, everything we've learned in these brief six weeks. Though not ready for an audience beyond my in-class demo, I made substantial progress on converting my photo booth into an online prototype, and I'm excited for my peers' critical feedback. In the process, I incorporated nearly all of the web app features we practiced this semester and developed a solid understanding of how they integrate in one environment. Built on a Node server with the Express framework, my web app serves over HTTPS, handles GET and POST requests, uses an EJS template engine, and saves and retrieves data from a connected cloud database, making use of AJAX/jQuery methods to do so with JSON-formatted data. I also ventured out of the comfort of the P5 library to combine HTML, CSS, and JavaScript with my P5 canvas (a HUGE step for me). To support this entire process, I learned how to use the console (of both my server and the browser) to troubleshoot code errors and print out data as I built out changes and additions. My increasing familiarity with reading and writing code in various syntaxes sharpened my online search queries, and I'm now good friends with Stack Overflow. In addition, I grew increasingly comfortable with git and GitHub to back up my project and deliver it to my server.

For this most recent iteration of my project, I updated the user flow. Prior to this week, the user experience was a quick one: click the start button, see a random image, and then view the resulting diptych immediately afterwards. Curious about the code requirements to display more than one image, I modified the process and added two extra images.

Here's an outline of the current version including the behind-the-scenes tech:

Now upon reaching the site, visitors are welcomed with a brief explanation that their portrait will be taken and displayed in a gallery shared with other participants. Should they choose to enter, they press a button to position themselves in front of their computer's camera. Opening this initial page should trigger a browser message requesting access to the computer's camera; once access is granted, the camera turns on immediately (notice the red dot in the Chrome browser tab) to prevent delays in subsequent stages of the app.


Position Yourself for the Camera
A live video feed plays back to the user. Once situated within the frame they click the button to proceed. 
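For reference, the camera piece itself is only a few lines of P5. A minimal sketch (layout and sizing aside) looks something like this:

let capture;

function setup() {
  createCanvas(640, 480);
  // prompts the browser for camera access and starts the live stream
  capture = createCapture(VIDEO);
  capture.hide(); // hide the default video element; we draw the feed to the canvas instead
}

function draw() {
  image(capture, 0, 0, width, height); // draw the current camera frame each loop
}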


View Images
Next, several images display in succession. Each image is named with a number--0.jpg, 1.jpg, 2.jpg--making it easy to preload them into an image array when the app starts and call them up when needed here, in both instances with a for loop. Each image displays for 2 seconds, but at the 1.5-second mark, behind the scenes, a portrait is taken of the viewer (using an offscreen image buffer), that portrait is rendered into a diptych with the other image (again with an offscreen buffer), and the diptych file is saved into a directory (as described in last week's post). For this week's iteration, I figured out how to send the index number of the display image along with the diptych string to the server. Now the diptych's filename and the index of its accompanying display image are saved as a JSON object in my connected Mongo database. This makes for easy queries and displays of files on the Gallery page. Speaking of which, back to the client side: when the user reaches the final image in the series, a button to the Gallery appears near the bottom of the screen.
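Here's a rough sketch of those two pieces: the numbered preload and the POST that carries the index along with the diptych string. The field name displayIndex and the image folder are stand-ins for whatever the real code uses; the /save URL is the same one from last week's post.

let displayImages = [];
let numImages = 3; // 0.jpg, 1.jpg, 2.jpg

function preload() {
  // numbered filenames make loading the display images a simple for loop
  for (let i = 0; i < numImages; i++) {
    displayImages[i] = loadImage('images/' + i + '.jpg');
  }
}

// called at the 1.5-second mark with the Base64 diptych string and the
// index of the image the viewer is currently looking at
function saveDiptych(img, currentIndex) {
  $.ajax({
    url: "https://emn200.itp.io:443/save",
    type: "POST",
    dataType: "json",
    data: {
      image: img,
      displayIndex: currentIndex
    },
    success: function(response) { console.log(response); },
    error: function(response) { console.log(response); }
  });
}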

Visit the Gallery
Here users scroll through diptychs grouped by the images they just viewed. This was the most technically challenging part for me to solve this week. First, I figured out how to query the database on the index/filename of the display image and draw one related diptych to the screen. (Coding best practice: do one thing at a time. First get one image, then focus on getting all of them.) For this I wrote a jQuery AJAX method to make a GET request, with the display image's index value, to a specific route on my server, which used that value to query the database. The response rendered the related diptych on my Gallery page, an EJS template. Next, I learned how to display all of the related diptychs. I knew to use a for loop to repeat over the JSON objects, but at first I couldn't incorporate the JavaScript into my jQuery method without getting errors. It took a while, but I finally came across this Stack Overflow post, which suggested generating all of the HTML I needed with the JavaScript for loop first, storing it in a variable, and then dropping that into my jQuery method. Awesome! Finally, Shawn helped me set the value of each Gallery sidebar image button to equal its index number and, on click, pass that value into the call method at the right place. As for the minimal CSS styling, ITP resident Mathura pointed me in the direction of Flexbox to keep all items aligned and scrolling smoothly.
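The gallery request follows this general pattern (the route, JSON field, and element ID below are placeholders for the real ones): build all of the HTML in a for loop first, then hand the finished string to jQuery in one go.

function loadGallery(displayIndex) {
  $.ajax({
    url: "/gallery/" + displayIndex, // placeholder route that queries the database by display image
    type: "GET",
    dataType: "json",
    success: function(diptychs) {
      // assemble all of the HTML in a plain for loop first...
      let html = "";
      for (let i = 0; i < diptychs.length; i++) {
        html += '<img class="diptych" src="/diptychs/' + diptychs[i].filename + '">';
      }
      // ...then drop the finished string into the page
      $("#gallery").html(html);
    },
    error: function(err) { console.log(err); }
  });
}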

Feedback for Future Development
On the "position yourself" page: flip the video feed. It's not a mirror image, and people are not used to seeing non-mirrored images of themselves when they look into any kind of reflective surface, including one generated by a live camera feed.

Maybe give additional direction to make sure folks center their faces for their cameras?

Consider the length of viewing time before the user's portrait is taken. Maybe make it longer so folks can arrive at a conclusion or opinion about what they are seeing. And maybe just show users one image at a time, instead of several in a row.

What if you captured short video clips or GIFs of participant reactions instead of still photographs? 

There was some confusion as to what people were seeing on the gallery page. Several thought that their portrait was paired with an animal as the result of a machine learning algorithm that matched similar-looking faces. Shawn suggested adding some text, e.g. "Person looking at _____".

Provide an option for users to delete their images from the gallery. But also think about how you might integrate the gallery feed with an Instagram or Twitter account.

For many, it all happened too fast to understand what was happening. Isa suggested considering ways to slow down the process. The typical photo booth is usually way too fast, but this is not your typical photo booth. 

Test this version on browsers beyond Chrome and on devices other than a Mac laptop--can you make it responsive? (For example, Amitabh mentioned that the gallery's sidebar pics appeared as hyperlinked text until he changed the resolution of his Windows laptop.)

You're currently saving both images at a time as diptychs, which are ~1.3 MB and a little slow to load from the Digital Ocean Droplet (at least when you're at home). Depending on the gallery design, perhaps just save the portrait of the user and redraw the original image alongside it to halve the overall file size.

Also, during the in-class demo, diptychs were not necessarily categorized with the correct animal image on the gallery page. This means that the incorrect index number of the animal photo was saved into the database when the diptych was created. Why the misfiring? Is this from multiple uses of the app all at once?

I'm not convinced the project should live online, but I certainly learned a lot in the process of building it out for this platform.

As a project reference, see Kyle McDonald's People Staring at Computers (2011).

A special thanks to both Shawn and Mathura for your support on this project, especially for the extra tips and encouragement along the way! And thanks so much to everyone in class for the constructive questions and suggestions after the demo.

Week 6: Taking Turns in Creature Consequences

After focusing on paired activities last week, we turned to queuing for this week's assignment. Maria and I teamed up with James and Kai, and since all of us created partner-based painting/drawing apps last week, we found inspiration in the drawing adaptation of the surrealists' game, Exquisite Corpse. In the game, also known as Picture Consequences, players take turns drawing a portion of a person or a fantastic creature on the same piece of paper without the other participants seeing each other's sections until the full drawing is revealed at the very end. We looked to these paper-based examples online here and here, as well as Xavier Barrade's awesome online version, as additional models.

First, the four of us played in a traditional analog way (with a regular 8.5" by 11" piece of paper), folding the page into four sections--one for the head, then the torso, next the waist to knees, and finally, the knees to feet. I'm kicking myself for not documenting this version because seeing our composite creature at the very end was well worth the wait! We had fun, and it motivated us to pursue a digital rendering of the game which we called Creature Consequences.

Using a classroom's whiteboard walls, we sketched out what the screen might look like and outlined the code behavior for each person's turn. We envisioned a screen divided into four even rectangles, each the width of the window, from top to bottom. On each player's turn, their rectangle would temporarily disappear to expose the canvas beneath, onto which they could draw their assigned piece of the figure. Their marks would be constrained to that portion of the canvas only. Pressing the Return key ends their turn and advances the next section of the drawing to the next player in the queue. The canvas is only activated for players with active turns; other participants may not leave marks on the screen while they wait. When the last player completes the feet area, pressing Return hides all rectangles to display the full image to everyone on their own screens.

We adapted and built on top of Mimi's human auto-complete example for the server-side queuing and methodically worked through each item in our list (mentioned above), play testing as we went to clarify functionality and root out any bugs. To start, we figured out how to draw rectangles as HTML elements over our P5 canvas. Then we figured out how to link their visibility to the queue position of each player. Next we implemented the drawing feature, being sure to send that information to the server to broadcast to all input screens (normalizing the location of the marks by dividing by device width on the emit and multiplying the incoming data by device width on receipt). After solving how to constrain players to their specific sections of the screen, we played another round and, after creating a completely misaligned character, decided to extend each player's drawable range into the next player's portion so each person would know exactly where to continue the drawing.
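The normalization itself is only a couple of lines. A minimal sketch of the idea, assuming socket.io and made-up event names:

// client: send positions as fractions of the window so they work on any screen size
function mouseDragged() {
  socket.emit('draw', { x: mouseX / width, y: mouseY / height });
}

// client: scale incoming marks back up to this device's dimensions before drawing
socket.on('draw', function(data) {
  noStroke();
  ellipse(data.x * width, data.y * height, 5, 5);
});

// server: rebroadcast each mark to every other connected player
socket.on('draw', function(data) {
  socket.broadcast.emit('draw', data);
});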

Here are a few screenshots from our initial meeting, as well as some hastily-drawn characters from our play testing:

In our opening conversation as a group, we reviewed our experiences of waiting during our in-class games. Either we were engaged with how the game was playing out (or anxious that our turn was approaching in Zip! Zap! Zoop!) or completely tuned out. Kai suggested that in our online game we include the option for waiting participants to interfere with the drawing player's "pen," such as changing the hue, stroke, and opacity (alpha) values. A bit of a last-minute addition, this feature emits all the slider values and broadcasts them from the server in one bundle. So all three waiting players are not only blindly submitting values for the visual output, but they are also entangled with one another for control of their own slider position. Needless to say, at this stage in the project's development, the sliders are a bit jumpy.

Play and remix on Glitch



Week 5: Serving Over HTTPS

This week we learned how to encrypt traffic between clients and our servers by implementing HTTPS with a certificate (from a certificate authority that has verified our DNS identity) and a private key. It was perfect timing for my photo booth app: I need HTTPS enabled in order for folks to access their webcams. As a quick refresher, my app snaps a photo of an individual as they view another picture. Afterwards a diptych of both images renders on the screen. After testing HTTPS locally and also on my Digital Ocean Droplet (my server), I set to work on the next phase of my project’s development. 
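For anyone following along, the server-side change is small. Here's a minimal sketch of an Express app served over HTTPS; the certificate and key paths are placeholders for wherever yours live.

var express = require('express');
var https = require('https');
var fs = require('fs');

var app = express();
app.use(express.static('public'));

// the certificate from the certificate authority and the matching private key
var options = {
  key: fs.readFileSync('/path/to/privkey.pem'),
  cert: fs.readFileSync('/path/to/fullchain.pem')
};

https.createServer(options, app).listen(443, function () {
  console.log('serving over HTTPS on port 443');
});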

Though I completed most of my photo booth's general workflow in P5 last week, I needed to figure out how to save the resulting diptych. Before setting out to save it to my server, I focused on saving it to disk. For this I learned that I could store my canvas in a variable, copy the specific area of the diptych from it, and then automatically download the result to my computer with P5's save(). Here's the relevant snippet (the full code is available on GitHub):

function setup() {
  canvas = createCanvas(windowWidth, windowHeight);
}

function displayDiptych() {
  let doubleFrameX = windowWidth / 6;
  let doubleFrameY = (windowHeight - 4 * windowWidth / 9) / 2;
  let doubleWidth = windowWidth * 2 / 3;
  let doubleHeight = 4 * windowWidth / 9;
  diptych = createGraphics(doubleWidth, doubleHeight);
  // copy just the diptych's region of the canvas into the offscreen graphics buffer
  diptych.copy(canvas, doubleFrameX, doubleFrameY, doubleWidth, doubleHeight, 0, 0, doubleWidth, doubleHeight);
  // time is a timestamp string set elsewhere in the sketch
  save(diptych, "diptych_" + time + ".jpg");
}

But for this project to live on my server, I needed a way to send the image data there and direct it to the proper folder. This was not a simple task for me to figure out. Fortunately, in office hours, Shawn suggested appending elt to my diptych Graphics object in order to use the toDataURL() method.

diptych = createGraphics(doubleWidth, doubleHeight);
diptych.copy(canvas, doubleFrameX, doubleFrameY, doubleWidth, doubleHeight, 0, 0, doubleWidth, doubleHeight);

let imageString = diptych.elt.toDataURL();

This encoded the image into a machine-readable format known as Base64 and produced perhaps the longest string of characters I have ever seen in my browser's JavaScript console—around 600 pages if you were to print it out. To ensure that this was indeed image data, we pasted the string into this online decoder, which returned my original diptych in picture form. Magic! Seeing that incredibly long string turn into a photograph before my eyes was truly like being in the analog darkroom for the first time, and easily the highlight of my week.

But how to send this data to my server? I could not get the P5 httpPost method to work, but fortunately I found an AJAX/jQuery example on Stack Overflow that did:

let imageString = diptych.elt.toDataURL();

function saveDiptych(img) {
  let url = "https://emn200.itp.io:443/save";
  $.ajax({
    url: url,
    type: "POST",
    crossDomain: true,
    dataType: "json",
    data: {
      image: img
    },
    success: function(response) {
      console.log(response);
    },
    error: function(response) {
      console.log(response);
    }
  });
}

But not immediately. Initially, I received the error message in my server's console, "request entity too large." However, the fact that my server was still seeing the request gave me hope. A search led me to this suggestion to increase the default limit of the body parser (a module I installed to parse incoming data server-side), and this did the trick.

var bodyParser = require('body-parser');
var urlencodedParser = bodyParser.urlencoded({ extended: true }); 

var express = require('express')
var app = express()


app.use(bodyParser.json({limit: '50mb'}));
app.use(bodyParser.urlencoded({limit: '50mb', extended: true}));

When I console-logged the request data on the server side and saw that long string of characters appear (aka my image data), it was easily the second highlight of my week.


Now that I could send the image string to my server, I needed to decode it back to binary format (jpeg or png and human-readable with the right viewer) and save the file in a directory. This was accomplished by these lines of code, again thanks to Shawn:

var fs = require('fs');

app.post('/save', function (req, res) {
  var data = req.body.image;

  // strip the "data:image/jpeg;base64," prefix so only the Base64 data remains
  var searchFor = 'data:image/jpeg;base64,';
  var strippedImage = data.slice(data.indexOf(searchFor) + searchFor.length);
  var binaryImage = new Buffer(strippedImage, 'base64'); // decode Base64 back to binary

  var timeStamp = Math.floor(Date.now() / 1000);

  fs.writeFileSync(__dirname + '/public/diptychs/diptych_' + timeStamp + '.jpg', binaryImage);
  res.json({ saved: true }); // respond so the client's success callback fires
});

Client and server code on GitHub

I still have quite a bit to do for my app. How will I display all the diptychs? All at once in a gallery or individually by session? And the styling?! Though it's a minimal design, I still need to make decisions about how I want the final piece to look. And of course there remains the question of which batch of images to curate for the booth. Time to get started...

Week 5: Painting with a Partner


Riffing off of the Ouija class example and our group's previous assignment, Maria and I built a collaborative painting app for our paired activity. We were inspired to create an opportunity for a fluid flow of collective contributions, and using our tool, two partners work together to control one brush and one palette. The brush itself is adapted from my pixel painting app. There's no set goal except to play in the screen sandbox with digital paint. (It doesn’t always have to be competitive to be fun, right?) 

We thought about expression for both the input and the output, and ideally hoped to increase the range of each compared to our experience with the Ouija game. First, the output. Like that example, the position of our "brush" renders on the output screen according to the average position of both participants in their home input windows. But unlike the example, we removed the stakes of clearing the screen if partners move too far away from one another. Instead, we incorporated their distance from one another as a creative decision: the closer the partners, the larger the brush; the farther apart, the smaller it gets.

In this scenario thus far, both players contribute the same type of input (their position and distance to one another) for a combined output. We then decided to differentiate the inputs through the incorporation of paint color: one painter controls the hue, while the other controls levels of brightness, greatly expanding the palette options. Using desktops/laptops, clicking and dragging the mouse across the screen from left to right changes these values. Paint does not start to flow until both players have clicked their mice/trackpads.
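Stripped of the networking, the brush logic combines the two inputs roughly like this (the partner objects and their fields are stand-ins for the values arriving from the server):

// each partner object holds a normalized position plus that player's color contribution
function drawBrush(partnerA, partnerB) {
  // brush position: the average of the two players
  let x = (partnerA.x + partnerB.x) / 2 * width;
  let y = (partnerA.y + partnerB.y) / 2 * height;

  // brush size: the closer the partners, the larger the brush
  let d = dist(partnerA.x, partnerA.y, partnerB.x, partnerB.y);
  let size = map(d, 0, 1, 60, 5);

  // color: one painter drives hue, the other brightness
  colorMode(HSB, 255);
  noStroke();
  fill(partnerA.hue, 200, partnerB.brightness);
  ellipse(x, y, size, size);
}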

Finally, we incorporated accelerometer data from our mobile devices to paint our combined brush strokes in the air. Swiping back and forth across the screen (along the X axis) adjusts hue and brightness.

If just using laptops/desktops, you can play here and/or remix on Glitch. 

Navigate here for the mobile version.

Code also on GitHub.

Week 5: Color Image Processing

What fun experimenting with various color manipulation techniques in Processing these past two weeks to get a better understanding of how the red (R), green (G), blue (B), and alpha (A) values interact to produce different results. For this we learned a much faster programming technique, called bit shifting, to extract and set the color information of each pixel. Using bitwise operators, this technique treats each pixel as a string of 32 bits--8 bits each for R, G, B, and A. The basic image processing sequence is as follows: first load the pixel array and step through it with a nested loop, perform a right shift to extract the color information for each pixel (this shifts the bits to the right, masks them, and with & 0xFF pulls out the last 8 bits), manipulate the color information as you see fit, and then perform a left shift to reassemble the color information (this packs the 8-bit numbers back into one 32-bit number). Right shifting is much faster for retrieving color information than the red(), green(), or blue() functions, and likewise, left shifting is quicker for assembling color values than the color() function. This post helped me visualize the shifting and masking process.

I explored various algorithms while using my camera's live feed, and posted all the code here since it wasn't compatible with OpenProcessing. Here are the results of my permutations:


White Noise Texture
Riffing off of Rozin's examples, specifically this one, I really enjoyed the textures that emerged from different noise settings. For me the sweet spot for my "posterization" variable was around 2-3. Here the brightness of each pixel is evaluated, and the pixel is set to white if it falls beyond the threshold.

import processing.video.*;
Capture camera;

int posterizeAmount;

void setup() {
  size(1280, 720);
  background(0);
  camera = new Capture(this, width, height);
  camera.start();  // begin reading from the webcam
}

void draw() {
  if (camera.available()) camera.read();
  camera.loadPixels();  // make the camera's pixel array available
  loadPixels();         // and the sketch window's
  for (int x = 0; x<width; x++) {
    for (int y = 0; y<height; y++) {
      PxPGetPixel(x, y, camera.pixels, width);

      //find a noise texture that I like
      //int m = int(map(mouseX, 1, width, 1, 50));
      //posterizeAmount = max(1, m);

      posterizeAmount = 3;
      if ((R+G+B) % posterizeAmount < posterizeAmount/2) {
        R += 0;  // leave the pixel as is
        G += 0;
        B += 0;
      } else {   // otherwise push it to white
        R = 255;
        G = 255;
        B = 255;
      }
      PxPSetPixel(x, y, R, G, B, 255, pixels, width);
    }
  }
  updatePixels();  // write the modified pixels back to the screen
}

int R, G, B, A;

// unpack one pixel's 32-bit color into R, G, B, A with right shifts and masks
void PxPGetPixel(int x, int y, int[] pixelArray, int pixelsWidth) {
  int thisPixel = pixelArray[x+y*pixelsWidth];
  A = (thisPixel >> 24) & 0xFF;
  R = (thisPixel >> 16) & 0xFF;
  G = (thisPixel >> 8) & 0xFF;
  B = thisPixel & 0xFF;
}

// pack R, G, B, A back into one 32-bit color with left shifts and set the pixel
void PxPSetPixel(int x, int y, int r, int g, int b, int a, int[] pixelArray, int pixelsWidth) {
  a = a << 24;
  r = r << 16;
  g = g << 8;
  color argb = a | r | g | b;
  pixelArray[x+y*pixelsWidth] = argb;
}

White Noise in Black and White
For a completely monochromatic effect, simply replace the R and G values with B as the pixels are set back to the screen. (Or set R, G, and B to all R or all G.) For this example, I also set A to 150. What I most enjoy about this example is that it reminds me of an analog graphite drawing.

PxPSetPixel(x, y, B, B, B, 150, pixels, width); 

Color Noise
In this example, I've changed the color of the noise and also introduced the ability to adjust the R, G, B, and A values according to the position of my mouse on the canvas before they are set back to the screen with the left shift function. For this particular image, my mouse was sitting in the upper middle portion of the window. For a monochrome noise color, simply hardcode the R, G, and B values after "else" in the conditional statement and then remove the ability to set them dynamically with your mouse.

  for (int x = 0; x<width; x++) {
    for (int y = 0; y<height; y++) {
      PxPGetPixel(x, y, ourVideo.pixels, width);

      posterizeAmount = 3;
      if ((R+G+B) % posterizeAmount < posterizeAmount/2) {
        R += 0;
        G += 0;
        B += 0;
      } else {
        R = G+B;
        G = R+B;
        B = R+G;
      }

      // shift every channel by the mouse's horizontal position
      R += mouseX;
      G += mouseX;
      B += mouseX;

      // mouseY sets the alpha as the pixel is written back
      PxPSetPixel(x, y, R, G, B, mouseY, pixels, width);
    }
  }

Masking with Low Alpha
I discovered that, in general, a low alpha value upon setting pixel information back to the screen resulted in a blow-out and, with the proper lighting, often created a mask around features in a certain tonal range. Here the alpha is set to 50: PxPSetPixel(x, y, R, G, B, 50, pixels, width); This worked regardless of whether or not I added noise to the image. In the color noise example above, alpha is also set to a low value due to the position of my mouse.


Inverting Colors without Noise
Without any noise applied and a constant alpha of 255, I played with inverting the colors for some very satisfying saturated results. Here each channel is set to 255 minus R.

PxPGetPixel(x, y, ourVideo.pixels, width);
R = 255-R;
G = 255-R;
B = 255-R;

PxPSetPixel(x, y, R, G, B, 255, pixels, width);

With Contrast Lighting
Different lighting sometimes produced dramatic results for all of the code examples. Again no noise applied here, but R, G, and B are each set to the sum of their partners.

PxPGetPixel(x, y, ourVideo.pixels, width);
R = G + B;
G = R + B;
B = R + G;

PxPSetPixel(x, y, R, G, B, 255, pixels, width);

All code adapted from these sketches by Danny Rozin.

Week 4: Single Page Web App

Is it sunny on Mars?
New features, new languages, new syntax! It's another week in DWD, and this time around we learned how to build single-page web applications using jQuery with AJAX and experimented with transmitting JSON-formatted data to and from an Express server. For practice, I built an app to display a photograph taken by NASA's Mars Curiosity rover for any given day since it landed on the planet in August 2012. In this week's project the browser does all the work from the code in my HTML file: it makes a call to the NASA API and displays an image on the page for whatever date is submitted, all without refreshing the page; no data is sent to or received from my own server. I'm still wrapping my head around how all of this works, but my novice understanding is that my client-side browser can communicate with NASA's server and load the data dynamically through a JavaScript technique called AJAX, and that the jQuery library makes AJAX methods a lot easier to write. Unlike the JSON data feed from the OpenWeatherMap example in class, I realized that NASA's feed contains an array of nested objects. The rover takes multiple images a day (over 332,000 since it started in 2012), and in order to render just one of them from any given day in my app, I needed to specify its index number when declaring the value to pull during the API call. To run this I set up a basic HTTP server, the same as from the first week of class.
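The call itself looks roughly like this; the element ID is a placeholder, DEMO_KEY stands in for my actual API key, and the exact parameters are in NASA's Mars Rover Photos documentation.

function getMarsPhoto(dateString) {
  $.ajax({
    url: "https://api.nasa.gov/mars-photos/api/v1/rovers/curiosity/photos",
    type: "GET",
    dataType: "json",
    data: {
      earth_date: dateString, // e.g. "2015-06-03"
      api_key: "DEMO_KEY"
    },
    success: function(response) {
      // the feed is an array of nested photo objects, so pick one by index
      if (response.photos.length > 0) {
        $("#rover-photo").attr("src", response.photos[0].img_src);
      }
    },
    error: function(err) { console.log(err); }
  });
}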

Curious? Check it out here (for now): marsscapes

Code on GitHub

Final Project Proposal: Photo Booth App
I want to build an online photo booth app, and I have two ideas: 1) build a booth with my own custom image processing algorithms, or 2) photograph users as they look at other images, displaying both shots in a diptych at the end (to see the viewer and the viewed at the time of the viewing). I'm also interested in combining the first idea with this latter design. The second idea is an online remake of a Processing project and would allow me to share it with more people. For either, I'll need to learn how to serve over HTTPS, a secure version of HTTP, in order to utilize laptops' webcams. As part of my preparation this past week, I drafted a version of the second idea in P5 here.