Pixel by Pixel

Week 14: Painting Mirror

Move the pieces and assemble your reflections in this interactive Cubist mirror.
How many ways can you put yourself together?


Project Description
In real life and in digital spaces we offer multiple versions of ourselves for numerous reasons. Painting Mirror plays with this idea by proposing continuous ways of seeing and presenting yourself. Using dual camera angles and the open-source software Processing, the piece fragments, resizes, duplicates, and scatters video reflections across the surface. The pieces blend together, and moving them paints the screen anew. The impulse might be to reassemble your image into a whole, but can you?

This is a continuation of a recent project that I wanted to push further to see what I might uncover. Though there's more to explore, I learned a lot along the way. Here’s a recap of the highlights.

Part 1: Leaving the Land of Rects
My first instinct was to move beyond the land of rectangles and draw irregular quadrilaterals. I learned that I could use the beginShape() and endShape() functions, with vertex() calls of X and Y coordinates in between, to draw more complicated polygons (see the PShape Shiffman tutorial). Vertices are drawn in a counterclockwise direction starting with the leftmost corner.
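
Here's a minimal sketch of that drawing pattern, with hard-coded corner values just for illustration (my actual corners are generated randomly, as described below):

// A bare-bones irregular quadrilateral drawn with beginShape()/endShape().
// The corner coordinates here are made up purely for illustration.
void setup() {
  size(400, 400);
}

void draw() {
  background(0);
  fill(200);
  beginShape();
  vertex(100, 150);  // leftmost corner
  vertex(180, 300);
  vertex(320, 220);
  vertex(220, 80);
  endShape(CLOSE);   // CLOSE connects the last vertex back to the first
}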

Using this Danny Rozin example from class as a guide, I adapted my original sketch in a similar fashion. The polygon constructor still receives random values for an X and a Y, but instead of using these as the location for the leftmost corner of the shape, they become its centroid. Random numbers are then generated for the calculation of each X and each Y of every new vertex. Those values are stored in a two-dimensional array, or matrix, and each corner of the shape is created (again in counterclockwise fashion) by subtracting them from or adding them to the X and Y values of the centroid. Though we’ve been working with image matrices all semester, this exercise strengthened my understanding of how I might use them in other contexts.
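
A simplified sketch of that constructor logic looks something like this (class and variable names are shorthand here, not the exact ones from my project, and the offset ranges are arbitrary):

// Each Poly gets a random centroid; random offsets for every corner are stored
// in a small matrix and then added to or subtracted from the centroid to place
// the four vertices.
Poly[] polys = new Poly[10];

void setup() {
  size(640, 480);
  for (int i = 0; i < polys.length; i++) {
    polys[i] = new Poly(random(width), random(height));
  }
}

void draw() {
  background(0);
  for (Poly p : polys) p.display();
}

class Poly {
  float cx, cy;       // centroid
  float[][] offsets;  // offsets[corner][0] = x offset, offsets[corner][1] = y offset

  Poly(float cx_, float cy_) {
    cx = cx_;
    cy = cy_;
    offsets = new float[4][2];
    for (int i = 0; i < 4; i++) {
      offsets[i][0] = random(20, 80);
      offsets[i][1] = random(20, 80);
    }
  }

  void display() {
    fill(200, 120);
    beginShape();
    vertex(cx - offsets[0][0], cy - offsets[0][1]);  // upper left
    vertex(cx - offsets[1][0], cy + offsets[1][1]);  // lower left
    vertex(cx + offsets[2][0], cy + offsets[2][1]);  // lower right
    vertex(cx + offsets[3][0], cy - offsets[3][1]);  // upper right
    endShape(CLOSE);
  }
}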

In the previous version of this project, I used the copy() function to grab and duplicate pixels from the canvas into my rects. Danny’s sketch and this tutorial showed me how to load and map parts of the camera feed into my irregular quadrilaterals using texture(). As in the earlier version, I’m still playing with the scale of the newly-mapped areas when doing so. 

In order to employ texture(), I changed the default rendering mode to P3D, which uses OpenGL and increases the overall speed of my sketch.
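
In skeleton form, the texturing step looks roughly like this (the u/v values below are arbitrary placeholders; in my sketch they come from the scaled crop calculations):

// Mapping a region of the camera feed onto an irregular quad with texture().
// This requires the P3D renderer.
import processing.video.*;
Capture camera;

void setup() {
  size(1280, 720, P3D);   // P3D is needed for texture()
  camera = new Capture(this, width, height);
  camera.start();
  textureMode(NORMAL);    // u/v coordinates run from 0 to 1 instead of pixels
}

void draw() {
  if (camera.available()) camera.read();
  background(0);
  noStroke();
  beginShape();
  texture(camera);        // use the live feed as the texture
  // vertex(x, y, u, v): screen position plus the spot in the feed it samples
  vertex(300, 200, 0.2, 0.2);
  vertex(250, 500, 0.2, 0.6);
  vertex(700, 550, 0.7, 0.6);
  vertex(650, 150, 0.7, 0.2);
  endShape(CLOSE);
}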

Part 2: Oh Yeah, Mouse Dragging
Once I converted the shapes, it was time to update the mouse dragging. It was no longer a simple process of checking whether the mouse was within the bounds of a plain old rect. After I fell down several rabbit holes looking for a solution, Shiffman recommended two options: drawing all the polygons offscreen as PGraphics objects, each with its own unique color to check against, or trying the library Toxiclibs.js. And then I found a third: Karsten Schmidt, the author of Toxiclibs, wrote an example on the Processing forum taking advantage of this point-in-polygon algorithm. It’s possible that I came across this before, but it was too early in the quest for me to understand what I was seeing and what I needed. In any event, I incorporated it into my sketch, using another array to store all of the vertices of each shape, and it worked!
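
The heart of that check is a standard ray-casting (even-odd) test; a generic version, not Schmidt's exact code, looks like this:

// Generic ray-casting point-in-polygon test: cast a horizontal ray from the
// point and count how many polygon edges it crosses; an odd count means inside.
// vertsX and vertsY hold the polygon's stored corner coordinates.
boolean pointInPoly(float px, float py, float[] vertsX, float[] vertsY) {
  boolean inside = false;
  int n = vertsX.length;
  for (int i = 0, j = n - 1; i < n; j = i++) {
    boolean crosses = ((vertsY[i] > py) != (vertsY[j] > py)) &&
      (px < (vertsX[j] - vertsX[i]) * (py - vertsY[i]) / (vertsY[j] - vertsY[i]) + vertsX[i]);
    if (crosses) inside = !inside;  // every crossing toggles inside/outside
  }
  return inside;
}

Inside mousePressed(), each shape's stored vertex arrays get passed to a test like this along with the mouse coordinates to decide which polygon, if any, to start dragging.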

(I remember adjusting the offSet calculations, too. In my original version, the offSets for X and Y were based on the center of each rect (width/2 and height/2), but now that I had a centroid, it was easy to just replace those with the X and Y centroid values.)

Part 3: Adding Another Camera
One camera is great, but with two webcams I can show the viewer dual angles of themselves simultaneously. So I created a new class in my Processing sketch to feed the second camera into a new batch of polygons. The code is nearly identical except for the part that declares a different camera in the texture() function. Gosh, I’m repeating so much code! There MUST be a way to rewrite it more efficiently. I haven’t wrapped my head around how to do that just yet...it awaits as a very fun problem to solve. (I’m also still wondering if I might want to treat the camera feeds differently, and at least for right now, keeping them separated gives me the space to think about the possibilities.)

Of note, I took heed of Danny's suggestion to use different camera models to avoid conflicts and headaches: I'm using a Logitech C920 and a C922.

Part 4: Taking Advantage of ITP's Touch Screens
It's one thing to develop within the comfort of your own laptop environment, but I envisioned this displayed on a touch screen for users to play with their own images. First I messed with the screens along the hallway to the lounge, quickly realizing that any new piece of hardware in the mix requires its own special attention with regard to the logistics of wire connections as well as finding the right resolution to agree with my Processing canvas size. But those screens are out of reach for kids, so I turned my focus to the large vertical-hanging Planar monitor near the elevators in ITP's lobby. It took some wire wrangling, HDMI port-finding on the back, installing of drivers, and help from Marlon, but we got it to work on one of the ER's MSI laptops, which was waaay faster than my Mac.

Instead of changing the laptop display to portrait mode to fit the new screen, as I began to do, Danny offered the BRILLIANT suggestion of simply rotating the actual cameras and attaching them (with tape for now) to the sides of the monitor. For my final Pixel by Pixel iteration, I attached one camera to the side of the screen and mounted the other on a tripod behind the user. Both camera feeds streamed onto the display, but to be honest, the "back" version wasn't as noticeable. For the Spring Show version, I plan to mount both cameras on opposite sides of the screen; folks want to find their faces, so give them more of those, I will!

Part 5: Finishing Up & Conclusions - Is it a Mirror, Painting, or Portrait?
Standing before the screen, one's image is elusive: the polygons act like jumbled lenses, and it becomes a puzzle to assemble yourself. Distorting the face and figure is endless entertainment for me, but it's possible the shapes are too sharp. Their jaggedness, however, is softened by the fading painterly effect of the combined blend modes.

Shortly after I began this project, I started saving screenshots to document my process, and not long after, considered it a type of photo booth. Without this feature, it's simply a funny mirror, albeit one with plenty more options to alter your appearance. Since its migration to the big monitor, I incorporated a double-tap feature to save the screen sans keyboard, using this old forum post as a reference. Processing tells me that getClickCount() on the mouseEvent is deprecated, but hey, it still works.
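
For reference, here's a minimal version of the double-tap save, written with the newer getCount() from processing.event.MouseEvent instead of the deprecated getClickCount(); the filename pattern is just an example:

// Double-tap (or double-click) anywhere to save a timestamped screenshot,
// no keyboard required.
import processing.event.MouseEvent;

void setup() {
  size(640, 480);
}

void draw() {
  background(0);
  ellipse(mouseX, mouseY, 50, 50);  // placeholder content to screenshot
}

void mouseClicked(MouseEvent event) {
  if (event.getCount() == 2) {  // two clicks registered as one gesture
    saveFrame("portrait-" + year() + nf(month(), 2) + nf(day(), 2) + "-" + millis() + ".png");
  }
}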

Thanks!
Thank you so much, Marlon! I'm very grateful for your time with the Planar. Also to Danny Rozin for the inspiration and to Dan Shiffman for checking in on his way down the hall. And to my many wonderful and talented classmates who stopped by to watch, play, chat, and give advice: I am in awe of you every day.

Code on GitHub
Portraits created at the ITP Spring Show 2018 @paintingmirror

Week 8: Interactive Cubist Portrait

Hey Pixel by Pixel. It’s been a while. I’ve been busy playing and learning all about area filters that utilize and impact neighborhoods of pixels (for blurring and sharpening details)* and how to make geometric transformations pixel by pixel. I’ve been especially enjoying manipulating video feed and for the midterm, developed a collage of movable video fragments, creating an interactive cubist self-portrait generator of sorts. My current work in Collective Play has me considering how to design experiences with meaningful output, and playing with distortions of one’s own face is charged with expressive potential. 

Furthermore, the new show at The Met Breuer, Like Life, with its two-floor showcase of sculptural renderings and other physical abstractions of the human body from the past 700 years, inspired me. Specifically, the surreal photographs of Hans Bellmer and Dorothea Tanning's sculpture Emma play with the fascination of distorting the human form. A quote from Bellmer in the museum didactic reads, “the body is like a phrase that invites us to disjoint it (to pull it apart), so that it can be recomposed through an infinite series of anagrams.” Exactly, Hans.

Though I didn't realize it until recently, this project continues code investigations I explored in my photo booth prototype from DWD Online Server and in my social collaging examples from Socially Engaged Art. From the former, I'm taking advantage of copying pixels from the video feed and placing them into new image objects. And from the latter, I brought over my understanding of creating a class of objects (in this case rectangles), creating an array of them, filling them with content, and adding the ability to click and drag them around the screen. (Isn't it wonderful when it all comes together?)

It might sound like a seamless process; however, it was anything but. In part because I had to learn Processing's syntax for this, which differs enough from my P5.js sketches to make me think carefully. But also because my creative process was anything but linear. After multiple code sketches, I homed in on the idea and then worked down two paths: focusing on the video crops for one and building out the rectangles’ array and class in the other. A huge thanks to Danny, who helped me sew it all together and showed me a technique for generating better random values for cropping and pasting sections of the screen into their randomly-scattered rects. And this is where the real fun comes into play: cropping smaller or larger sections from the video than will comfortably fit into their new rectangle containers either proportionally enlarges/shrinks and/or stretches/condenses the original source material.
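
That scaling comes from copy() letting the source and destination regions be different sizes; a minimal sketch of the idea, with arbitrary numbers:

// copy(src, sx, sy, sw, sh, dx, dy, dw, dh) grabs a region of the source and
// pastes it into a destination region; mismatched sizes shrink, enlarge, or
// stretch the pixels to fit.
import processing.video.*;
Capture camera;

void setup() {
  size(1280, 720);
  camera = new Capture(this, width, height);
  camera.start();
}

void draw() {
  if (camera.available()) camera.read();
  image(camera, 0, 0);
  // squeeze a 400x300 chunk of the feed into a 200x200 rect at (50, 50)
  copy(camera, 200, 100, 400, 300, 50, 50, 200, 200);
}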

With the basic functionality in place, I annotated my code to trace the path that generates the size of the resulting rects. From that exercise I updated my code to produce rects at random sizes upon each run of the sketch.

Next, I flipped the video capture along the X-axis. Processing and P5 do a funny thing: the video image you see is not mirrored as you might expect, so you move right and the you in the video moves left. While it was easy to flip the background video capture, figuring out how to apply this to the top layer video fragments was…uhhh…a special case of committed problem solving. I had to figure out not only how to keep the rects on the screen and flip the video within them, but also how to translate the flip to the incoming mouse coordinates so that I could continue to drag the rects around the screen as expected.

It turns out that the answer was relatively simple. In the Rectangle class, I boxed everything about how the video rectangles display into a pushMatrix()/popMatrix() transformation state, and in addition to using scale(-1,1) included a translate(width,0), all of which moved the origin of the underlying coordinate system from the upper left corner to the upper right and flipped the X-axis. Then I modified the code that checks what happens when the mouse is pressed. When the mouse is pressed the program asks, "Is the mouse over a rectangle? If so, move that rectangle according to any mouse movements." Since my rectangles operate on their own special coordinate system now, my mouse's X coordinate is actually width-mouseX to them, and in my code, the incoming mouseX, known as px, becomes: int mx = width-px. So this new value, mx, is checked when the mouse is pressed to see if it’s within one of the rectangles, and it’s also subtracted from the rectangle’s offset value (its middle, width/2). Finally, because mouseX is sent to an update function within the Rectangle class, I also had to subtract it from the width of the screen to generate the new starting X coordinate of the shifted rectangle. Whew!
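
Stripped down to its essentials (with a plain rect standing in for the textured video fragment, and simplified variable names), the flip-plus-drag logic looks like this:

// The drawing is wrapped in a flipped transformation state, and the hit test
// converts the incoming mouse X into that flipped coordinate system first.
float rectX = 100, rectY = 100, rectW = 200, rectH = 150;
float offX, offY;
boolean dragging = false;

void setup() {
  size(640, 480);
}

void draw() {
  background(0);
  pushMatrix();
  translate(width, 0);   // move the origin to the right edge...
  scale(-1, 1);          // ...and flip the X-axis so everything reads like a mirror
  fill(180);
  rect(rectX, rectY, rectW, rectH);  // stand-in for the textured video fragment
  popMatrix();
}

void mousePressed() {
  int mx = width - mouseX;  // the real mouse X, translated into the flipped system
  if (mx > rectX && mx < rectX + rectW && mouseY > rectY && mouseY < rectY + rectH) {
    dragging = true;
    offX = mx - rectX;
    offY = mouseY - rectY;
  }
}

void mouseDragged() {
  if (dragging) {
    rectX = (width - mouseX) - offX;  // keep converting mouseX while dragging
    rectY = mouseY - offY;
  }
}

void mouseReleased() {
  dragging = false;
}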

Finally, I experimented with all of the pre-built filters and blend modes (Danny suggested these might be faster than pixel-by-pixel methods, and my sketch is already slow) on both the background camera feed and the top-layer video rectangles. In the end, I decided to keep it simple. Concluding adjustments include:

  • Randomly setting the transparency level of each rectangle within a range of 160-255
  • The option to save a screenshot by pressing the 's' key
  • The ability to add or remove video rectangles by pressing the 'Up' and 'Down' arrows, respectively (see the sketch after this list)
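
The key handling itself is short; a bare-bones sketch, with a simple counter standing in for the array of Rectangle objects:

// 's' saves a numbered screenshot; the Up and Down arrows add or remove
// video rectangles (represented here by a plain counter).
int numRects = 10;  // stand-in for the rectangle array's length

void setup() {
  size(640, 480);
}

void draw() {
  background(0);
  text("rectangles: " + numRects, 20, 30);
}

void keyPressed() {
  if (key == 's') {
    saveFrame("screenshot-####.png");  // #### becomes the frame number
  } else if (key == CODED && keyCode == UP) {
    numRects++;
  } else if (key == CODED && keyCode == DOWN && numRects > 0) {
    numRects--;
  }
}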

Here's a rough visual summary of my sketch's progression, including a surprise at the end! Turns out when you apply a blendMode(LIGHTEST) to the background camera feed and a blendMode(DARKEST) to the rectangles, you get a surreal painting effect heightened by the combined blends. I know, I know: I said I was going to keep it simple, but I couldn't resist these additional touches.
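
Here's a skeleton of that blend mode combination (rendered with P3D, which supports all the modes; plain rects stand in for the camera feed and the video rectangles):

// LIGHTEST on the background only brightens what's already on screen; DARKEST on
// the fragments only darkens what's underneath, and skipping background() lets
// the layers accumulate into the painterly effect.
void setup() {
  size(640, 480, P3D);
  noStroke();
}

void draw() {
  blendMode(LIGHTEST);
  fill(random(100, 255));
  rect(0, 0, width, height);  // stand-in for image(camera, 0, 0)

  blendMode(DARKEST);
  fill(random(0, 150));
  rect(random(width - 100), random(height - 100), 100, 100);  // stand-in for one video rect

  blendMode(BLEND);  // reset the default for the next frame
}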

*A neighborhood effect filter is a nested loop embedded within the nested loop that calls all of the pixels. Nested loops within nested loops—oh my! Basically, when we want to blur or sharpen an image we combine the pixel array with a new array that looks at each pixel’s surrounding neighbor pixels. There are many types of sharpening and blurring filters, but in the most general terms: sharpening increases the contrast between neighboring pixels, and blurring averages each pixel’s value with those of its neighbors, decreasing the sharp edges and details for a smoothing effect. Danny gave us an easy way to remember all of this: “blurry pixels are friendly pixels” and of course, “sharpened pixels are not.” I also found this useful resource for visualizing the process.
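
To make the "nested loop within a nested loop" concrete, here's a bare-bones 3x3 box blur over a PImage (edge pixels are skipped to keep it short; this isn't the exact filter from class):

// Bare-bones 3x3 box blur: for every pixel, the inner nested loop visits its
// neighbors, averages each channel, and writes the result into a second image.
PImage boxBlur(PImage src) {
  PImage out = createImage(src.width, src.height, RGB);
  src.loadPixels();
  out.loadPixels();
  for (int x = 1; x < src.width - 1; x++) {
    for (int y = 1; y < src.height - 1; y++) {
      float r = 0, g = 0, b = 0;
      // the neighborhood loop: the 3x3 block centered on (x, y)
      for (int dx = -1; dx <= 1; dx++) {
        for (int dy = -1; dy <= 1; dy++) {
          color c = src.pixels[(x + dx) + (y + dy) * src.width];
          r += (c >> 16) & 0xFF;
          g += (c >> 8) & 0xFF;
          b += c & 0xFF;
        }
      }
      // average the 9 samples to get the "friendly," smoothed pixel
      out.pixels[x + y * src.width] = color(r / 9, g / 9, b / 9);
    }
  }
  out.updatePixels();
  return out;
}

Calling image(boxBlur(camera), 0, 0) in draw() would display the smoothed feed.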

Code on GitHub

Week 5: Color Image Processing

What fun experimenting with various color manipulation techniques in Processing these past two weeks to get a better understanding of how the red (R), green (G), blue (B), and alpha (A) values interact to produce different results. For this we learned a much faster programming technique, called bit shifting, to extract and set the color information of each pixel. Using bitwise operators, this technique treats each pixel as a string of 32 bits--8 bits each for R, G, B, and A. The basic image processing sequence is as follows: first load the pixel array and loop through it with a nested loop, perform a right shift to extract the color information for each pixel (this shifts each channel's bits to the right and, with the & 0xFF mask, pulls out the last 8 bits), manipulate the color information as you see fit, and then perform a left shift to reassemble it (this packs the 8-bit numbers back into one 32-bit number). Right shifting is much faster for pulling out color information than the red(), green(), or blue() functions, and likewise, left shifting is quicker for assembling color values than the color() function. This post helped me visualize the shifting and masking process.

I explored various algorithms while using my camera's live feed, and posted all the code here since it wasn't compatible with OpenProcessing. Here are the results of my permutations:


White Noise Texture
Riffing off of Rozin's examples, specifically this one, I really enjoyed the textures that emerged from different noise settings. For me the sweet spot for my "posterization" variable was around 2-3. Here each pixel's brightness is evaluated, and the pixel is set to white if it's beyond the threshold.

import processing.video.*;
Capture camera;

int posterizeAmount;  // threshold variable for the noise effect

void setup() {
  size(1280, 720);
  background(0);
  frameRate(120);
  camera = new Capture(this, width, height);  // capture at the same size as the canvas
  camera.start();
}

void draw() {
  if (camera.available()) camera.read();  // grab a new frame when one is ready
  camera.loadPixels();                    // expose the camera's pixel array
  loadPixels();                           // expose the canvas's pixel array
  for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
      PxPGetPixel(x, y, camera.pixels, width);  // unpack R, G, B, A for this pixel

      //find a noise texture that I like
      //int m = int(map(mouseX, 1, width, 1, 50));
      //posterizeAmount = max(1, m);

      posterizeAmount = 3;
      if ((R+G+B) % posterizeAmount < posterizeAmount/2) {
        R += 0;  // keep the original color
        G += 0;
        B += 0;
      } else {
        R = 255;  // push the pixel to pure white
        G = 255;
        B = 255;
      }
      PxPSetPixel(x, y, R, G, B, 255, pixels, width);  // write the result to the canvas
    }
  }
  updatePixels();  // display the modified pixel array
  //println(posterizeAmount);
}

int R, G, B, A;  // globals updated by PxPGetPixel
void PxPGetPixel(int x, int y, int[] pixelArray, int pixelsWidth) {
  int thisPixel = pixelArray[x + y*pixelsWidth];  // pixels live in a 1D array, row by row
  A = (thisPixel >> 24) & 0xFF;  // shift alpha's 8 bits down and mask them off
  R = (thisPixel >> 16) & 0xFF;  // same for red
  G = (thisPixel >> 8) & 0xFF;   // and green
  B = thisPixel & 0xFF;          // blue is already the last 8 bits
}

void PxPSetPixel(int x, int y, int r, int g, int b, int a, int[] pixelArray, int pixelsWidth) {
  a = a << 24;  // shift each channel back into its position
  r = r << 16;
  g = g << 8;
  color argb = a | r | g | b;  // OR them together into one 32-bit color
  pixelArray[x + y*pixelsWidth] = argb;
}

White Noise in Black and White
For a completely monochromatic effect, simply replace the R and G values with B as the pixels are set back to the screen. (Or set R, G, and B to all R or all G.) For this example, I also set A to 150. What I most enjoy about this example is that it reminds me of an analog graphite drawing.

PxPSetPixel(x, y, B, B, B, 150, pixels, width); 

Color Noise
In this example, I've changed the color of the noise and also introduced the ability to adjust the R, G, B, and A values according to the position of my mouse on the canvas before they are set back to the screen with the left shift function. For this particular image, my mouse was sitting in the upper middle portion of the window. For monochrome noise color, simply hardcode the R, G, and B values after "else" in the conditional statement and then remove the ability to set them dynamically with your mouse.

 loadPixels(); 
  for (int x = 0; x<width; x++) {
    for (int y = 0; y<height; y++) {
      PxPGetPixel(x, y, ourVideo.pixels, width);

      posterizeAmount = 3;
      if ((R+G+B) % posterizeAmount < posterizeAmount/2) {  
        R += 0;
        G += 0;
        B += 0;                                         
      } else {                                        
        R= G+B;   
        G= R+B;
        B= R+G;
      }                                              

      R += mouseX; 
      G += mouseX;
      B += mouseX; 

      PxPSetPixel(x, y, R, G, B, mouseY, pixels, width); 
    }
  }
  updatePixels(); 

Masking with Low Alpha
I discovered that, in general, a low alpha value when setting pixel information back to the screen resulted in a blow-out and, with the proper lighting, often created a mask around features in a certain tonal range. Here the alpha is set to 50:

PxPSetPixel(x, y, R, G, B, 50, pixels, width);

This worked regardless of whether or not I added noise to the image. In the color noise example above, alpha is also set to a low value due to the position of my mouse.


Inverting Colors without Noise
Without any noise applied and a constant alpha of 255, I played with inverting the colors for some very satisfying saturated results. Here I'm subtracting R from 255 for each channel.

PxPGetPixel(x, y, ourVideo.pixels, width);
                                          
R = 255-R;
G = 255-R;
B = 255-R;

PxPSetPixel(x, y, R, G, B, 255, pixels, width);

With Contrast Lighting
Different lighting sometimes produced dramatic results for all of the code examples. Again, no noise is applied here, but R, G, and B are each set to the sum of the other two channels.

PxPGetPixel(x, y, ourVideo.pixels, width);
                                          
R = G + B;
G = R + B;
B = R + G;

PxPSetPixel(x, y, R, G, B, 255, pixels, width);

All code adapted from these sketches by Danny Rozin.