Week 8: Interactive Cubist Portrait

Hey pixel by pixel. It’s been a while. I’ve been busy playing and learning all about area filters that operate on neighborhoods of pixels (for blurring and sharpening details)* and how to make geometric transformations pixel by pixel. I’ve especially enjoyed manipulating the live video feed, and for the midterm I developed a collage of movable video fragments, creating an interactive cubist self-portrait of sorts. My current work in Collective Play has me considering how to design experiences with meaningful output, and playing with distortions of one’s own face is charged with expressive potential.

Furthermore, the new show at The Met Breuer, Like Life, with its two-floor showcase of sculptural renderings and other physical abstractions of the human body from the past 700 years, inspired me. Specifically, the surreal photographs of Hans Bellmer and Dorothea Tanning’s sculpture Emma play with the fascination of distorting the human form. A quote from Bellmer in the museum didactic reads, “the body is like a phrase that invites us to disjoint it (to pull it apart), so that it can be recomposed through an infinite series of anagrams.” Exactly, Hans.

I didn’t realize it until recently, but this project continues code investigations I explored in my photo booth prototype from DWD Online Server and in my social collaging examples from Socially Engaged Art. From the former, I’m taking advantage of copying pixels from the video feed and placing them into new image objects. And from the latter, I brought over my understanding of creating a class of objects, in this case rectangles: creating an array of them, filling them with content, and adding the ability to click and drag them around the screen. (Isn't it wonderful when it all comes together?)
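
Here’s a stripped-down Processing sketch of that pattern: a rectangle class, an array of instances, and click-and-drag. The class name, sizes, and drag logic are placeholders for the idea, not my actual project code.

```processing
// Toy version of the pattern: a class of rectangles, an array of instances,
// and click-and-drag. All names and numbers are placeholders.
Frag[] frags = new Frag[6];

void setup() {
  size(640, 480);
  for (int i = 0; i < frags.length; i++) {
    frags[i] = new Frag(random(width - 100), random(height - 100), 100, 80);
  }
}

void draw() {
  background(0);
  for (Frag f : frags) {
    f.update(mouseX, mouseY, mousePressed);
    f.display();
  }
}

class Frag {
  float x, y, w, h;
  boolean dragging = false;

  Frag(float x, float y, float w, float h) {
    this.x = x; this.y = y; this.w = w; this.h = h;
  }

  void update(float mx, float my, boolean pressed) {
    if (pressed && mx > x && mx < x + w && my > y && my < y + h) dragging = true;
    if (!pressed) dragging = false;
    if (dragging) {            // recenter the rect on the mouse while dragging
      x = mx - w / 2;
      y = my - h / 2;
    }
  }

  void display() {
    noFill();
    stroke(255);
    rect(x, y, w, h);
  }
}
```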

It might sound like a seamless process, but it was anything but. In part that’s because I had to learn Processing's syntax for this, which differs enough from my P5.js sketches to make me think carefully. But it’s also because my creative process was anything but linear. After multiple code sketches, I homed in on the idea and then worked down two paths: focusing on the video crops in one and building out the rectangles’ array and class in the other. A huge thanks to Danny for helping me sew it all together and for showing me a technique to generate better random values for cropping and pasting sections of the screen into their randomly scattered rects. And this is where the real fun comes into play: cropping sections from the video that are smaller or larger than their new rectangle containers either proportionally enlarges/shrinks or stretches/condenses the original source material.
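
Here’s roughly what the crop-and-paste step looks like in Processing, using the video library’s Capture class and copy(). The numbers, and the way I keep the random crop inside the frame, are placeholders rather than the exact technique Danny showed me.

```processing
import processing.video.*;

Capture cam;
int srcX, srcY, srcW, srcH;     // source crop taken from the live feed
int dstX = 400, dstY = 300;     // where the fragment lands on screen
int dstW = 120, dstH = 120;     // how big it lands (deliberately mismatched)

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();

  // Random crop dimensions, kept inside the camera frame.
  srcW = int(random(50, 300));
  srcH = int(random(50, 300));
  srcX = int(random(width - srcW));
  srcY = int(random(height - srcH));
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);

  // copy() resamples when source and destination sizes differ, so the fragment
  // is proportionally enlarged/shrunk or stretched/condensed to fit its container.
  copy(cam, srcX, srcY, srcW, srcH, dstX, dstY, dstW, dstH);
}
```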

With the basic functionality in place, I annotated my code to trace the path that generates the size of the resulting rects. From that exercise I updated my code to produce rects at random sizes upon each run of the sketch.
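
In practice that just means the sizes get rolled once, at startup, so every run deals a different hand; something along these lines, reusing the toy Frag class from above (the ranges are arbitrary):

```processing
// Each run of the sketch picks new rectangle sizes once, in setup().
for (int i = 0; i < frags.length; i++) {
  float w = random(60, 220);     // arbitrary min/max; tweak to taste
  float h = random(60, 220);
  frags[i] = new Frag(random(width - w), random(height - h), w, h);
}
```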

Next, I flipped the video capture along the X-axis. Processing and P5 do a funny thing: the video image you see is not mirrored as you might expect, so you move right and the you in the video moves left. While it was easy to flip the background video capture, figuring out how to apply this to the top layer of video fragments was…uhhh…a special case of committed problem solving. I had to figure out how to not only keep the rects on the screen and flip the video within them, but to also translate the flip to the incoming mouse coordinates so that I could continue to drag the rects around the screen as expected. It turns out that the answer was relatively simple. In the Rectangle class, I boxed everything about how the video rectangles display into a pushMatrix()/popMatrix() transformation state, and in addition to using scale(-1,1) included a translate(width,0), which together move the origin of the underlying coordinate system from the upper left corner to the upper right and make x increase to the left.

Then I modified the code that runs when the mouse is pressed. When the mouse is pressed the program asks, "Is the mouse over a rectangle? If so, move that rectangle according to any mouse movements." Since my rectangles now operate in their own flipped coordinate system, my mouse's X coordinate is actually width-mouseX to them, so in my code the incoming mouseX, known as px, becomes int mx = width-px. This new value, mx, is checked when the mouse is pressed to see whether it falls within one of the rectangles, and it’s also subtracted from the rectangle’s offset value (its middle, width/2). Finally, because mouseX is sent to an update function within the Rectangle class, I also had to subtract it from the width of the screen to generate the new starting X coordinate of the shifted rectangle. Whew!
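
Condensed into the toy Frag class from earlier, the flip looks something like this. I’m using get() plus image() here so the transformation applies to the fragment; the field names and structure are illustrative, not my exact code.

```processing
class Frag {
  float x, y, w, h;              // position and size, in the flipped coordinate system
  int srcX, srcY, srcW, srcH;    // source crop within the video frame

  void display(PImage videoFrame) {
    PImage crop = videoFrame.get(srcX, srcY, srcW, srcH);  // grab the crop
    pushMatrix();                // isolate this transformation state
    translate(width, 0);         // shift the origin to the upper right corner...
    scale(-1, 1);                // ...and flip the x-axis so the feed reads like a mirror
    image(crop, x, y, w, h);     // image() stretches the crop to the rect's dimensions
    popMatrix();
  }

  boolean over(int px, int py) {
    int mx = width - px;         // flip the incoming mouse x to match the rect's space
    return mx > x && mx < x + w && py > y && py < y + h;
  }

  void update(int px, int py) {
    x = (width - px) - w / 2;    // flip again, then recenter the rect on the cursor
    y = py - h / 2;
  }
}
```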

Finally, I experimented with all of the pre-built filters and blend modes on both the background camera feed and the top layer of video rectangles (Danny suggested these might be faster than pixel-by-pixel methods, and my sketch is already slow). In the end, I decided to keep it simple. Concluding adjustments, roughly sketched in code after the list, include:

  • Randomly setting the transparency level of each rectangle within a range of 160-255
  • The option to save a screenshot by pressing the 's' key
  • The ability to add or remove video rectangles by pressing the 'Up' and 'Down' arrows, respectively
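
Here’s roughly how those controls wire up. The saveFrame() call and the UP/DOWN key codes are standard Processing; frags here is an ArrayList of the video rectangles rather than the fixed array above, and makeRandomFrag() is a hypothetical helper standing in for however a new rect actually gets built (random position, size, crop, and a transparency between 160 and 255, applied with tint()).

```processing
void keyPressed() {
  if (key == 's') {
    saveFrame("portrait-####.png");        // screenshot with an auto-numbered filename
  } else if (key == CODED && keyCode == UP) {
    frags.add(makeRandomFrag());           // add another video rectangle (hypothetical helper)
  } else if (key == CODED && keyCode == DOWN && frags.size() > 0) {
    frags.remove(frags.size() - 1);        // remove the most recently added rectangle
  }
}
```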

Here's a rough visual summary of my sketch's progression, including a surprise at the end! Turns out when you apply a blendMode(LIGHTEST) to the background camera feed and a blendMode(DARKEST) to the rectangles, you get a surreal painting effect heightened by the combined blends. I know, I know: I said I was going to keep it simple, but I couldn't resist these additional touches.
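
For the curious, the blend surprise boils down to the order and the modes; something like this, with the same toy pieces as above:

```processing
void draw() {
  blendMode(LIGHTEST);           // background camera feed keeps only the lighter pixels
  image(cam, 0, 0);

  blendMode(DARKEST);            // the video rectangles keep only the darker pixels
  for (Frag f : frags) {
    f.display(cam);
  }

  blendMode(BLEND);              // reset to the default blend mode
}
```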

*A neighborhood effect filter is a nested loop embedded within the nested loop that visits all of the pixels. Nested loops within nested loops—oh my! Basically, when we want to blur or sharpen an image, we combine the pixel array with a small array of weights (a kernel) applied to each pixel’s surrounding neighbors. There are many types of sharpening and blurring filters, but in the most general terms: sharpening increases the contrast between a pixel and its neighbors, while blurring averages a pixel with the values of its neighbors, softening sharp edges and details for a smoothing effect. Danny gave us an easy way to remember all of this: “blurry pixels are friendly pixels” and of course, “sharpened pixels are not.” I also found this useful resource for visualizing the process.
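
If it helps to see the loop-within-a-loop in code, here’s a bare-bones 3x3 convolution in Processing. The kernel shown is a simple box blur; swap in a sharpening kernel such as {0,-1,0},{-1,5,-1},{0,-1,0} for the opposite effect. This is the generic textbook version, not the exact filter from class.

```processing
// The outer loops visit every pixel; the inner loops visit that pixel's
// 3x3 neighborhood and weight each neighbor by the kernel.
float[][] kernel = {
  { 1/9.0, 1/9.0, 1/9.0 },
  { 1/9.0, 1/9.0, 1/9.0 },   // box blur: every neighbor contributes equally
  { 1/9.0, 1/9.0, 1/9.0 }
};

PImage convolve(PImage src) {
  PImage out = createImage(src.width, src.height, RGB);
  src.loadPixels();
  out.loadPixels();
  for (int y = 1; y < src.height - 1; y++) {        // skip a 1px border for simplicity
    for (int x = 1; x < src.width - 1; x++) {
      float r = 0, g = 0, b = 0;
      for (int ky = -1; ky <= 1; ky++) {
        for (int kx = -1; kx <= 1; kx++) {
          color c = src.pixels[(y + ky) * src.width + (x + kx)];
          float wgt = kernel[ky + 1][kx + 1];
          r += red(c) * wgt;
          g += green(c) * wgt;
          b += blue(c) * wgt;
        }
      }
      out.pixels[y * src.width + x] = color(r, g, b);
    }
  }
  out.updatePixels();
  return out;
}
```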

Code on GitHub