PCOMP and ICM Final – Week 14

Based on the results of the user testing, we made several modifications to the design of our project:

  • Feedback: currently, the audio interface happens at the front, while the tangible interface happens at the back. This creates a discrepancy between the two interactions, and we need to find a way to bridge them.
  • Adjustment: to make the shift of attention smoother for users, we decided to push the cage backwards (closer to the projection plane) so that the two interfaces are physically aligned in the same place.

 

  • Feedback: the audio interface of the first step gives a command, while the one for the second step gives a suggestion. 
  • Adjustment: since the audio interface is intentionally kept open for more complex/interesting audio inputs in the future, we decided to keep it, and we adjusted the audio instructions of the second step to be commands as well.

 

  • Feedback: since the projected tweets are in green and red, and they’re projected onto the stairs (which look rectangular), people confused them with the “green & red boxes” where the actual interaction should happen.
  • Adjustment: we removed the green and red colors from the projected tweets and kept the colors only for the food containers on the cage. This way, users should be less likely to be confused about what the “green food container” and “red food container” refer to.

 

We also considered removing the stairs and projecting the tweets directly onto the wall. In this setup, we’ll put the cage on a stand, place 2D bird cutouts or 3D bird models on the wall at both sides of the cage, and project the tweets onto the wall next to each bird. This design removes the length restrictions imposed by the stairs, gives the cage more room, and makes it the prominent element for interaction. We’ll do a round of testing to find out whether this new design works better than the old one.

PCOMP and ICM Final – Version 3

After collecting feedback on the second version of the bird cage device, we discovered the following issues with its design:

  1. The interaction between the users and the project is flat: the device only takes in one user input (assigning topics to the cages) and responds only once, regardless of how fancy the response looks.
  2. The previous design draws a metaphor between tweets from Twitter and the shape of a bird cage, which makes some sense. But the connection is mostly based on the visual aspects of the two rather than the tangible ones – in other words, how a user actually interacts with the physical form of a bird cage, the space the cage occupies, the bird inside the cage, etc., is not considered thoroughly enough.

After some discussion with Nick over the weekend, we decided to throw away the idea of making a physical representation of the Twitter API by weighing how many people tweet about two given topics, and to focus instead on one of the actions most central to owning a bird cage – putting a bird into the cage. Based on this, we made a cardboard prototype to simulate a bird-luring process: by throwing in hashtags of a topic on Twitter, we lure an invisible bird that represents the Twitter data on that topic into the cage.

 

Following this design and some quick feedback from Tom, we further discussed what possible interactions could both take advantage of people’s bird-keeping behavior and, at the same time, connect it to intellectual activities that cannot be achieved by merely keeping a physical bird from nature. I realized that since the bird is a representation of Twitter data, it is also a representation of the public’s opinion on a topic.

In the real world, by feeding the bird some food and observing how it reacts, we can tell whether the bird likes it or not. In the same sense, if we feed the public-data bird a particular kind of thinking and observe its reaction, we can get a physical glimpse into the public’s opinion of, or attitude towards, that thinking.

Further, I realized that this provides an opportunity for us to measure, or rethink, how our perceptions of social phenomena differ from those of the larger public. Living in a physical environment, the opinions we hold are inevitably influenced by the people we engage with on a daily basis. For instance, the result of the recent midterm election in New York turned out to be quite different from many people’s predictions at NYU. For someone who lives in a liberal community inside a liberal state, it is not uncommon that their gauging of the public’s attitude is somewhat biased. This gives us a good opportunity to reveal that bias through the act of feeding a public-data bird.

So for version 3, we’re planning to create an invisible bird that lets users guess at, and reflect on their guesses about, the public’s attitudes towards controversial topics on social media, as shown below:

PComp and ICM Final – Version 2

After collecting feedback from the lovely ITP community, I realized that the previous design about aligning AR with the physical world is a bit far-fetched, in the following respects:

  1. The starting point of the project is not self-explanatory enough. Since urban people nowadays rarely play with sand, the “grab-wand-and-draw-pattern” action would need to be explicitly instructed, or printed on the project itself, which makes the project less intuitive.
  2. Physical feedback for actions occurring in the AR world is missing, since the battle is visible only in AR. The audience wouldn’t notice any change in the physical world until the monster is defeated in AR and transforms into a physical smiling face. This gives a feeling of “using AR for AR’s sake”; the audience would likely see it merely as an enhanced AR game.
  3. Since the audience’s commands are derived from continuous drawings in the sand, they have to clean up the previous pattern before drawing a new one. The cleaning process can be cumbersome, and the project gives no feedback on whether the sand is in a “clean and ready” state. The timing of cleaning and drawing can also be tricky, since everyone may do it differently.

Given the comments above, I realized that emphasizing the “interaction between moving AR objects and physical objects” is perhaps not a very good idea, or at least that it requires other design elements to make the two work well together.

After a discussion with my ITP classmate Nick Tanic, I realized that instead of focusing on the “movement relationship” between AR and the physical world, focusing on their difference in “visibility” might be a better idea. Physical objects are present all the time, while AR objects still need an interface or a medium, like an AR-enabled phone, to actually be seen. Since Nick had an idea about using bird cages to visualize tweets, it rang a bell: the cages could become a wonderful stage for playing with this difference in visibility. So we decided to collaborate on our finals, and here comes the design: an AR-enabled bird cage system that visualizes the quantitative difference between any two given topics/items/phenomena and reveals trends in people’s topic selection.

 

Major Functional Elements

The concept of this installation originates from an idea about making a tool to measure social phenomena. It will consist of four major parts:

  • a chandelier-like hanging structure
  • two empty bird cages that can move up and down
  • an application that is connected to the Twitter API and instructs the installation to move one bird cage up and the other down, behind the scenes
  • an application that reveals what’s inside the two bird cages by using AR

 

How it works

The bird cage installation is made to measure social trends on Twitter. The audience comes up with two hashtags they want to compare (e.g. pizza vs. burger, or Trump vs. Hillary) and assigns them to different cages. The up and down movements of the two bird cages are synchronized with the number of people who tweet about each topic. To see what’s being compared visually, the audience pulls out their phones and enables AR to see what’s inside the bird cages.
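To make the mapping concrete, here is a minimal p5.js sketch of the cage-movement logic. The tweet counts are hard-coded placeholders standing in for the values the Twitter API application would supply, and the two rectangles stand in for the cages.

```javascript
// Minimal sketch: map two tweet counts to the vertical positions of two cages.
// countA and countB are placeholders for values fetched from the Twitter API.
let countA = 320; // e.g. tweets mentioning the first hashtag (placeholder)
let countB = 180; // e.g. tweets mentioning the second hashtag (placeholder)

function setup() {
  createCanvas(400, 400);
  rectMode(CENTER);
}

function draw() {
  background(240);
  // The more-tweeted topic is "heavier", so its cage hangs lower.
  const total = countA + countB;
  const yA = map(countA / total, 0, 1, 100, 300);
  const yB = map(countB / total, 0, 1, 100, 300);
  rect(130, yA, 80, 100); // cage A
  rect(270, yB, 80, 100); // cage B
}
```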

Since the audience performs two tasks, topic assignment and AR observation, it would be boring, or at least less surprising, to perform them in that order, since the assignment process somewhat gives away the AR contents. It is more interesting to reverse the sequence and interpret it in a larger audience context. A possible workflow would be:

  1. We pick two topics and assign them to the bird cages;
  2. The cages move accordingly;
  3. Audience A comes to the bird cages and sees two imbalanced cages;
  4. Audience A pulls out their phone and discovers what is really being compared in AR;
  5. Audience A assigns two new topics and leaves;
  6. The cages move accordingly;
  7. Audience B comes to the bird cages and sees two imbalanced cages;
  8. And so on.

This creates an interaction between the installation and the flow of audience members over time.

 

ICM Week 5: Arrays

This week I made a motion-tracking Match Man that can track your pose in front of a camera and turn it into a “stop-motion style” animation. This should be an ideal tool for any Kung Fu practitioner who wants to learn the Sonic Punch from the master but is too shy to ask in person. 😀

Here is what it feels like in Super Slow Motion mode:

Here are the movements:

And here’s the guy who slowly did the Sonic Punch…

This project is built on top of a machine learning library called ML5. It is a super friendly JavaScript library that empowers anyone with even beginner-level coding skills to take advantage of the mighty AI and do all kinds of things, from image classification to pitch detection. In this project, I used a model called PoseNet that can track the movement of a human body in front of an everyday webcam.
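For reference, here is a minimal ml5 + p5.js sketch of the PoseNet setup (not the full Match Man code): it captures the webcam, asks PoseNet for poses, and draws a dot on each detected keypoint.

```javascript
// Minimal PoseNet sketch with ml5.js and p5.js (both libraries must be loaded).
let video;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  // Load the PoseNet model on the webcam feed.
  const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  // Store the latest detection results whenever a pose comes in.
  poseNet.on('pose', (results) => { poses = results; });
}

function draw() {
  image(video, 0, 0, width, height);
  // Draw a dot on each keypoint (nose, wrists, knees, ...) PoseNet is confident about.
  for (const p of poses) {
    for (const k of p.pose.keypoints) {
      if (k.score > 0.2) {
        fill(255, 0, 0);
        noStroke();
        ellipse(k.position.x, k.position.y, 10, 10);
      }
    }
  }
}
```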

I also collaborated with Ellie Lin to create a Bread Man that can turn you into a piece of moving bread. Check it out and have fun!

 

 

 

ICM & Visual Language – Week 4: “Palette of Me” Image Editor

This week we were asked to pick a palette of our own for the Visual Language class, and to use functions and objects to draw a sketch with P5.js. I combined the work for both and made an image editor that can assign a 5-color palette to any given image. To begin with, I’ll start with the palette I’ve chosen.

Palette of Me
On a lovely Sunday afternoon with tons of sunshine, I wandered around the Rubin Museum of Art with a friend and bumped into some magnificent pieces of Himalayan art. As someone with some, but not much, prior exposure to Tibetan and Hindu culture, I found it a fascinating and eye-opening visit. Here are just a few examples of the beautiful pieces:
My favorite among them is the smiling deity portrait shown below, which, in my opinion, skillfully strikes a balance between peacefulness and holiness. The color palette used here is classic in Himalayan art: black, red, and orange with two complementary shades of brownish green.

Predictably, you might think I would choose my palette based on this timeless combination. Well, I did plan to do that… BUT! As it is such a recurring theme in Himalayan art, I decided to pick something completely different – colors that could never show up in any real Himalayan piece. After a few experiments, I came up with the following purple-based palette with a flashy green highlight.

Now that I have a palette, I need an image editor to help me automatically transform photographs I love into this anti-classic theme. So I made my own editor to do the work.

P5 Image Editor

The idea of this editor is to compare the source image’s color, pixel by pixel, to each color in the palette. A color-distance algorithm determines which palette color is “closest” to a given image color, and the editor then replaces the color of each source pixel with its closest palette color. Two color-distance algorithms are used in my editor – Euclidean and CIE76. Here is the URL to my editor: YG’s 5-Color Palette Image Editor.
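As an illustration, here is a minimal p5.js sketch of the core pixel-remapping step using Euclidean distance in RGB space (the actual editor linked above also supports CIE76). The image path and palette values are placeholders.

```javascript
// Minimal sketch: replace every pixel with the nearest of five palette colors.
// 'photo.jpg' and the palette values are placeholders.
let img;
const palette = [
  [45, 20, 90], [120, 60, 160], [190, 150, 220], [25, 10, 50], [80, 255, 120],
];

function preload() {
  img = loadImage('photo.jpg');
}

function setup() {
  createCanvas(img.width, img.height);
  img.loadPixels();
  for (let i = 0; i < img.pixels.length; i += 4) {
    const r = img.pixels[i], g = img.pixels[i + 1], b = img.pixels[i + 2];
    // Find the palette color with the smallest (squared) Euclidean distance.
    let best = palette[0];
    let bestD = Infinity;
    for (const [pr, pg, pb] of palette) {
      const d = (r - pr) ** 2 + (g - pg) ** 2 + (b - pb) ** 2;
      if (d < bestD) { bestD = d; best = [pr, pg, pb]; }
    }
    img.pixels[i] = best[0];
    img.pixels[i + 1] = best[1];
    img.pixels[i + 2] = best[2];
  }
  img.updatePixels();
  image(img, 0, 0);
}
```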

Final Works

Here are the final six compositions with my palette. They are famous graffiti walls I visited in six different cities around the world: Guangzhou, Hong Kong, Shanghai, Beijing, Penang, and New York.

—- “Paletted Walls” —-

Added a bonus yellow:

ICM – Week 2: Variables

This week, we were asked to create a sketch that includes:

  • One element controlled by the mouse.
  • One element that changes over time, independently of the mouse.
  • One element that is different every time you run the sketch.
  • e.g. Try refactoring your Week 1 HW by removing all the hard-coded numbers except for createCanvas(). Have some of the elements follow the mouse. Have some move independently. Have some move at random.
  • e.g. Do the above but change color, alpha, and/or strokeWeight instead of position.
  • Or do something completely different!

—- What I Drew —-

The self-portrait I drew in Week 1 was a drummer. Now, incorporating the elements mentioned above, I turned myself into a DJ drummer lol!

  • Use the mouse to control daytime mode or night-club mode
  • Make myself shine in various colors over time
  • Incorporate the Left Arrow and Right Arrow keys to play the drum
  • Make myself BURN in RED when I drum crazily with the keyboard!! (see the sketch after this list)
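Here is a minimal p5.js sketch of these interactions (the real portrait lives in the linked project below); the circle and colors are placeholders for the actual drummer drawing.

```javascript
// Minimal sketch of the listed interactions; the circle stands in for the drummer.
let hits = 0; // how hard I've been drumming recently

function setup() {
  createCanvas(400, 400);
  colorMode(HSB, 360, 100, 100);
}

function draw() {
  // Mouse controls the mode: daytime on the left half, night-club on the right.
  background(mouseX < width / 2 ? color(200, 15, 95) : color(260, 60, 12));
  // Cycle through colors over time, but BURN in red when drumming crazily.
  const hue = hits > 10 ? 0 : frameCount % 360;
  fill(hue, 80, 90);
  noStroke();
  ellipse(width / 2, height / 2, 120 + hits * 4);
  hits = max(0, hits - 0.1); // drumming intensity decays over time
}

function keyPressed() {
  // The Left and Right Arrow keys play the drum.
  if (keyCode === LEFT_ARROW || keyCode === RIGHT_ARROW) hits += 1;
}
```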

—- Link to My DJ ver. Portrait —-

URL: https://editor.p5js.org/full/Sk8ONr1Km

Have Fun!