Light & Interactivity Week 1 – Emotional Fade

This week’s task was to create an interruptible fading LED. I built a small device that uses fading LEDs to express different emotions in response to different kinetic stimuli – it’s called Pokemon Box Plus (beta).



Ideation

The concept of this piece is inspired by the Poké Ball Plus, a Nintendo Switch gaming peripheral that works together with the Let’s Go Pikachu/Eevee games. The ball can store a Pokemon’s data, and it glows in several ways, along with the Pokemon’s growl, in response to the player’s movements and actions taken on the controller. After playing with it for a while, I found that the number of glowing effects is rather limited and does not contribute very well to the feeling of “a living Pokemon dwelling in the ball”. So, I decided to make my own. Similarly, the Pokemon living in my box reacts using light, and depending on how a user moves the device, it responds in different ways.

Photo by Game Rant

Prototype

Given the limited time frame, I only built the light responses in this beta version and left the sound part aside for the moment. The goal of this version is to use ONLY fading effects to represent, or at least relate a user to, different kinds of emotions. To test the idea, I set three constraints:

  • Only use one color;
  • Multiple LEDs are OK, but they should act in unison, fading in and out at the same time with the same pattern;
  • The spatial layout of the LED(s) should be as simple as possible.

Based on these constraints, I chose two attributes of fade, Speed and Intensity, and picked three combinations to represent three types of emotions – Peaceful/Calm, Joy/Delight, and Excitement/Surprise, as seen below:
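To keep the fade interruptible, the animation has to advance in small non-blocking steps rather than sitting in delay() loops, so the emotion can change mid-fade. Below is a minimal Arduino sketch of that approach; the pin number, step intervals, and peak brightness values are illustrative placeholders, not the project’s exact values.

```cpp
const int LED_PIN = 9;  // any PWM-capable pin (placeholder)

// Per-emotion fade settings: step interval (Speed) and peak brightness (Intensity).
struct FadeStyle {
  unsigned long stepMs;  // time between brightness steps; smaller = faster fade
  int peak;              // maximum PWM value; higher = more intense
};

FadeStyle calm    = {30, 120};  // slow, dim   -> Peaceful / Calm
FadeStyle joy     = {15, 200};  // medium      -> Joy / Delight
FadeStyle excited = { 5, 255};  // fast, full  -> Excitement / Surprise

FadeStyle current = calm;
int brightness = 0;
int dir = 1;  // +1 while fading in, -1 while fading out
unsigned long lastStep = 0;

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  // One small step per pass instead of blocking in delay(), so a new
  // emotion can take over the fade at any moment.
  unsigned long now = millis();
  if (now - lastStep >= current.stepMs) {
    lastStep = now;
    if (brightness >= current.peak) dir = -1;       // start fading out at the peak
    else if (brightness <= 0) dir = 1;              // start fading in at zero
    brightness += dir;
    analogWrite(LED_PIN, brightness);
  }
  // ...read the accelerometer here and reassign `current` when the
  // movement pattern changes (see the classifier sketch further down).
}
```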

For the user input, since I’m creating a dwelling place for a miniaturized Pokemon, I didn’t want to use a hard button or joystick to forcefully make it react. Instead, I decided to stick to the overall movement of the box in 3D space, to create an effect that feels like “the Pokemon senses how its house is moving and responds in different ways”.

To do this, I used an accelerometer to capture the movement of the box, and mapped different movement patterns to the emotions above (a rough classifier sketch follows the list):

  • Gentle, Small Range — Peaceful / Calm
  • Rhythmic, Medium Range — Joy / Delight
  • Free Fall — Excitement / Surprise
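A simple way to tell these patterns apart is to look at the magnitude of the acceleration vector: it hovers around 1 g for gentle handling, swings further from 1 g for rhythmic shaking, and drops toward 0 g in free fall. Here is a rough sketch of that idea, assuming an analog three-axis accelerometer (e.g. an ADXL335) on pins A0–A2; the scale factor and thresholds are illustrative guesses rather than the tuned values from the project.

```cpp
// Convert raw readings into a unitless magnitude, in g.
// The offset (512) and scale (~102.4 counts per g) depend on the
// specific accelerometer and board, so treat them as placeholders.
float readMagnitude() {
  float x = (analogRead(A0) - 512) / 102.4;
  float y = (analogRead(A1) - 512) / 102.4;
  float z = (analogRead(A2) - 512) / 102.4;
  return sqrt(x * x + y * y + z * z);
}

// Returns 0 = Peaceful/Calm, 1 = Joy/Delight, 2 = Excitement/Surprise.
int classifyMovement() {
  float g = readMagnitude();
  if (g < 0.3) return 2;  // near zero g: the box is in free fall -> Surprise
  if (g > 1.4) return 1;  // medium swings away from 1 g          -> Joy
  return 0;               // small deviation from 1 g             -> Calm
}
```

A more robust version would average the magnitude over a short window and look at its variance to separate “gentle” from “rhythmic”, but single-reading thresholds are enough to sketch the mapping.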

Based on the design above, several iterations were made to test the optimal size of the box, the type of diffuser to use, and how well the device responds when it sits in a user’s hand.

Final Work

Here’s how it works in action:

Animation Week 6 & 7 – Augmented Reality


For weeks 6 & 7 of my animation class, I worked on an augmented reality project that serves as a counterpart to another project of mine, called the Invisible Bird. Together, they aim to expose the invisible thought cages we’ve built around ourselves, and to make people realize how long we’ve been trapped.

The Invisible Bird project lets people guess whether Twitter’s overall attitude toward a certain topic is positive or negative, by asking them to feed an invisible bird that represents a specific Twitter topic. This AR piece, on the other hand, does the opposite: it presents Twitter’s attitude toward a hidden topic by lighting up an empty bird cage in green or red. Then, to reveal what the topic is, the user needs to use AR to see what’s inside a “positive” or a “negative” cage.

If the cage is red, Twitter’s attitude toward the topic is mostly negative; if the cage is green, it’s mostly positive. In the video above, people’s attitudes on Twitter about four topics are revealed as follows: positive about the legalization of marijuana, negative about the trade war, positive about the death penalty, and finally, negative about gun control.

Building Process

I used A-Frame with AR.js to visualize the 3D assets, and tested them using target markers related to the different topics:

Then I prepared assets using Google Poly and tied them to the physical cage, whose lights are driven over serial communication.
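For reference, the Arduino side of such a serial link can stay very small: the computer sends one byte once the sentiment for the current topic is known, and the cage lights up accordingly. The single-character ‘G’/‘R’ protocol and pin numbers below are assumptions for illustration, not the project’s actual code.

```cpp
const int GREEN_PIN = 5;  // green LEDs: topic reads mostly positive (placeholder pin)
const int RED_PIN   = 6;  // red LEDs: topic reads mostly negative (placeholder pin)

void setup() {
  Serial.begin(9600);
  pinMode(GREEN_PIN, OUTPUT);
  pinMode(RED_PIN, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    char cmd = Serial.read();
    if (cmd == 'G') {             // assumed command byte for "positive"
      digitalWrite(GREEN_PIN, HIGH);
      digitalWrite(RED_PIN, LOW);
    } else if (cmd == 'R') {      // assumed command byte for "negative"
      digitalWrite(GREEN_PIN, LOW);
      digitalWrite(RED_PIN, HIGH);
    }
  }
}
```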

Future Works

The idea of revealing AR objects through tiny physical devices is drawn from an art piece by my instructor, Gabriel Barcia-Colombo, featuring animated people sleeping on hand-sized beds. Future work on this project will focus on creating assets that are more straightforward, and on exploring how cages made of different materials can convey a deeper connection between the 3D model and the cage itself.

Credits of 3D assets:

Google, VR XRTIST (XRTIST), Evol Love, Robert Mirabelle

PCOMP and ICM Final – Week 14

Based on the results of the user testing, we made several modifications to the design of our project:

  • Feedback: currently, the audio interface happens at the front, while the tangible interface happens at the back. This creates a discrepancy between the two interactions, and we need a way to bridge them.
  • Adjustment: to make the shift of attention smoother for users, we decided to push the cage backwards (closer to the projection plane), so that the two interfaces are physically aligned in the same place.

 

  • Feedback: the audio interface of the first step gives a command, while the one for the second step gives a suggestion. 
  • Adjustment: since the audio interface is intentionally kept to allow more complex/interesting audio inputs in the future, we decided to keep it, and we adjusted the audio instructions of the second step to be commands as well.

 

  • Feedback: since the projected tweets are green and red and are projected onto the stairs (which look rectangular), people confused them with the “green & red boxes” where the actual interaction should happen.
  • Adjustment: we removed the green and red colors from the projected tweets and kept the colors only on the food containers of the cage. This way, users are less likely to be confused about what the “green food container” and “red food container” refer to.

 

We also considered removing the stairs and projecting the tweets directly onto the wall. In this setup, we’d put the cage on a stand, place 2D bird cutouts or 3D bird models on the wall at both sides of the cage, and project the tweets onto the wall next to each bird. This design removes the length restrictions imposed by the stairs, gives the cage more room, and makes it the prominent element for interaction. We’ll do a round of testing to find out whether this new design works better than the old one.

Animation Week 3 – Storyboarding

For the next animation project, I’ll be working with fellow ITP student Xiaotong Ma to tell a story about a dinosaur. The dinosaur was born in, and later broke away from, ITP. It escaped from the Tisch building, crashed its way through New York City, and eventually took down the Statue of Liberty.

In telling this story, we are trying to mimic scenes from different video games, recreating them with photos and Google Earth imagery, incorporating them into the animation, and sending our dinosaur through these scenes. Below are the storyboards:

Note: the shape with a letter D inside represents the main character – the dinosaur – in our storyboards.

Board 1

  • Perspective: a top-down view from Google Earth, looking at the Tisch building from the sky
  • Content: a tiny dinosaur appears above the building.

 

Board 2

  • Perspective: same as board 1
  • Content: a zoomed-in view of the dinosaur, preparing for a transition to board 3

 

Board 3

  • Perspective: first-person perspective, horizontal view looking at Gabe’s office
  • Content: the first game we’re planning to use is Pokemon Go, so this scene shows catching the dinosaur in front of Gabe’s office. The background will be a picture of the office door, with a hand-held cellphone showing a mock-up of the Pokemon Go game.

 

 

Board 4

  • Perspective: same as 3
  • Content: the player tries to catch the dinosaur using a Poke Ball, but fails, and the dinosaur runs away.

 

Board 5

  • Perspective: first-person, horizontal view of a window inside ITP
  • Content: now the dinosaur tries to escape. It breaks through the window and jumps outside.

 

Board 6

  • Perspective: top-down view from the sky, as in Google Earth
  • Content: the dinosaur runs away from the building. It first falls to the street, then attempts to cross it. Now the game of Frogger begins.

 

Board 7

  • Perspective: same as 6
  • Content: the dinosaur tries to cross the street and fails a couple of times.

 

Board 8

  • Perspective: same as 7
  • Content: the dinosaur eventually succeeds. It then exits the scene by moving to the right.

 

Board 9

  • Perspective: first-person, horizontal view looking at the arch of Washington Square Park
  • Content: the dinosaur enters the scene from the left.

 

Board 10

  • Perspective: same as 9
  • Content: now the dinosaur enters the game of Mario Brothers. It moves under the arch, jumps, and hits it from below. A mushroom appears, and the dinosaur jumps to eat the mushroom.

 

Board 11

  • Perspective: same as 10
  • Content: the dinosaur becomes very big. It then moves to the right and exits the scene.

 

Board 12

  • Perspective: top-down, looking from the sky as in Google Earth
  • Content: the dinosaur now moves away from Manhattan toward the Statue of Liberty. It arrives at the riverside from the left and jumps into the water at the right.

 

Board 13

  • Perspective: same as 12
  • Content: the dinosaur moves through the water and crashes into a few boats.

 

Board 14

  • Perspective: third-person, looking from behind the dinosaur
  • Content: the dinosaur appears from the bottom left of the scene and moves towards the Statue of Liberty.

 

Board 15

  • Perspective: third-person, horizontal view of the dinosaur and the statue
  • Content: now the dinosaur enters the final game, Street Fighter. Its battle with the statue begins.

 

Board 16

  • Perspective: same as 15, zoomed in on the statue a little
  • Content: the statue throws its torch at the dinosaur to attack it

 

 

Board 17

  • Perspective: same as 15
  • Content: the torch hits the dinosaur, doing a tiny amount of damage to its HP. Then the torch bounces back to the statue’s feet.

 

Board 18

  • Perspective: same as 17
  • Content: the dinosaur will retaliate by spitting tons of fire.

 

Board 19

  • Perspective: same as 18
  • Content: the statue is burnt. It loses all its HP, turns into dust, and disappears. The dinosaur wins!

 

Board 20

  • Perspective: third-person, looking from behind the dinosaur
  • Content: the dinosaur moves toward the base of the statue, and picks up the torch.

 

Board 21

  • Perspective: third-person, looking at the dinosaur while zooming out
  • Content: the dinosaur becomes the new landmark of New York! THE END.

PCOMP and ICM Final – Version 3

After collecting feedback on the second version of the bird cage device, we discovered the following design issues:

  1. The interaction between the users and the project is flat: the device only takes in one user input (assigning topics to the cages) and responds only once, regardless of how fancy the response looks.
  2. The previous design draws a metaphor between tweets from Twitter and the shape of a bird cage, which makes some sense. But the connection is based mostly on the visual aspects of the two rather than the tangible ones – in other words, how a user actually interacts with the physical form of a bird cage, the space the cage occupies, the bird inside the cage, and so on, is not considered thoroughly enough.

After some discussion with Nick over the weekend, we decided to abandon the idea of making a physical representation of the Twitter API by weighing how many people tweet about two given topics, and to focus instead on one of the actions most central to owning a bird cage – putting a bird into it. Based on this, we made a cardboard prototype to simulate a bird-luring process: by throwing in hashtags of a Twitter topic, we lure an invisible bird representing the Twitter data on that topic into the cage.

 

Following this design and quick feedback from Tom, we discussed which interactions could both take advantage of people’s bird-keeping behavior and, at the same time, connect it to intellectual activities that cannot be achieved by merely keeping a physical bird from nature. I realized that since the bird is a representation of Twitter data, it is also a representation of the public’s opinion on a topic.

In the real world, by feeding a bird some food and observing how it reacts, we can tell whether the bird likes it or not. In the same sense, if we feed the public data bird a particular kind of thinking, observing the bird’s reaction gives us a physical glimpse into the public’s opinion of, or attitude towards, that thinking.

Further, I realized that this provides an opportunity to measure and rethink how our perceptions of social phenomena differ from those of the larger public. Living in a physical environment, the opinions we hold are inevitably influenced by the people we engage with daily. For instance, the result of the recent midterm election in New York turned out to be quite different from many NYU people’s predictions. For someone living in a liberal community inside a liberal state, it is not uncommon that their gauge of the public’s attitude is somewhat biased. This gives us a good opportunity to reveal that bias through the act of feeding a public data bird.

So for version 3, we’re planning to create an invisible bird that lets users guess, and then reflect on their guesses about, the public’s attitudes towards controversial topics on social media, as shown below:

PComp and ICM Final – Version 2

After collecting feedback from the lovely ITP community, I realized that the previous design for aligning AR with the physical world is a bit far-fetched, in the following respects:

  1. The starting point of the project is not self-explanatory enough. Since urban people nowadays rarely play with sand, the “grab-wand-and-draw-pattern” action would need to be explicitly instructed, or printed on the project itself, which makes the project less intuitive.
  2. Physical feedback for actions occurring in the AR world is missing, since the battle is visible only in AR. The audience wouldn’t notice any change in the physical world until the monster is defeated in AR and transforms into a physical smiling face. This gives a feeling of “using AR for AR’s sake”; the audience would likely see it merely as an enhanced AR game.
  3. Since the audience’s commands are derived from continuous drawings in the sand, they have to clean up the previous pattern before drawing a new one. The cleaning process can be cumbersome, and the project gives no feedback for a “clean and ready” status. The timing of cleaning and drawing can also be tricky, since everyone may do it differently.

Given the above comments, I realized that emphasizing the “interaction between moving AR objects and physical objects” is perhaps not a very good idea – or at least, it requires other design elements to make the two work well together.

After a discussion with my ITP classmate Nick Tanic, I realized that instead of focusing on the “movement relationship” between the AR and physical worlds, focusing on their difference in “visibility” might be a better idea. Physical objects are present all the time, while AR objects still require an interface or medium, like an AR-enabled phone, to actually be seen. Since Nick had an idea about using bird cages to visualize tweets, it rang a bell: this could be a wonderful stage for playing with that difference in visibility. So we decided to collaborate on our finals, and here is the design: an AR-enabled bird cage system that visualizes the quantitative difference between any two given topics/items/phenomena and reveals trends in people’s topic selection.

 

Major Functional Elements

The concept of this installation originates from an idea about making a tool to measure social phenomena. It is comprised of four major parts:

  • a chandelier-like hanging structure
  • two empty bird cages that can move up and down
  • an application that is connected to the Twitter API and, behind the scenes, instructs the installation to move one bird cage up and the other down
  • an application that reveals what’s inside the two bird cages by using AR

 

How it works

The bird cage installation is made to measure social trends on Twitter. The audience can come up with two hashtags they want to compare (e.g. pizza vs. burger, or Trump vs. Hillary) and assign these hashtags to the two cages. The up and down movements of the cages are synchronized with the number of people who tweet about each topic. To see what’s being compared, the audience pulls out their phones and uses AR to look inside the birdcages.
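As a sketch of how that synchronization could work on the hardware side, suppose each cage hangs from a servo-driven winch and the connected computer sends the two tweet counts over serial as “countA,countB\n”. The protocol, pins, and angle mapping below are illustrative assumptions, not the installation’s actual code.

```cpp
#include <Servo.h>

Servo cageA, cageB;  // one servo winch per cage (placeholder pins below)

void setup() {
  Serial.begin(9600);
  cageA.attach(9);
  cageB.attach(10);
}

void loop() {
  if (Serial.available() > 0) {
    long a = Serial.parseInt();     // tweet count for topic A
    long b = Serial.parseInt();     // tweet count for topic B
    if (Serial.read() == '\n' && a + b > 0) {
      // Map topic A's share of the total to a servo angle, so the cage
      // whose topic has more tweets rises while the other sinks.
      int angle = map(a * 100 / (a + b), 0, 100, 0, 180);
      cageA.write(angle);
      cageB.write(180 - angle);     // mirror the movement on the other cage
    }
  }
}
```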

Since the audience performs two tasks – topic assignment and AR observation – it would be boring, or at least less surprising, to perform them in that order, since the assignment process gives away the AR content. It’s more interesting to reverse the sequence and interpret it in the context of a larger audience flow. A possible workflow:

  1. We pick two topics and assign them to the bird cages;
  2. The cages move accordingly;
  3. Audience A comes to the bird cages and sees two imbalanced cages;
  4. Audience A pulls out their phone and discovers what is really being compared in AR;
  5. Audience A assigns two new topics, and leaves;
  6. The cages move accordingly;
  7. Audience B comes to the bird cages and sees two imbalanced cages;
  8. And so on.

This creates an interaction between the installation and the flow of audience members over time.