Animation Week 3 – Storyboarding

For the next animation project, I’ll be working with fellow ITP student Xiaotong Ma to tell a story about a dinosaur. The dinosaur was born at ITP and later broke away from it: it escaped from the Tisch building, crashed its way through New York City, and eventually took down the Statue of Liberty.

In telling this story, we are trying to mimic scenes from different video games, recreate them using photos and Google Earth images, incorporate them into the animation, and have our dinosaur travel through these scenes. Below are the storyboards:

Note: the shape with the letter D inside represents the main character – the dinosaur – in our storyboards.

Board 1

  • Perspective: a top-down view from Google Earth, looking at the Tisch building from the sky
  • Content: a tiny dinosaur appears above the building.

Board 2

  • Perspective: same as board 1
  • Content: a zoomed-in view of the dinosaur, setting up the transition to board 3.

Board 3

  • Perspective: first-person perspective, horizontal view looking at Gabe’s office
  • Content: the first game we’re planning to use is Pokémon Go, so this scene is about catching the dinosaur in front of Gabe’s office. The background will be a picture of the office door, and a hand-held cellphone will show a mock-up of the Pokémon Go game.

Board 4

  • Perspective: same as 3
  • Content: the player tries to catch the dinosaur with a Poké Ball, but fails, and the dinosaur runs away.

Board 5

  • Perspective: first-person, horizontal view of a window inside ITP
  • Content: the dinosaur now tries to escape. It breaks through the window and jumps outside.

Board 6

  • Perspective: top-down view from the sky, as in Google Earth
  • Content: the dinosaur runs away from the building. It first falls to the street, then attempts to cross it. Now the game of Frogger begins.

Board 7

  • Perspective: same as 6
  • Content: the dinosaur tries to cross the street and fails a couple of times.

Board 8

  • Perspective: same as 7
  • Content: the dinosaur eventually succeeds. It then exits the scene by moving to the right.

Board 9

  • Perspective: first-person, horizontal view looking at the arch of Washington Square Park
  • Content: the dinosaur enters the scene from the left.

Board 10

  • Perspective: same as 9
  • Content: now the dinosaur enters the game of Mario Brothers. It moves under the arch, jumps, and hits it from below. A mushroom appears, and the dinosaur jumps up to eat it.

Board 11

  • Perspective: same as 10
  • Content: the dinosaur becomes very big, then moves to the right and exits the scene.

Board 12

  • Perspective: top-down view from the sky, as in Google Earth
  • Content: the dinosaur now moves away from Manhattan towards the Statue of Liberty. It arrives at the riverside from the left and jumps into the water on the right.

Board 13

  • Perspective: same as 12
  • Content: the dinosaur moves through the water, crashing into a few boats.

Board 14

  • Perspective: third-person perspective, looking over the dinosaur’s back
  • Content: the dinosaur appears from the bottom left of the scene and moves towards the Statue of Liberty.

Board 15

  • Perspective: third-person, horizontal view of the dinosaur and the statue
  • Content: now the dinosaur enters the final game, Street Fighter. Its battle with the statue begins.

Board 16

  • Perspective: same as 15, zoomed in slightly on the statue
  • Content: the statue throws its torch at the dinosaur to attack it.

Board 17

  • Perspective: same as 15
  • Content: the torch hits the dinosaur, doing a tiny bit of damage to its HP, then bounces back to the statue’s feet.

Board 18

  • Perspective: same as 17
  • Content: the dinosaur retaliates by spitting tons of fire.

Board 19

  • Perspective: same as 18
  • Content: the statue is burnt. It loses all its HP, turns into dust, and disappears. The dinosaur wins!

Board 20

  • Perspective: third-person perspective, looking over the dinosaur’s back
  • Content: the dinosaur moves to the base of the statue and picks up the torch.

Board 21

  • Perspective: third-person perspective, looking at the dinosaur while zooming out
  • Content: the dinosaur becomes the new landmark of New York! THE END.

PComp and ICM Final – Version 3

After collecting feedback on the second version of the bird cage device, we discovered the following issues with its design:

  1. The interaction between the users and the project is flat: the device takes in only one user input (assigning topics to the cages) and responds only once, regardless of how fancy the response looks.
  2. The previous design draws a metaphor between tweets from Twitter and the shape of a bird cage, which makes some sense. But the connection is mostly based on the visual aspects of the two rather than the tangible ones – in other words, how a user actually interacts with the physical form of a bird cage, the space the cage occupies, the bird inside it, and so on, was not considered thoroughly enough.

After some discussion with Nick during the weekend, we decided to throw away the idea of making a physical representation of the Twitter API by weighing how many people tweet on two given topics, and to focus instead on one of the most important actions for a person who owns a bird cage – putting a bird into the cage. Based on this, we made a cardboard prototype to simulate a bird-luring process: by throwing in hashtags for a topic on Twitter, the user lures an invisible bird, representing the Twitter data on that topic, into the bird cage.
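As a concrete sketch of the luring logic (not our actual implementation), the prototype could poll Twitter for recent activity on the thrown-in hashtag and release the invisible bird once a threshold is crossed. The snippet below assumes Tweepy v3 with placeholder credentials, and the threshold value is made up for illustration:

    import tweepy  # assumes Tweepy v3, whose API object exposes search()

    # Placeholder credentials; real Twitter API keys would go here.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)

    LURE_THRESHOLD = 50  # made-up number of recent tweets needed to lure the bird

    def recent_tweet_count(hashtag, sample_size=100):
        # Count how many tweets in a recent sample mention the hashtag.
        return len(api.search(q=hashtag, count=sample_size, result_type="recent"))

    if recent_tweet_count("#pizza") >= LURE_THRESHOLD:
        print("The invisible data bird flies into the cage!")
    else:
        print("Not enough chatter yet; keep throwing in hashtags.")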

Following this design and quick feedback from Tom, we further discussed which interactions could take advantage of people’s bird-keeping behavior while also connecting it to intellectual activities that cannot be achieved by merely keeping a physical bird from nature. I realized that since the bird is a representation of Twitter data, it is also a representation of the public’s opinion on a topic.

In the real world, by feeding a bird some food and observing how it reacts, we can tell whether the bird likes it or not. In the same sense, if we feed the public-data bird a particular kind of thinking and observe its reaction, we get a physical glimpse into the public’s opinions or attitudes towards that thinking.
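To make “observing the bird’s reaction” concrete, one naive approach (a sketch, not a real sentiment model) would be to score a sample of tweets about the fed topic against hand-picked positive and negative word lists; the lists below are hypothetical placeholders:

    # Hypothetical word lists; a real version might use a sentiment-analysis
    # library instead of this naive keyword matching.
    POSITIVE = {"love", "great", "agree", "support", "yes"}
    NEGATIVE = {"hate", "terrible", "disagree", "against", "no"}

    def bird_reaction(tweets):
        # Returns a score in [-1, 1]; positive means the bird "likes" the food.
        score = 0
        for text in tweets:
            words = set(text.lower().split())
            score += len(words & POSITIVE) - len(words & NEGATIVE)
        return max(-1.0, min(1.0, score / max(len(tweets), 1)))

    print(bird_reaction(["I love this idea", "I disagree and I am against it"]))  # -0.5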

Further, I realized that this provides an opportunity to measure, or rethink, how our perceptions of social phenomena differ from those of the larger public. The opinions we hold are inevitably influenced by the people we engage with on a daily basis. For instance, the results of the recent midterm election in New York turned out quite different from many people’s predictions at NYU. For someone living in a liberal community in a liberal state, it is not uncommon that their gauge of the public’s attitude is somewhat biased, and the feeding interaction with a public-data bird offers a good opportunity to reveal that bias.

So for version 3, we’re planning to create an invisible bird that lets users guess about the public’s attitudes towards controversial topics on social media, and then reflect on those guesses, as shown below:

PComp and ICM Final – Version 2

After collecting feedback from the lovely ITP community, I realized that the previous design for aligning AR with the physical world is a bit far-fetched, in the following respects:

  1. The starting point of the project is not self-explanatory enough. Since urban people nowadays rarely play with sand, the “grab-wand-and-draw-pattern” action would need to be explicitly instructed, or printed on the project itself, which makes the project less intuitive.
  2. Physical feedback for actions occurring in the AR world is missing, since the battle is visible only in AR, and the audience wouldn’t notice any change in the physical world until the monster is defeated in AR and transforms into a physical smiling face. This gives a feeling of “using AR for AR’s sake”: the audience would likely see it merely as an enhanced AR game.
  3. Since the audience’s commands are derived from continuous drawings in the sand, they must erase the previous pattern before drawing a new one. The cleaning process can be cumbersome, and the project provides no feedback for a “clean and ready” status. The timing of cleaning and drawing can also be tricky, since everyone may do it differently.

Given the above comments, I realized that emphasizing the “interaction between moving AR objects and physical objects” is perhaps not a very good idea – or at least, it requires other design elements to make the two work well together.

After a discussion with my ITP classmate Nick Tanic, I realized that instead of focusing on the “movement relationship” between AR and the physical world, focusing on their difference in “visibility” might be a better idea. Physical objects are present all the time, while AR objects still require an interface or medium, like an AR-enabled phone, to actually be seen. Since Nick had an idea about using bird cages to visualize tweets, it rang a bell for me: the cages could become a wonderful stage for playing with this difference in visibility. So we decided to collaborate on our finals, and here comes the design: an AR-enabled bird cage system that visualizes the quantitative difference between any two given topics/items/phenomena and reveals trends in people’s topic selection.

Major Functional Elements

The concept of this installation originates from an idea about making a tool to measure social phenomena. It comprises four major parts:

  • a chandelier-like hanging structure
  • two empty bird cages that can move up and down
  • an application that connects to the Twitter API and, behind the scenes, instructs the installation to move one bird cage up and the other down (a rough sketch follows this list)
  • an application that reveals what’s inside the two bird cages using AR
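As a sketch of what the behind-the-scenes controller could look like, assuming pyserial for talking to an Arduino that drives the cage motors (the serial port name and the one-byte ‘A’/‘B’ protocol here are assumptions, and the real motor control would live on the Arduino side):

    import serial  # pyserial, to talk to the Arduino driving the cage motors

    def update_cages(count_a, count_b, port="/dev/ttyUSB0"):
        # One plausible mapping: the more-tweeted topic "weighs" its cage down,
        # so we tell the Arduino which cage to lower ('A' or 'B').
        with serial.Serial(port, 9600, timeout=1) as arduino:
            arduino.write(b"A" if count_a >= count_b else b"B")

    update_cages(340, 120)  # e.g. topic A has 340 recent tweets, topic B has 120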

How it works

The bird cage installation is made to measure social trends on Twitter. The audience comes up with two hashtags they want to compare (e.g. pizza vs burger, or Trump vs Hillary) and assigns them to different cages. The up-and-down movements of the two bird cages are synchronized with the number of people who tweet about each topic. To see what’s being compared visually, the audience pulls out their phones and enables AR to see what’s inside the bird cages.
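One way to make this synchronization concrete is to split a fixed vertical travel range between the two cages in proportion to their tweet counts. A minimal sketch (the 40 cm travel range is an assumed value for illustration):

    def cage_heights(count_a, count_b, travel_cm=40.0):
        # Split a fixed vertical travel range proportionally to tweet counts:
        # the more-tweeted topic's cage hangs lower, as if it were heavier.
        total = max(count_a + count_b, 1)
        height_a = travel_cm * (1 - count_a / total)
        return height_a, travel_cm - height_a

    print(cage_heights(300, 100))  # (10.0, 30.0): topic A's cage hangs lower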

Since the audience performs two tasks, topic assignment and AR observation, it would be boring, or at least less surprising, to perform them in that order: the assignment process gives away the AR content. It is more interesting to reverse the sequence and interpret it in the context of a larger audience flow. A possible workflow:

  1. We pick two topics and assign them to the bird cages;
  2. The cages move accordingly;
  3. Audience A comes to the bird cages and sees two imbalanced cages;
  4. Audience A pulls out his/her phone and discovers what is really being compared in AR;
  5. Audience A assigns two new topics, then leaves;
  6. The cages move accordingly;
  7. Audience B comes to the bird cages and sees two imbalanced cages;
  8. And so on.

This creates an interaction between the installation and the flow of audience members over time.