Animation Week 6&7 – Augmented Reality


For weeks 6 & 7 of my animation class, I worked on an augmented reality project that serves as a counterpart to another project of mine, called the Invisible Bird. Together, they aim to expose the invisible thought cages we’ve built around ourselves, and to make people realize how long we’ve been trapped.

The Invisible Bird project lets people guess whether Twitter’s overall attitude toward a certain topic is positive or negative by asking them to feed an invisible bird representing a specific Twitter topic. This AR piece, on the other hand, does the opposite. It presents Twitter’s attitude toward a hidden topic by lighting up an empty bird cage in green or red. Then, to reveal what the topic is, the user needs to use AR to see what’s inside a “positive” or a “negative” cage.

If the cage is red, Twitter’s attitude toward the hidden topic is mostly negative; if it is green, mostly positive. In the video above, people’s attitudes on Twitter about four topics are revealed as follows: positive about the legalization of marijuana, negative about the trade war, positive about the death penalty, and finally, negative about gun control.

Building Process

I used A-Frame with AR.js to visualize the 3D assets for the different topics, and tested it using target markers related to each topic:
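A marker-based A-Frame + AR.js scene along these lines would show a model only when the camera sees its marker. This is a minimal sketch, not the project’s actual code — the script URLs, the marker pattern file, and the asset path are all placeholders:

```html
<!-- Minimal marker-based AR.js scene (sketch; file paths are placeholders) -->
<script src="https://aframe.io/releases/0.9.2/aframe.min.js"></script>
<script src="https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js"></script>

<a-scene embedded arjs>
  <a-assets>
    <!-- Hypothetical glTF asset, e.g. one exported from Google Poly -->
    <a-asset-item id="topic-model" src="assets/topic.gltf"></a-asset-item>
  </a-assets>

  <!-- The model is rendered only while this pattern marker is in view -->
  <a-marker type="pattern" url="markers/topic-marker.patt">
    <a-entity gltf-model="#topic-model" scale="0.5 0.5 0.5"></a-entity>
  </a-marker>

  <a-entity camera></a-entity>
</a-scene>
```

Each topic can get its own `<a-marker>`/model pair, so pointing the camera at a different marker reveals a different hidden topic.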

Then I prepared assets using Google Poly, and connected the sketch to the physical cage over serial communication.
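The cage-lighting logic can be sketched roughly as follows. This is an illustrative outline, not the project’s actual code: the `cageColor` helper, the serial port path, and the use of the Node `serialport` package are all assumptions.

```javascript
// Sketch: map an aggregate Twitter sentiment score to the cage's light color.
// Positive sentiment lights the cage green, negative lights it red.
function cageColor(sentimentScore) {
  return sentimentScore >= 0 ? 'green' : 'red';
}

// Hypothetical serial hookup: send the color to a microcontroller that
// drives the cage's LEDs (port path and baud rate are placeholders).
//
// const { SerialPort } = require('serialport');
// const port = new SerialPort({ path: '/dev/tty.usbmodem1', baudRate: 9600 });
// port.write(cageColor(0.42) + '\n');
```

On the other end, the microcontroller only needs to read a line from serial and switch the LED color accordingly.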

Future Work

The process of revealing AR objects through tiny physical devices is drawn from an art piece of animated people sleeping on different hand-sized beds, created by my instructor, Gabriel Barcia-Colombo. Future work on this project will focus on creating more straightforward assets, and on exploring how cages made of different materials can convey a deeper connection between the 3D model and the cage itself.

Credits for 3D assets:

Google, VR XRTIST (XRTIST), Evol Love, Robert Mirabelle

PCOMP and ICM Final – Week 14

Based on the results of the user testing, we made several modifications to the design of our project:

  • Feedback: currently, the audio interface happens at the front, while the tangible interface happens at the back. This creates a discrepancy between the two interactions, and we need to find a way to bridge them.
  • Adjustment: to make the shift of attention smoother for users, we decided to push the cage backwards (closer to the projection plane), so that the two interfaces are physically aligned in the same place.


  • Feedback: the audio interface of the first step gives a command, while the one for the second step gives a suggestion.
  • Adjustment: since the audio interface is intentionally kept open for more complex/interesting audio inputs in the future, we decided to keep it, and we adjusted the audio instructions of the second step to be commands as well.


  • Feedback: since the projected tweets are green and red, and they’re projected on the stairs (which look rectangular), people confused them with the “green & red boxes” where the actual interaction should happen.
  • Adjustment: we removed the green and red colors from the projected tweets, and kept the colors only for the food containers of the cage. This way, users should be less likely to be confused about what the “green food container” and “red food container” refer to.


We also considered removing the stairs and projecting the tweets directly onto the wall. In this setup, we’ll put the cage on a stand, place 2D bird cutouts or 3D bird models on the wall at both sides of the cage, and project the tweets onto the wall next to each bird. This design removes the length restrictions imposed by the stairs, and it gives the cage more room, making it the prominent element for interaction. We’ll do a round of testing to find out whether this new design works better than the old one.