PCOMP and ICM Final – Week 14

Based on the results of user testing, we made several changes to our project's design:

  • Feedback: currently, the audio interaction takes place at the front of the installation, while the tangible interaction takes place at the back. This split divides the user's attention, and we need to find a way to bridge the two.
  • Adjustment: to make the shift of attention smoother for users, we decided to push the cage backwards (closer to the projection plane), so that the two interfaces are physically aligned in the same place.


  • Feedback: the audio interface of the first step gives a command, while the one for the second step only gives a suggestion, which is inconsistent.
  • Adjustment: since the audio interface is intentionally kept open for more complex/interesting audio inputs in the future, we decided to keep it, and we adjusted the audio instructions of the second step to be commands as well.


  • Feedback: since the projected tweets are green and red, and they're projected onto the stairs (which look rectangular), people confused them with the actual green and red boxes where the interaction should happen.
  • Adjustment: we removed the green and red colors from the projected tweets and kept the colors only on the food containers of the cage. This way, users should be less likely to be confused about what the “green food container” and “red food container” refer to.


We also considered removing the stairs and projecting the tweets directly onto the wall. In this setup, we'll put the cage on a stand, place 2D bird cutouts or 3D bird models on the wall at both sides of the cage, and project the tweets onto the wall next to each bird. This design removes the length restrictions imposed by the stairs, and it gives more room to the cage, making it the prominent element for interaction. We'll do a round of testing to find out whether this new design works better than the old one.

ICM Week 5: Arrays

This week I made a motion-tracking Match Man that can track your pose in front of a camera and turn it into a “stop-motion style” animation. This should be an ideal tool for any Kung Fu practitioner who wants to learn the Sonic Punch from the master but is too shy to ask in person. 😀

Here is what it feels like in Super Slow Motion mode:

Here’re the movements:

And here’s the guy who slowly did the Sonic Punch…

This project is built on top of a machine learning library called ml5.js. It is a super friendly JavaScript library that empowers anyone with even just beginner-level coding skills to take advantage of the mighty AI and do all kinds of stuff, from image classification to pitch detection. In this project, I used a model called PoseNet that can track the movement of a human body in front of an ordinary webcam.
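For readers curious how this fits together, here is a minimal sketch of the idea: PoseNet reports keypoints for each video frame, and storing those snapshots in an array is what makes the stop-motion replay possible. This assumes p5.js global mode and ml5's PoseNet API (`ml5.poseNet` plus the `'pose'` event); the `recordFrame` helper is a hypothetical illustration, not the project's actual code.

```javascript
let video;
let poseNet;
let frames = [];        // array of recorded pose snapshots
const MAX_FRAMES = 60;  // cap the buffer so old frames fall away

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  // Load the PoseNet model and listen for pose estimates.
  poseNet = ml5.poseNet(video, () => console.log('model ready'));
  poseNet.on('pose', (results) => {
    if (results.length > 0) {
      recordFrame(frames, results[0].pose.keypoints, MAX_FRAMES);
    }
  });
}

// Push a snapshot of keypoint positions into the frame array,
// dropping the oldest frame once the buffer is full.
function recordFrame(frames, keypoints, maxFrames) {
  frames.push(keypoints.map((k) => ({ x: k.position.x, y: k.position.y })));
  if (frames.length > maxFrames) frames.shift();
  return frames;
}

function draw() {
  image(video, 0, 0);
  // Replay every 10th stored frame to get the stop-motion look.
  for (let i = 0; i < frames.length; i += 10) {
    for (const point of frames[i]) {
      circle(point.x, point.y, 8);
    }
  }
}
```

Because the buffer only keeps every recent pose and the drawing loop skips frames, the replayed figure lags and jumps slightly behind the live video, which is what produces the stop-motion feel.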

I also collaborated with Ellie Lin to create a Bread Man that turns you into a piece of moving bread. Check it out and have fun!