Textile Interface – Fall 2019

This post documents Lab 1, Lab 2, and the final project of the Fall 2019 Textile Interface class.

Lab 1 – 3 kinds of digital switches

1. Bridge Switch

2. Other Kind of Fabric Switch

3. Switch with a Different Material

To test an idea for my umbrella NIME piece, I sewed conductive thread onto a plastic sheet to see whether water could be used as a random switch. It turned out that soapy water works as a conductive material. However, since the thread created holes that leaked, the water eventually caused a short circuit. So although the switch works, it might not be a very good idea for the final design of the umbrella NIME piece.
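
For reference, here is a minimal sketch of how such a water-bridged switch can be read on an Arduino. The wiring is an assumption for illustration: one end of the conductive thread on digital pin 2 with the internal pull-up enabled, the other end on ground, so the pin reads LOW whenever water bridges the gap. Since water conducts relatively poorly, a weaker external pull-up resistor may be needed in practice.

```cpp
// Minimal sketch for reading a water-bridged fabric switch.
// Assumed wiring: one conductive thread end on digital pin 2,
// the other on GND. Soapy water bridging the threads pulls the pin LOW.

const int SWITCH_PIN = 2;

void setup() {
  pinMode(SWITCH_PIN, INPUT_PULLUP);  // keeps the pin HIGH while dry
  Serial.begin(9600);
}

void loop() {
  // LOW means something conductive (here, soapy water) bridges the threads
  bool wet = (digitalRead(SWITCH_PIN) == LOW);
  Serial.println(wet ? "switch closed (wet)" : "switch open (dry)");
  delay(100);
}
```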

Lab 2 – Velostat sensor

Final Project

For my final project, I want to make a MIDI controller for one of the instruments that will be used in my NIME piece.

Based on the previous experiments, using conductive thread patterns on the umbrella surface as digital switches did not seem very feasible. So I instead made a textile interface around the handle area that uses Velostat to control the filters of an electronic instrument.

As the performer squeezes the handle, it changes the filter cutoff of the corresponding MIDI instrument in Ableton through a wirelessly connected Arduino Nano.
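
Here is a minimal sketch of that squeeze-to-cutoff pipeline on the Nano. The details are assumptions for illustration: the Velostat sits in a voltage divider with a 10k resistor on pin A0, and the serial bytes get converted to MIDI on the laptop side (for example by a serial-to-MIDI bridge) before reaching Ableton. CC 74 is the controller number conventionally used for filter cutoff.

```cpp
// Read a Velostat pressure sensor (assumed: voltage divider with a 10k
// resistor on A0) and send the squeeze amount as a MIDI Control Change.
// CC 74 is conventionally mapped to filter cutoff; map it in Ableton.

const int SENSOR_PIN = A0;
const byte CC_STATUS = 0xB0;   // Control Change on MIDI channel 1
const byte CUTOFF_CC = 74;

int lastSent = -1;

void setup() {
  Serial.begin(31250);         // standard MIDI baud rate
}

void loop() {
  int raw = analogRead(SENSOR_PIN);      // 0..1023, rises with pressure
  int value = map(raw, 0, 1023, 0, 127); // scale to the MIDI CC range

  // Only send when the value changes, so the wireless link isn't flooded
  if (value != lastSent) {
    Serial.write(CC_STATUS);
    Serial.write(CUTOFF_CC);
    Serial.write((byte)value);
    lastSent = value;
  }
  delay(10);
}
```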

P.S. Thank you so much Kate for this wonderful class!

PCOMP and ICM Final – Week 14

Based on the results of the user testing, we made several modifications to the design of our project:

  • Feedback: currently, the audio interface happens at the front, while the tangible interface happens at the back. This creates a discrepancy between the two interactions, and we need to find a way to bridge them.
  • Adjustment: to make the users’ shift of attention smoother, we decided to push the cage backwards (closer to the projection plane), so that the two interfaces are physically aligned in the same place.

  • Feedback: the audio interface of the first step gives a command, while the one for the second step gives a suggestion.
  • Adjustment: since the audio interface is intentionally kept open for more complex/interesting audio inputs in the future, we decided to keep it, and we adjusted the audio instructions of the second step to be commands as well.

  • Feedback: since the projected tweets are in green and red and are projected on the stairs (which look rectangular), people confused them with the “green & red boxes” where the actual interaction should happen.
  • Adjustment: we removed the green and red colors from the projected tweets and kept the colors only for the food containers on the cage. This way, users should be less likely to be confused about what the “green food container” and “red food container” refer to.

We also considered removing the stairs and projecting the tweets directly onto the wall. In this setup, we’ll put the cage on a stand, place 2D bird cutouts or 3D bird models on the wall on both sides of the cage, and project the tweets onto the wall next to each bird. This design removes the length restrictions imposed by the stairs, and it gives the cage more room, making it the prominent element for interaction. We’ll do a round of testing to find out whether this new design works better than the old one.

PCOMP and ICM Final – Version 3

After collecting feedback on the second version of the bird cage device, we discovered the following issues with its design:

  1. The interaction between the users and the project is flat, as the device only takes in one user input (assigning topics to the cages) and responds only once, regardless of how fancy the response looks.
  2. The previous design draws a metaphor between tweets from Twitter and the shape of a bird cage, which makes some sense. But this is mostly a connection based on the visual aspects of the two, rather than the tangible aspects – in other words, how a user actually interacts with the physical form of a bird cage, the space the cage occupies, the bird inside the cage, etc., is not considered thoroughly enough.

After some discussion with Nick over the weekend, we decided to abandon the idea of making a physical representation of the Twitter API by weighing how many people tweet about two given topics, and to focus instead on one of the most important actions for a person who owns a bird cage – putting a bird into the cage. Based on this, we made a cardboard prototype to simulate a bird-luring process. By throwing in hashtags of a topic on Twitter, we lure an invisible bird that represents the Twitter data on that topic into the bird cage.

Following this design and quick feedback from Tom, we further discussed which possible interactions could both take advantage of people’s bird-keeping behavior and, at the same time, connect it to intellectual activities that cannot be achieved by merely keeping a physical bird from nature. I realized that since the bird is a representation of Twitter data, it is also a representation of the public’s opinion on a topic.

In the real world, by feeding a bird some food and observing how it reacts, we can tell whether the bird likes it or not. In the same sense, if we feed the public-data bird a particular kind of thinking, then by observing the bird’s reaction, we can get a physical glimpse into the public’s opinion of, or attitude towards, that thinking.

Further, I realized that this provides an opportunity for us to measure, and rethink, how our perceptions of social phenomena differ from those of the larger public. Living in a physical environment, the opinions we hold are inevitably influenced by the people we engage with on a daily basis. For instance, the result of the recent midterm election in New York turned out to be quite different from many people’s predictions at NYU. For someone who lives in a liberal community inside a liberal state, it is not uncommon that his or her gauging of the public’s attitude is somewhat biased. This provides a good opportunity to reveal that bias through the act of feeding a public-data bird.

So for version 3, we’re planning to create an invisible bird that allows users to guess, and reflect on their guesses about, the public’s attitudes towards controversial topics on social media, as shown below:

PComp and ICM Final – Version 2

After collecting feedback from the lovely ITP community, I realized that the previous design about aligning AR with the physical world is a bit far-fetched, in the following respects:

  1. The starting point of the project is not self-explanatory enough. As urban people nowadays rarely play with sand, the “grab-wand-and-draw-pattern” action will need to be explicitly instructed, or printed on the project itself. This makes the project somewhat unintuitive.
  2. The physical feedback of actions occurring in the AR world is missing, since the battle is visible only in AR. The audience wouldn’t notice any changes in the physical world until the monster is defeated in AR and transforms into a physical smiling face. This gives a feeling of “using AR for AR’s sake”; the audience will likely see it merely as an enhanced AR game.
  3. Since the audience’s commands are derived from continuous drawings in the sand, they have to clean up the previous pattern before drawing a new one. The cleaning process can be a bit cumbersome, and the project does not provide feedback for a “clean and ready” status. Also, the timing of cleaning and drawing can be tricky, since everyone may do it differently.

Given the above comments, I realized that emphasizing the “interaction between moving AR objects and physical objects” is perhaps not a very good idea, or at least one that requires other design elements to make the two work together really well.

After a discussion with my ITP classmate Nick Tanic, I realized that instead of focusing on the “movement relationship” between the AR and physical worlds, focusing on their difference in “visibility” might be a better idea. Physical objects are present all the time, while AR objects still require an interface or medium, like an AR-enabled phone, to actually be seen. Since Nick had an idea about using bird cages to visualize tweets, it rang a bell for me: the cages could become a wonderful stage for playing with this difference in visibility. So we decided to collaborate on our finals, and here comes the design: an AR-enabled bird cage system that visualizes the quantitative differences between any two given topics/items/phenomena and reveals trends in people’s topic selections.

Major Functional Elements

The concept of this installation originates from an idea about making a tool to measure social phenomena. It will consist of four major parts:

  • a chandelier-like hanging structure
  • two empty bird cages that can move up and down
  • an application that is connected to the Twitter API and, behind the scenes, instructs the installation to move one bird cage up and the other down
  • an application that reveals what’s inside the two bird cages using AR

How it works

The bird cage installation is made to measure social trends on Twitter. The audience can come up with two hashtags they want to compare (e.g. pizza vs. burger, or Trump vs. Hillary) and assign these hashtags to different cages. The up-and-down movements of the two bird cages will be synchronized with the number of people tweeting about each topic. To see what’s being compared visually, the audience will pull out their phones and enable AR to see what’s inside the birdcages.
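
The balancing logic itself is simple. Below is a sketch of just that mapping in plain C++, where the names and numbers are made up for illustration (the tweet counts are assumed to come from the Twitter API elsewhere, and the 15 cm travel range is arbitrary): the cages behave like a scale, with the heavier topic’s cage sinking proportionally.

```cpp
// Sketch of the cage-balancing logic only: given tweet counts for the two
// hashtags (fetched elsewhere via the Twitter API), compute how far each
// cage should sit from its midpoint, like a scale.

#include <algorithm>
#include <cstdio>

const double MAX_TRAVEL_CM = 15.0;  // assumed: each cage can move +/- 15 cm

// Returns the vertical offset (cm) for cage A; cage B gets the negation.
double cageOffset(long tweetsA, long tweetsB) {
    long total = tweetsA + tweetsB;
    if (total == 0) return 0.0;  // no data: keep the cages level
    // share in [-1, 1]: positive means topic A is heavier (its cage sinks)
    double share = (double)(tweetsA - tweetsB) / (double)total;
    return std::clamp(share, -1.0, 1.0) * MAX_TRAVEL_CM;
}

int main() {
    // e.g. 7200 tweets about #pizza vs. 4800 about #burger
    double offsetA = cageOffset(7200, 4800);
    std::printf("cage A: %+.1f cm, cage B: %+.1f cm\n", offsetA, -offsetA);
    // prints: cage A: +3.0 cm, cage B: -3.0 cm
    return 0;
}
```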

Since the audience performs two tasks – topic assignment and AR observation – it would be boring, or at least less surprising, to perform them in that order, since the assignment process somewhat gives away the AR contents. On the other hand, it is interesting to reverse the sequence and interpret it in a larger audience context. A possible workflow:

  1. We pick two topics and assign them to the bird cages;
  2. The cages move accordingly;
  3. Audience A comes to the bird cages and sees two imbalanced cages;
  4. Audience A pulls out his/her phone and discovers what is really being compared in AR;
  5. Audience A assigns two new topics and leaves;
  6. The cages move accordingly;
  7. Audience B comes to the bird cages and sees two imbalanced cages;
  8. And so on and so forth.

And this creates an interaction between the installation and the flow of the audience over time.

PComp Final: Sand God – Ideation

In my PComp final project, I wish to explore the following concepts:

  • How would AR impact the physical world?
  • How can we combine AR and AI to tell a compelling story?

In response to these two questions, I’m planning to create an interactive installation that allows the audience to defeat a physically present monster by summoning a mask god existing in the AR world. The audience will draw on the sand to summon the god and give it attack/defend commands, applying damage to the monster. When the monster is eventually defeated after multiple successful attacks, the monster in the physical world will turn into a smiling face.

Below is the storyboard of the interactions between the audience and the installation:

1. In the initial state, the audience will be facing an installation placed on a table.

In the front, there will be the summoning stage: a big circular area covered with sand, and a smaller circular area below it. The bigger circle will be decorated with several tiny human figures holding their hands up, signifying that it is a place to summon something. The smaller circle is painted with mythical symbols, signifying that it is the place to trigger the summoning action.

On the right of the circles, there will be a pen-sized stick that looks like a wizard’s wand, attached to the installation. This will be the audience’s tool for drawing patterns in the sand.

At the back of the installation, there will be wood pieces carved into the shape of a monster. Together, they set the stage for a fight between a tribe and the monster.

2. To begin, the audience will draw a pattern in the sand. There will be carved patterns (or maybe instruction texts) on the installation telling the audience which pattern to draw at the beginning. To draw the pattern, the audience will pick up the wand and use its end to draw the pattern in the sand. When the pattern is finished, the audience will hit the smaller circle with the wand to signal a completed pattern.

3. A hidden camera will be used to take a picture of the pattern and analyze it. If it matches the pattern designed to summon a god, the god will show up in the AR application.
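
The analysis method is still open; one simple option would be template matching against a reference drawing of the summoning pattern. Below is a rough sketch of that idea using OpenCV, where the file name and the match threshold are assumptions. Hand-drawn sand patterns vary a lot, so in practice the threshold (or a more shape-tolerant method, like contour matching) would need tuning.

```cpp
// Rough sketch of the pattern check, using OpenCV template matching.
// "summon_pattern.png" and the 0.6 threshold are assumptions.

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cam(0);            // the hidden camera
    cv::Mat frame, gray, result;
    cam >> frame;                       // grab one picture of the sand
    if (frame.empty()) return 1;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

    // Reference drawing of the summoning pattern
    cv::Mat templ = cv::imread("summon_pattern.png", cv::IMREAD_GRAYSCALE);
    if (templ.empty()) return 1;

    // Normalized cross-correlation: 1.0 would be a perfect match
    cv::matchTemplate(gray, templ, result, cv::TM_CCOEFF_NORMED);
    double maxVal;
    cv::minMaxLoc(result, nullptr, &maxVal);

    bool matched = maxVal > 0.6;        // assumed tolerance for hand drawing
    std::cout << (matched ? "summon the god" : "no match") << std::endl;
    return matched ? 0 : 1;
}
```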

4. To continue the fight and defeat the monster, the audience will keep drawing the other patterns provided on the installation.

5. There will be mainly two categories of patterns – one for attack commands, and one for defense commands. Ideally, when a command is correct, the audience will see the god’s corresponding action (attack/defend) in AR. The AR app is also timed to spawn tiny monsters that attack the summoned god.

On the AR app, there will be statistics showing the god’s health (which decreases upon the monster’s attacks) and the monster’s health. The audience will continue until the monster is eventually defeated.

6. When the monster is finally defeated in AR, the monster on the installation will change shape into a smiling face, meaning the battle is over and peace has arrived.

PCOMP Midterm – Part 2

This is the second post about my PCOMP Midterm Project – a ghost in a museum that pops up when anyone walks close to it and then tracks that person’s motion. For the ideation process and how the circuit was made, you can check the first post here.

Room Setup

To figure out how to set up the projection in the Blue Room of the Graduate Musical Theater Writing Program and make it work for the event, we went to the actual site and did a few rounds of testing to find the right position for everything.

A few things worth noting:

  1. When we’re using a room as part of our project, the spatial layout of things in the room matters. For this project, that includes the size of the walls, the position of tables and chairs, the path created for the audience to navigate inside the room, and even the positions of the outlets (so that we can plug in the projector and laptop to make the project work).
  2. The aesthetics of the project should go along with the environment of the room. In this case, since we’re creating for a museum of the 50s, we should make our projection feel like something from that time. For this, we changed the projected image and background several times to make it really fit into the room, and we eventually decided to remove the LEDs from the eye artifact and pair the sensors with a creepy-feeling portrait instead.
  3. To create surprises (since we need to scare people for Halloween), we should project in ways that are different from, and less common than, what people experience in everyday life. For instance, we can alter the projection’s color (use black light to intentionally make the projection much less visible when we don’t want it seen), orientation (project onto people’s sides instead of right in front of them), and shape (map projections onto physical objects in the room that people would not notice at first glance).
  4. Take advantage of sound in the room. It turned out to be much more unexpected and creepy when I played the background sound from the other end of the room via a networked computer, instead of from the laptop right in front of the audience. According to the audience, it really felt like the room was occupied by the ghost.

Final Work

This is how our final project looked on the day of the Halloween event!

---- Projected Ghost Design ----

---- Triggering Artifacts (the portrait) ----

---- Projection on the Wall ----

PCOMP Midterm – Part 1

For the PComp midterm, I designed a haunted-museum artifact for the Halloween event hosted by the Graduate Musical Theater Writing Program at Tisch. This is the first post about the project; you can see the finished project here.

Background

Our initial idea was to create a ghost on the wall that would surprise the museum visitors, and to make the museum gourds talk in response to the visitors’ reactions. After further discussion with Briana, our coordinator from GMTWP, we decided that another team would be responsible for making the talking gourds, while we’d focus on the wall ghost and create other artifacts that can work with it.

Since we’re using projections to create a ghost in the room, the room itself becomes the affordance of our project. It is both fun and mind-torturing to test out different combinations of item positioning to really take advantage of the space in the room. This is very different from what I’ve worked on so far at ITP, where the scope of a project is mostly constrained to a single piece of work sitting on a table.

Circuit Building

To make a projection that pops up unexpectedly, we made a simple circuit that includes a distance sensor, which sends a triggering signal to the laptop when someone moves close enough to it. In parallel with the sensor, we also put four groups of LEDs in series, in order to create an evil blinking-eye artifact that is triggered once the ghost is projected onto the wall.
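
As a sketch of the trigger logic, here is roughly what the Arduino side can look like. It assumes an HC-SR04-style ultrasonic sensor (the exact part isn’t named here), an assumed trigger distance, and a single serial byte that the laptop listens for before projecting the ghost.

```cpp
// Sketch of the trigger circuit's logic, assuming an HC-SR04-style
// ultrasonic distance sensor. When something comes within range, one byte
// is sent to the laptop (which projects the ghost) and the eye LEDs light.

const int TRIG_PIN = 9;
const int ECHO_PIN = 10;
const int LED_PIN = 5;                  // the blinking-eye LED groups
const float TRIGGER_CM = 60.0;          // assumed "close enough" distance

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(LED_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  // Fire a 10 us ultrasonic ping and time the echo
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);

  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // timeout ~5 m round trip
  float distanceCm = duration * 0.034 / 2.0;       // speed of sound

  if (duration > 0 && distanceCm < TRIGGER_CM) {
    Serial.write('G');                  // tell the laptop: show the ghost
    digitalWrite(LED_PIN, HIGH);        // light the eye
  } else {
    digitalWrite(LED_PIN, LOW);
  }
  delay(100);
}
```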

Below are pictures and videos showing the building process of this artifact – the Eye of Cthulhu – from prototype to finish.