ICM – Week 1: How Computation Applies to..

Well, this seems like a big topic at first glance, especially since computational power is ubiquitous nowadays and I am so easily attracted to new things. Two things, however, immediately rang a bell, and they're in fact part of the driving forces that brought me to ITP.

 

AI drumming

(credit: http://www.melbournedjembe.com.au)

Being a drummer in West Africa is not easy. Unlike most modern performing art forms, where a dancer choreographs to the music, the West African style works somewhat in reverse – a dancer improvises the movements, and a drummer watches carefully and interprets the grooves from those movements.

As someone who specializes in this kind of drumming – a djembe player, to be more specific – but isn't equally adept at dance movements, I wish there were some kind of intelligence available that could help me unlock the dancers' secrets.

And this is where computation comes into play. With Artificial Intelligence, we should be able to analyze the correspondence between the visual patterns in classic dance video footage and the underlying musical rhythms performed in those videos by the most skillful and experienced drummers. Perhaps AI could even do the drumming itself, automatically producing the rhythms upon observing human movements, just like a master drummer.

 

Performing the Space

(credit: church stage design ideas)

Another interest of mine is to figure out how the 3D spaces we’re in during live performances can be transformed into part of the performing experience.

To this day, artists still perform primarily on a stage – that is, a typical four-piece band plays music within a 20′ x 12′ box. Theaters offer bigger stages for artists to sing, dance, and occasionally interact with the people sitting in the front or aisle seats. But the biggest proportion of that space, the audience's space, is mostly unused (except for offering a place to sit). Immersive theater offers a fuller experience, but that space is tailored, somewhat fixed, and takes $$$ to build.

With abundant computational power, the ability to project augmented layers of information onto the actual environment, and IoT enabling objects around us to know who we are and what we're doing, it is possible that the aforementioned untapped space can be turned into part of the performance as well. Bearing no physical form, virtual objects and structures can exist in any part of the performing space. What will a performance that takes advantage of this look like? I'm eager to find out.

ICM – Week 1: Self Portrait

Introduction

The first week's assignment for ICM is to code a self portrait with P5.js. Since drawing by calibrating the position of each pixel on the canvas is somewhat painful for me, I decided to look for an alternative that combines the comfort and ease of hand sketching with the ability to manipulate visual elements dynamically through code.

Inspiration

(photo credit: kiritani 88)

A couple of weeks ago, I came across an artwork by Hirohiko Araki that "Jojo-colored" a wall in Shinjuku, Tokyo. (Jojo is the main character of JJBA, JoJo's Bizarre Adventure.)

I was deeply impressed, and it gave me the idea of creating line-based figure images by re-organizing duplicated elements. So, if I have a simple element, and an image that shows me where to put copies of that element, I should be able to code a human portrait by mapping the elements onto the canvas according to a portrait image! Here's how I started:

 

Preparation of Portrait Image

In order to provide an image from which P5 (and I) can easily interpret the location of each element, I need to create something like a black-and-white version of the portrait image – black indicating that something should be drawn at this pixel, and white indicating that nothing should. Here's how I turned a portrait of me (Thanks Kiki!) into this "duo-color drawing map" using image thresholding.
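
In P5, the thresholding step can be sketched roughly like this – a minimal sketch, assuming the source photo is saved as "portrait.jpg" (a placeholder name) and using an arbitrary brightness cutoff of 128:

```javascript
let img;

function preload() {
  // "portrait.jpg" is a placeholder for the original photo
  img = loadImage("portrait.jpg");
}

function setup() {
  createCanvas(img.width, img.height);
  img.loadPixels();
  // Snap every pixel to pure black or pure white based on its brightness
  for (let i = 0; i < img.pixels.length; i += 4) {
    const bright = (img.pixels[i] + img.pixels[i + 1] + img.pixels[i + 2]) / 3;
    const v = bright < 128 ? 0 : 255; // 128 is an arbitrary cutoff
    img.pixels[i] = v;     // red
    img.pixels[i + 1] = v; // green
    img.pixels[i + 2] = v; // blue
  }
  img.updatePixels();
  image(img, 0, 0);
}
```

P5's built-in filter(THRESHOLD) does essentially the same thing in one call; writing the loop out just makes the black/white decision explicit.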

 

Draw the Element

To make this a “self” portrait, I looked for something from myself – the abbreviation of my first name, YG – and turned it into a visual element using P5 primitive shapes.
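
The exact strokes are a design choice, so the sketch below is only an illustration of the idea: a rough "YG" drawn with lines and an arc inside a u-by-u box whose top-left corner is at (x, y). The proportions are my own guess, not the actual element, and the caller is assumed to have set stroke() and noFill():

```javascript
// Hypothetical "YG" element drawn with P5 primitives in a u-by-u box at (x, y)
function drawYG(x, y, u) {
  // "Y": two arms meeting at a vertical stem
  line(x + 0.05 * u, y + 0.1 * u, x + 0.2 * u, y + 0.5 * u);
  line(x + 0.35 * u, y + 0.1 * u, x + 0.2 * u, y + 0.5 * u);
  line(x + 0.2 * u, y + 0.5 * u, x + 0.2 * u, y + 0.9 * u);
  // "G": an almost-closed circle with a bar back toward its center
  arc(x + 0.7 * u, y + 0.5 * u, 0.5 * u, 0.5 * u, 0, TWO_PI - QUARTER_PI);
  line(x + 0.95 * u, y + 0.5 * u, x + 0.7 * u, y + 0.5 * u);
}
```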

 

Load the Image into P5 and Set Up Units

Now that I have the image map and the basic element, I can put them together! With the loadImage() API, I loaded the image into P5, and used loadPixels() to get its pixel array. What's left is simply to read each pixel: if it is black, an element should be drawn at the same position on a new canvas; if it is white, nothing needs to be drawn. One more thing: on the new canvas, a single pixel is too small to hold my element, so I need to scale things up a bit with a "unit" setting – instead of squeezing the element into one pixel, I draw it into a "unit * unit" box and treat each box as one pixel of the source image.
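
Putting the pieces together might look like the following – a minimal sketch, assuming the thresholded map is saved as "map.png" (a placeholder name) and reusing the hypothetical drawYG() from above:

```javascript
let mapImg;
const unit = 10; // each source pixel becomes a unit-by-unit box on the canvas

function preload() {
  mapImg = loadImage("map.png"); // placeholder name for the drawing map
}

function setup() {
  // The canvas is unit times larger than the map, so the map should be small
  createCanvas(mapImg.width * unit, mapImg.height * unit);
  background(255);
  stroke(0);
  noFill();
  mapImg.loadPixels();
  for (let py = 0; py < mapImg.height; py++) {
    for (let px = 0; px < mapImg.width; px++) {
      // pixels[] stores four values (r, g, b, a) per pixel, row by row
      const idx = 4 * (py * mapImg.width + px);
      // black in the map means "draw an element here"
      if (mapImg.pixels[idx] < 128) {
        drawYG(px * unit, py * unit, unit);
      }
    }
  }
}
```

Checking only the red channel is enough here because the map is strictly black or white after thresholding.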

 

Final Work and Reflections..

After some final tweaks to the stroke weight and color, this is how my self portrait looks. I know it's still far from Hirohiko's Jojo Wall, but given the limited time frame, I'm happy with the result. And most importantly, the creation process was fun! More improvements could be made in the choice of element (Hirohiko's choice was the "Do" character in Japanese, which is sort of triangular and good for building up images), the arrangement of elements (taking advantage of the linear shape and orientation of the "Do" character to create the image), and, of course, the visual design of the entire artwork.

I believe AI would also be capable of rendering images in similar line-based styles. I'll explore this further in the upcoming days at ITP.