Well, this seems like a big topic at first glance, especially when computational power is ubiquitous nowadays and I am so easily attracted to new things. Two things, however, rang a bell immediately, and they’re in fact part of the driving forces that brought me to ITP.
Being a drummer in West Africa is not easy. Unlike most modern performing art forms, where a dancer choreographs to the music, the West African style works somewhat in reverse – the dancer improvises the movements, and the drummer watches carefully and interprets the grooves from those movements.
As someone who specializes in this kind of drumming – a djembe player, to be more specific – but isn’t equally adept at dance movements, I wish there were a sort of intelligence available that could help me unlock the dancers’ secrets.
And this is where computation comes into play. With Artificial Intelligence, we should be able to analyze the correspondence between the visual patterns in classic dance video footage and the underlying musical rhythms performed in those videos by the most skillful and experienced drummers. Perhaps AI could even do the drumming itself, automatically producing the rhythms upon observing human movements, just like a master drummer.
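To make the idea a bit more concrete: one crude starting point (long before any real AI) is to measure how much motion happens in each video frame and look for bursts, which could then be lined up against the drum onsets in the audio. The sketch below is a toy illustration of that first step only – the frame data is synthetic, the function names are my own, and a real system would need proper pose estimation and audio onset detection.

```python
import numpy as np

def motion_energy(frames):
    """Per-frame motion energy: mean absolute pixel difference
    between consecutive frames (a crude proxy for movement)."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

def movement_peaks(energy, threshold=None):
    """Indices where motion energy is a local maximum above a
    threshold -- candidate moments to match against drum hits."""
    if threshold is None:
        threshold = energy.mean() + energy.std()
    return [i for i in range(1, len(energy) - 1)
            if energy[i] > threshold
            and energy[i] >= energy[i - 1]
            and energy[i] >= energy[i + 1]]

# Synthetic "dance video": 60 still 32x32 frames with
# bursts of movement at frames 10, 25, and 40.
rng = np.random.default_rng(0)
frames = np.zeros((60, 32, 32))
for t in (10, 25, 40):
    frames[t] = rng.random((32, 32))

energy = motion_energy(frames)
print(movement_peaks(energy))  # peaks cluster near the movement bursts
```

Matching those peak times against a drummer’s recorded hits – and learning which kinds of movement map to which strokes – is where the interesting (and much harder) part would begin.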
Performing the Space
(credit: church stage design ideas)
Another interest of mine is to figure out how the 3D spaces we’re in during live performances can be transformed into part of the performing experience.
To this day, artists still perform primarily on a stage – that is, a typical four-piece band plays music within a 20′ x 12′ box. Theaters offer bigger stages for artists to sing, dance, and occasionally interact with the people in the front rows or aisle seats. But the biggest proportion of that space, the audience’s space, is mostly unused (except for offering a place to sit). Immersive theater offers a full experience, but that space is tailored, somewhat fixed, and takes $$$ to build.
With abundant computational power, the ability to project augmented layers of information onto the actual environment, and with IoT enabling objects around us to know who we are and what we’re doing, it is possible that the aforementioned untapped space can be turned into part of the performance as well. Bearing no physical form, virtual objects and structures can exist in any part of the performing space. What would a performance that takes advantage of this look like? I’m eager to find out.