Sketch 2

This post is a summary of and reflection on the concept, process, performance, and lessons learned for Sketch 2. We were asked to produce a 2-minute performance that highlights the technique we used. The intention, and the result, is to develop a “sketch” performance and technology that “gets the idea across”. In the end, the performance went rather poorly, for a number of reasons that might have been avoided if we had had more time to iterate on the concept, technology, and performance.

Formative sketch from Catherine.


Concept

The concept for this sketch is Alice in Wonderland, explored through the technique of forced perspective. We refer to the book by Lewis Carroll and the 1951 Disney cartoon, focusing on the part where Alice finds and falls into the rabbit hole. The idea, which came from Catherine, who is generally interested in illusion, is to create an experience centered on forced perspective. We accomplish the illusion using interactive media and projection mapping.

There are two phases in the concept: the finding phase and the falling phase. In the finding phase, we emphasize the forced perspective of a forest, where Alice appears smaller when closer to the audience and larger when farther away. Next, the falling phase shows Alice falling as a variety of Alice in Wonderland objects float around her. In the concept, the items fall faster or slower depending on how close the performer is to the audience.

Our concept reuses technology from Sketch 1: Processing for visualization, a Kinect for sensing the performer’s position, and a projector. New technology and systems include the hanging canvases we constructed for projection mapping with MadMapper, and a plugin that uses the Syphon protocol to transfer what we render in Processing to MadMapper.


Process

Our process, given the short time to work on this sketch, was to discuss the concept, split up tasks, and get to work. We developed and iterated on the concept, then worked in parallel as much as possible.

We split up the roles among the team. Catherine constructed the hanging surfaces for projection mapping, used MadMapper to calibrate them, and led hanging them, which we did as a team. Catherine also created all of the image assets for the Processing code. I was responsible for the Processing code for the first phase of the concept, which I explain in more detail later. FuHao worked on the Phase 2 falling visualization in Processing. Megan found and negotiated the music, choreographed a dance, picked a costume, and performed the piece.

The color theme is reminiscent of the old 50’s Alice in Wonderland cartoon, centering on the Cheshire Cat. The piece opens with a less saturated, deliberately drab real-world color scheme before switching over, which emphasizes the transition to a new and colorful world.

Cheshire Cat

 

Color Scheme

For the finding phase, we use trees of variable heights to create the forced perspective illusion. After Catherine generated some initial trees, I wrote a generative algorithm that places the trees in a kind of grid. Here’s a top-down view:

Top down of camera and trees that creates parallax effect.

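To give a flavor of the placement logic, here is a simplified sketch of the grid generation (this is illustrative, not our exact code; the spacing and jitter values are made up):

    // Simplified Processing sketch of the grid placement: each row sits
    // deeper in the scene, and positions are jittered so the forest
    // does not look mechanical.
    ArrayList<PVector> treePositions = new ArrayList<PVector>();

    void placeTrees(int rows, int cols, float spacing) {
      for (int row = 0; row < rows; row++) {
        for (int col = 0; col < cols; col++) {
          float x = col * spacing + random(-0.3 * spacing, 0.3 * spacing);
          float z = row * spacing + random(-0.2 * spacing, 0.2 * spacing);
          treePositions.add(new PVector(x, 0, z)); // z = distance from the camera
        }
      }
    }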

 

Even though the trees are two-dimensional pictures, the way they are drawn makes them appear three-dimensional. To accentuate this feeling of depth, I used a parallax technique I first saw in old Super Nintendo games.

The parallax technique is an extension of artistic principles for depicting distance. In landscapes and other types of art, artists draw faraway objects closer together and with less detail. From the perspective of a forward-facing observer, the closer an object is to the viewer, the faster it seems to move. So trees that are farther away move more slowly than trees that are close.
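
As a rough illustration of the technique (not our production code; drawTreeLayer() is a hypothetical helper and the depth values are made up), each layer’s horizontal offset is divided by its depth, so distant layers barely move:

    // Parallax: near layers scroll fast, far layers scroll slowly.
    float[] layerDepth = { 1, 2, 4, 8 }; // 1 = nearest layer
    float cameraX = 0;

    void draw() {
      background(255);
      cameraX += 2; // the camera pans right each frame
      for (int i = layerDepth.length - 1; i >= 0; i--) { // far layers first
        float offset = -cameraX / layerDepth[i]; // divide by depth
        drawTreeLayer(i, offset); // hypothetical helper that draws one layer
      }
    }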

One technical issue I ran into while implementing this visualization was that some of the trees seemed to have black boxes around them. This was a simple Z-buffer issue. Because I use low-level drawing mechanisms from within the Processing framework, I have to make sure the objects are drawn in order from farthest from the camera to closest. Processing renders the pictures exactly as I tell it to; it is not a 3D engine meant to handle these issues for me. FuHao had similar problems, which I pointed out to him and he then fixed.
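
The fix is the classic painter’s algorithm: sort by depth, then draw back to front. A minimal version, assuming a Tree class with a z field and a display() method, looks something like this:

    // Sort farthest-first so near trees are drawn over far ones.
    Collections.sort(trees, new Comparator<Tree>() {
      public int compare(Tree a, Tree b) {
        return Float.compare(b.z, a.z); // larger z = farther = drawn first
      }
    });
    for (Tree t : trees) {
      t.display();
    }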

The original idea for this scene was that the Kinect would sense the depth of the dancer. However, as we came closer to the deadline, we realized that because the dance is choreographed, we could slowly change the size of the trees over time to get the desired effect with less technical complexity.
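
Since the timing is known in advance, the scaling can be driven by the clock instead of a sensor; something as simple as the following would do (the duration and scale range here are made up):

    // Scripted approach: trees grow over the first minute of the piece,
    // so Alice appears to shrink, no Kinect required.
    float t = constrain(millis() / 60000.0, 0, 1); // 0 to 1 over 60 seconds
    float treeScale = lerp(0.5, 2.0, t); // applied when each tree is drawn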

For the falling scene, phase two, objects move up the screen with a random rotation. The objects are again visual components drawn by Catherine. FuHao also created some cat eyes that randomly blink as they fall. All of the falling objects allude to the story of Alice in Wonderland: clocks with randomly placed hands, the Cheshire Cat, bottles of potion, and more.

The idea is that the Kinect would sense the depth (the distance from the audience to the dancer) and use it to modify the speed at which the performer appears to fall.
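
A condensed sketch of that behavior (the class layout is my own simplification, and performerDepth and the mapping range are assumptions, not the final code):

    // Falling scene: objects drift upward with a random spin, so the
    // performer appears to fall past them.
    class FallingObject {
      PImage img;
      float x, y, angle, spin;

      void update(float speed) {
        y -= speed; // objects move up = Alice falls down
        angle += spin;
        if (y < -img.height) {
          y = height + img.height; // recycle once it leaves the screen
        }
      }

      void display() {
        pushMatrix();
        translate(x, y);
        rotate(angle);
        imageMode(CENTER);
        image(img, 0, 0);
        popMatrix();
      }
    }

    // Hypothetical mapping from performer depth (in millimeters) to speed:
    // closer to the audience = faster falling.
    float fallSpeed = map(performerDepth, 1000, 4000, 8, 2);
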
Catherine constructed two hanging surfaces from sheets of a white polymer material. The wings were cut at angles so that the forced perspective effect would be stronger. Once we had one wing set up and hanging, we tested the projection mapping setup. The scale of the wings created a really immersive feeling for me. Later, once both wings were up, Catherine was able to create a kind of box, using bits of sky from the trees for the top and a spotlight made from a white circle.

Getting Processing to work with MadMapper was straightforward: we used a Syphon plugin. However, this adds an extra programming step, where one draws to an image in memory rather than using the usual Processing routines.
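
In practice the setup looks roughly like the Syphon library’s own Processing examples: render the scene into an offscreen PGraphics, then hand each frame to a Syphon server (the sizes are placeholders, and names may differ between library versions):

    import codeanticode.syphon.*;

    PGraphics canvas;
    SyphonServer server;

    void setup() {
      size(1280, 720, P3D);
      canvas = createGraphics(1280, 720, P3D);
      server = new SyphonServer(this, "Processing Syphon");
    }

    void draw() {
      canvas.beginDraw(); // draw into the offscreen buffer
      canvas.background(0);
      // ... scene drawing goes here instead of the usual direct calls ...
      canvas.endDraw();
      image(canvas, 0, 0); // local preview
      server.sendImage(canvas); // MadMapper picks this up as a Syphon source
    }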

Performance

As I briefly pointed out before, I was happy with the way the visuals turned out in terms of immersion, scale, and the ad hoc spotlight. The Friday before the performance, everything seemed like it was coming together. We finally had a song picked out, the visual components seemed close to correct, and the wings were up and looking good. The Mac mini seemed slow, but it was working. If you look at the commit log on GitHub, you can see the number of commits on Sunday and Monday is higher than normal.

Through a lapse in communication, combined with my own schedule that Sunday and Monday being very busy, the last-minute integrations didn’t happen. This resulted, lamentably, in only a handful of the graphics Catherine created being included. She had contributed more trees in SVG format, but the tree code expected PNG files (which are rasterized). FuHao thought that the performance was scheduled for Wednesday instead of Monday, and the code he had was not compatible with Syphon. So I integrated what was there.
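
For context, in Processing the two formats go through different types and draw calls, so code written around raster images cannot take SVGs without changes:

    PImage treePng = loadImage("tree.png"); // raster: drawn with image()
    PShape treeSvg = loadShape("tree.svg"); // vector: drawn with shape()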

To make things worse, the Mac mini we used for the performance ran even more slowly than when we tested it earlier. Because of the increased lag, the timing of the music was off, which, understandably, disoriented Megan. The visible artifacts caused by the lag made everything look jumpy and just bad.

It went pretty poorly compared to how I envisioned it. Anything that could have gone wrong did. Here are a couple of photos by Glen Vigus in Viz from the performance and a video of the visualization from Processing.

 


http://www.youtube.com/watch?v=If5WkSgyyUg

 

Lessons Learned

Having little time to prepare a performance can create a disaster; a corollary is that the more time one has to prepare, the better the performance will be. Perhaps these short-term sketches amplify what might go wrong. Get it right first and you don’t have to change it later: we should have agreed on the technical specifications and tested on the actual hardware sooner. Having time for the performer to rehearse with the complete system would also have helped.

Short time frames for projects create a pressure that accentuates strengths and weaknesses. We produced very interesting artifacts in a short amount of time, but the poor integration didn’t show that work in a very good light.  But, perhaps, that is what a sketch is.

 

 
