Sketch 2 is about using projection mapping techniques to fit the theme of Space Harmony. For this sketch, we are going to show forced perspective and hopefully create a feeling of falling through a rabbit hole.
The theme comes from Lewis Carroll's Alice in Wonderland. We consider this scene from the 1951 cartoon:
From this and discussions with my team and instructors, we decided to:
Have a 30 second exploration phase
Begin falling for the rest
Use floating for the falling dance
Have objects to add texture
If possible, we are going to show the tunnel/hole as more warped as Alice (Megan) “falls” down the rabbit hole.
Learning from previous experience with the Kinect, we are going to have it control the speed of descent, but passively. This will give our dancer as much space as she wants to move around. When she moves into the sensor range, she can adjust the speed of descent, and it will stay where she puts it.
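As a rough sketch of that passive control, assuming a hypothetical sensor depth range and speed scale (none of these names or numbers are from our actual code):

```python
# Sketch of the planned passive speed control. The depth range and
# speed scale below are made-up placeholders, not measured values.
SENSOR_MIN, SENSOR_MAX = 1.0, 3.0  # assumed Kinect depth range in meters

def update_descent_speed(dancer_z, current_speed):
    """Map the dancer's depth to a descent speed only while she is
    inside the sensor range; otherwise keep the last speed."""
    if SENSOR_MIN <= dancer_z <= SENSOR_MAX:
        # Normalize depth to [0, 1] and scale to a speed range.
        t = (dancer_z - SENSOR_MIN) / (SENSOR_MAX - SENSOR_MIN)
        return 0.5 + 2.0 * t  # speeds from 0.5 to 2.5 (arbitrary units)
    return current_speed  # out of range: the speed stays where she put it
```

The point of the `else` branch is the "passive" part: stepping out of range changes nothing, so the dancer is never forced to stay near the sensor.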
This monolithic post reflects on my experience with Sketch 1 for the Interactive Performance and Technology course. I say monolithic because it would have been better to post my progress more as I went along. Perhaps the memoir nature provides better context.
DirctIVY senses a dancer's movements with a Kinect and maps the directness and indirectness of those movements to closed/angular and open/organic visualizations on a projected screen. The idea of directness and indirectness comes from the space dimension of Laban's efforts. Laban uses three dimensions of relationships of movements: time, weight, and space. My favorite diagram of this is from Laban's Effort (Laban, Rudolf, and Lawrence, F. C. (1947). Effort. London: MacDonald and Evans.).
Instead of identifying movements as individual efforts, the concept is to measure and visualize movement at a higher, more abstract level. Identifying dimensions of effort on a scalar level affords greater visual expression than measuring binary representations.
To put this concept into practice, the team and I collaborated to develop, test, and perform what we could within the given time, technical, and contextual constraints. I have organized this post in terms of lessons learned about (1) how ways of thinking affect communication, (2) collaboration, (3) Processing, (4) working with the Kinect and recognition, and (5) reflections on the performance.
(1) Ways of Thinking
Ways of thinking here refers to my own thoughts and internal perspectives as well as those of my teammates.
I didn't understand the word "aesthetics" very well before starting this sketch. It sounds very ethereal compared to the normal diction I hear in the Computer Science discipline. In Computer Science, I literally get paid and feed my family based on my ability to discretize and organize information from the real world into useful computational models. "Aesthetics" seems to reference the artistic perspective that goes into producing a product. Instead of boxing up a set of styles or attributes, it acknowledges the tacit nature of the artist's or designer's knowledge.
My team is diverse and has diverse perspectives, producing diverse ways of communicating. While I am a Computer Science student, I am taking the course as a Visualization student to expose myself to visualization software and techniques. I've already taken Computer Animation and Computer Graphics, so I wanted to learn something new. I like to communicate with demos that show a proof of concept (you can see an example of this a few posts back).
Fuhao works in Dr. Chai’s lab. He used charts to communicate the effectiveness of the direct/indirect recognizer. Megan, the Dance Science student, would show us dances. Catherine drew sketches and iterated on concepts. Alex, the industrial engineer, used examples and code to communicate.
In addition to in-person contact, we used three systems to collaborate: email, Facebook, and GitHub. We also tried to use Google Docs.
Email: Megan used email to send us the music files she worked on. I found email to be useful for sending binary files.
Our Facebook group (which was created the day after the team was assembled) had the most immediate communication responses, but conversations became disorganized quickly. Posts in a Facebook feed are ordered by a combination of recency and user interest. Retrieving discussion threads that were more than a couple of days old required scrolling past the loaded content.
I'm a bit of a github.com evangelist. At the beginning of last Fall, I moved around 70 code repositories of the Interface Ecology Lab from SVN to GitHub. Our team repository is at github.com/rhema/art. GitHub enables issue tracking, which can integrate goals, milestones, issues, code commits, and team communication. For example, in issue #1, Fuhao and I figured out how to integrate Processing with UDP communication from a Kinect application.
The revision history for the sandbox folder on GitHub shows an evolution in my own process for creating visualizations for this concept.
This example shows me creating ten LabanLines at 0.1 increments of directness. This helped me, but I eventually moved on to animating a visualization of just one line. After we collected data on the first Friday after the team was assembled, I asked Catherine to show me a filigree. These can be drawn as spirals.
A little direct.
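For reference, a filigree curl can be approximated with a logarithmic spiral. Here is a minimal Python sketch of the point math (the actual drawing was done in Processing; the constants here are arbitrary, not the values we used):

```python
import math

def spiral_points(turns=3, points_per_turn=40, a=1.0, b=0.15):
    """Points along a logarithmic spiral r = a * e^(b * theta).
    Connecting these points with a curve gives a filigree-like curl.
    All parameter defaults are illustrative placeholders."""
    pts = []
    for i in range(turns * points_per_turn + 1):
        theta = 2 * math.pi * i / points_per_turn
        r = a * math.exp(b * theta)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

Larger `b` makes the curl open up faster; `b` near zero approaches a circle.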
Once I had a basic implementation of this, I took the data we collected as a team, which Fuhao prepared, and put it into a Python simulator. I also added a way to consume the UDP stream in Processing with Java sockets. This let me test how the dancer would be seen by the system. The example below is an early visualization of the dancer.
You can see filigrees, and the circle around the dancer gets bigger and smaller based on the directness (controlled by the keyboard at this point). In the next version, I added a visualization that uses a convex hull of the dancer instead of the awkward circle. I also added leaves that Catherine drew on the tangents of the filigree.
The convex hull felt much more expressive; however, it changed shape quite rapidly and felt distracting. An improved result follows.
It's hard to tell from the picture, but the convex hull has now been reduced to a box at its maximum width and height. Also, the filigrees now fade as they move.
The box that is generated uses curves with random offsets at each point. Each line of the box is made curvy by placing extra points on the line, randomly moving them, and drawing all of the points with Processing's curve(). Here, the line goes from A to B. The straight line represents the path the line would take if it weren't curvy. The yellow lines show the path from the points on the straight line to the spots with added randomness.
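The same logic, sketched in Python (the real version lives in the Processing sketch and draws with curve(); the segment count and jitter amount here are placeholders):

```python
import random

def curvy_line_points(ax, ay, bx, by, segments=8, jitter=6.0, rng=random):
    """Place intermediate points along the straight line from A to B and
    nudge each one randomly; drawing a smooth curve through the result
    gives a wobbly line. The endpoints stay fixed so box corners meet."""
    pts = [(ax, ay)]
    for i in range(1, segments):
        t = i / segments  # fraction of the way from A to B
        x = ax + t * (bx - ax) + rng.uniform(-jitter, jitter)
        y = ay + t * (by - ay) + rng.uniform(-jitter, jitter)
        pts.append((x, y))
    pts.append((bx, by))
    return pts
```

In Processing you would then feed these points to curve() (or curveVertex() inside a shape) to get the smoothed wobble rather than a jagged polyline.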
This is closer to the final product, with more varied colors around the box.
(4) Kinect and Recognition
My previous post shows the idea I had for measuring directness as the predictability of a path. It looks like this does not work in practice, either because of resolution errors or our own failure to tweak the system. Instead, Fuhao took a nearest neighbor approach in Matlab using three data collections: indirect dancing, direct dancing, and alternating between both.
Nearest neighbor search is a technique used in information retrieval and artificial intelligence to perform classification of data. In Fuhao's adaptation, he finds whether the current dancer position is closer to training data from the direct or the indirect collection to get an estimate of the directness.
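To make the idea concrete, here is a toy Python version of that estimate. Fuhao's real implementation is in Matlab and works on full skeleton data; the flat coordinate tuples, the `k` parameter, and the function name here are my own simplification:

```python
import math

def directness_estimate(pose, direct_poses, indirect_poses, k=3):
    """k-nearest-neighbor estimate of directness in [0, 1]: among the
    k training poses closest to the current pose, return the fraction
    that came from the 'direct' collection."""
    labeled = [(p, 1.0) for p in direct_poses] + \
              [(p, 0.0) for p in indirect_poses]
    # Sort all training poses by Euclidean distance to the current pose.
    labeled.sort(key=lambda pl: math.dist(pose, pl[0]))
    nearest = labeled[:k]
    return sum(label for _pose, label in nearest) / len(nearest)
```

Because the result is a fraction rather than a hard class label, it already gives the scalar-level directness value the visualization wants, instead of a binary direct/indirect decision.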
To collect this data, we used a Kinect and asked Megan to dance directly, indirectly, and alternating between the two every 10 seconds. Each collection was about 30 seconds. The raw files are about 1 GB each! The rendered files that contain the xyz points are 500 KB in ASCII and 100 KB in binary.
At first, the system we used from the T.A. was limited to a throughput of 10 frames per second. Peizhao helped get the Kinect sending 30 fps. While we used the 30 fps system in the dress rehearsal, I think we must have accidentally used the 10 fps system in the performance.
The performance went really well, but I'm not sure that all of the elements really matched each other. There were many factors affecting the overall experience of the piece that I didn't anticipate, manage, or plan well: lighting, the sensor constraints, and tweaking minor things in general.
Lighting is an issue with interactive performances that use projection behind a dancer because you want to keep the projection screen bright but also want to light the human performer. If you simply use a light from the back, it washes out the projection screen. We used a barn door to light Megan, but we didn't put much thought into it.
The constraints of the sensor limited the dancing space to about a 3-by-9-foot area. This is a much smaller space than Megan is used to dancing in, and it represents less than one third of the overall projection screen width. To put Megan "in" the box, we had to take into account the perspective of the audience. Our system works well for someone looking directly from the front center of the audience. Because Megan was situated about 15 feet from the projector, audience members to the left or right of center saw the box too far to the left or right.
I thought about scaling the box visualization to be relative to the space (see figure above), but ran out of time. The idea would be to increase the amount of expressible space.
Catherine and I made some last-minute tweaks before the performance. Because the system for measuring directness wasn't integrated in time, I added a smoothing function for the keyboard-modified direct/indirect number. The smoothing function takes the wanted directness and moves the actual directness 10 percent closer to it each frame. I also added four vines to the box instead of one and replaced all uniform random functions with Perlin noise. We also added some flair effects. You can see the performance in the video below:
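As an aside, that smoothing function is just exponential smoothing toward a target. A minimal Python sketch (the 10 percent rate matches what we used; the function name is mine):

```python
def smooth_directness(actual, wanted, rate=0.10):
    """Move the displayed directness a fixed fraction of the way toward
    the keyboard-set target each frame, so abrupt keyboard jumps become
    eased transitions on screen."""
    return actual + rate * (wanted - actual)
```

Calling this once per frame makes the value approach the target asymptotically: the remaining gap shrinks by 10 percent every frame, so it closes quickly at first and settles gently.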
First, get the code by cloning the repository at http://github.com/rhema/art (look up Git or GitHub if you need to).
The folder we are interested in here is processing/sandbox/communication.
This folder contains the files:
./data/DirectIndirectRawData.txt Kinect data in a raw (ASCII) format that we don't use anymore.
./data/kinect_indirect_slower.bin Kinect data in binary format. If you want to know about the format, look at our discussion on GitHub.
./udp/processing_consumer/consume_udp/consume_udp.pde Example listener and visualizer in processing.
./udp/python_udp_simulator/load_data.py Opens files in /data.
./udp/python_udp_simulator/simulate.py The file you want to run to generate UDP data.
To run the example, run /udp/python_udp_simulator/simulate.py. This starts the UDP traffic. Then, open consume_udp.pde in Processing (I used 2.5). You should see this:
The processing file has some comments. The way that you might use it in a project is to copy this as a file in your sketch. You could use the variables that get updated below directly:
PVector root_position = new PVector(0,0,0);
Vector<PVector> all_positions = new Vector<PVector>(21);//raw kinect data
Vector<PVector> all_positions_p = new Vector<PVector>(21);//converted to 2d
all_positions_p contains screen coordinates. You need to make sure you don’t write to all_positions or all_positions_p or you may run into synchronization issues.
Or, you could call a different function from within the function dancerPositionAlteredEvent.
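If you want the bare-bones shape of the simulator-to-listener pipe without Processing, here is a minimal stdlib-Python equivalent (the port number and payload are placeholders, and the function names are mine; the real packet format is described in our discussion on GitHub):

```python
import socket

PORT = 9100  # placeholder; the real port is set in simulate.py

def send_frame(payload: bytes, port=PORT):
    """Simulator side: fire one UDP datagram at the local visualizer.
    UDP is connectionless, so this works whether or not anyone listens."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, ("127.0.0.1", port))

def receive_frame(port=PORT, bufsize=4096):
    """Consumer side: bind the port and block until one datagram
    arrives, then return its raw bytes for parsing."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("127.0.0.1", port))
        data, _addr = s.recvfrom(bufsize)
        return data
```

The Processing consumer does the same thing with Java's DatagramSocket, then parses the bytes into the PVector lists shown above.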
Please comment if you have questions and I will revise this post.