Final Project Brainstorming Part 2

After the initial week of brainstorming and rapid prototyping, our ideas still seemed a bit vague.  The team asked Federico to create some visuals for the Processing part of the project.  Instead, Federico showed up with 12 rough sketches drawn in pen on paper.  During class time, which was a workday, we went over each of his sketches, which contained variations of ideas using balls, Wiimotes, and crowd tracking.  At some point in the discussion, we converged on a particular idea from one of the sketches.  Everything clicked at once and seemed to fall into place.

Final Project Brainstorming Part 1

In ideation, the process of forming new ideas, people use divergent thinking before converging on a viable path for implementation.  This post reports the formative and iterative processes we used to generate ideas.  The nature of our collaboration, the multiple goals of teachers and students, and technological complexities constrained and contextualized the ideas we chose to implement.

We had about a week and a half to begin narrowing our search.  In the first session, held after class, the four of us wandered outside to sit on the grass and went around sharing what the concept of play meant to each of us.  We talked about throwing things in the air, such as baseballs or other kinds of balls.  This kind of play differs from games in that there is no goal or winner.

From that description, we thought we would need to track these objects using a camera and blob detection.  I thought putting a Wiimote inside a ball would add some expressive ability.  We immediately set to work on a prototype.  Fu Hao played with an object tracker in OpenCV, and I started using OSCulator to connect a Wiimote to Ableton Live.
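
Fu Hao's actual tracker code isn't part of this post, but as a rough illustration of the blob-detection approach, here is a minimal sketch using OpenCV's Python bindings (the modern cv2 API); all parameter values below are placeholders, not what we used:

```python
import cv2

# Configure the detector to look for large, bright blobs (e.g. a
# light-colored ball against a darker background).
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255          # bright regions
params.filterByArea = True
params.minArea = 200.0          # ignore small specks

detector = cv2.SimpleBlobDetector_create(params)
cap = cv2.VideoCapture(0)       # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints = detector.detect(gray)
    # Draw circles sized to each detected blob for visual feedback.
    out = cv2.drawKeypoints(frame, keypoints, None, (0, 0, 255),
                            cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imshow("blobs", out)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```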

(Click here to see the slides)

This is the set of slides we presented.  The idea presented here is more fire-related than what we ultimately built, and it assumes a level of detection that we did not end up implementing.

Using OSCulator as a MIDI Bridge – OpenCV, Processing, OSCulator, and Ableton Live. Really.

Over the past few nights I have been working on bringing audio content from Processing into Ableton Live.  While I generally consider myself a proficient programmer, I have spent too many hours looking for ways to get Python and other languages to speak MIDI.  My technical approach to connecting and encapsulating the software components of the Interactive Performance and Technology project is to create network messages that work as interfaces.  This creates reusable components for future projects.

OSCulator supports communicating with Wiimotes and MIDI using OSC, a UDP-based message protocol.  At this point in the project, designing complex messages is a low priority, making way for hands-on, empirical practice (hacking).
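
To make the protocol concrete, here is a hand-rolled sketch of what an OSC message looks like on the wire: a null-padded address string, a type-tag string, then big-endian arguments, sent as a single UDP datagram.  The /note address and port 8000 are assumptions; they have to match whatever routing OSCulator is configured with:

```python
import socket
import struct

def osc_string(s: str) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    b = s.encode("ascii")
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: int) -> bytes:
    # Minimal encoder for OSC messages with int32 arguments only.
    type_tags = "," + "i" * len(args)
    payload = osc_string(address) + osc_string(type_tags)
    for value in args:
        payload += struct.pack(">i", value)  # big-endian int32
    return payload

# Send pitch 60, velocity 100, note-on to a local OSCulator instance.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/note", 60, 100, 1), ("127.0.0.1", 8000))
```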

Combining thoughts from a discussion forum and a blog post about sending OSC messages, I made a Python-based example and a Java/Processing example for sending an OSC message that contains pitch, velocity, and on/off.  I mapped Fu Hao's camera sensing, via Processing, to send MIDI messages to Ableton Live.  For the project, this allows us to connect the vibration of the Wiimote to the visuals on the screen to give the performer feedback.  It could also be used for additional sonification, but I'm not sure what it should sound like.
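
The original examples aren't reproduced here, but a minimal Python sketch along the same lines, using the python-osc library rather than hand-built packets, might look like the following.  The /note address and localhost:8000 are again assumptions that must match the OSCulator routing:

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)

def send_note(pitch: int, velocity: int, on: bool) -> None:
    # pitch and velocity are standard MIDI 0-127 values;
    # the trailing int flags note-on (1) versus note-off (0).
    client.send_message("/note", [pitch, velocity, 1 if on else 0])

send_note(60, 100, True)   # note on: middle C
send_note(60, 0, False)    # note off
```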

Instead, here is a video demo of sonification control using this bridge.

What you can't hear me explain in this video is that horizontal position is mapped from low to high pitch, while vertical position is mapped from a slow tempo at the bottom to a fast tempo at the top.  This demo is surprisingly expressive, which the video does not convey very well.  If you want more technical details about how this works, leave a comment.  This code will be available soon at https://github.com/rhema/art .
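
As a sketch of that mapping, assuming the blob position is normalized to 0.0-1.0 (the pitch and tempo ranges below are placeholders, not the values used in the demo):

```python
def position_to_pitch(x: float, low: int = 48, high: int = 84) -> int:
    # x is the blob's horizontal position, 0.0 at the left edge:
    # left maps to the low pitch, right to the high pitch.
    return int(round(low + x * (high - low)))

def position_to_tempo(y: float, slow: float = 60.0, fast: float = 180.0) -> float:
    # y uses image coordinates (0.0 at the top of the frame), so the
    # top maps to the fast tempo and the bottom to the slow one.
    return fast - y * (fast - slow)
```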