Built on the engine that produced Gibson's successful music and light installation, Virtual DJ, the MINDful Play Environment (MPE) utilizes a proprietary motion-tracking system, called the Gesture and Media System (GAMS), created by APR, Inc. of Edmonton, Canada. The system allows us to organize space as a 3D grid and program media elements such as light, music, spoken word, video, and animations in zones and points on the grid. These elements respond to infrared hand-held tracking devices, or "trackers," much as a page is evoked when a mouse-driven cursor touches a hyperlink on a webpage.
The setup of the room using this technology requires four infrared cameras, one mounted in each corner of the room and pointed down at the performance space. These cameras are linked via FireWire to a PC that runs the proprietary software, called Flashtrak, which controls three Martin 250 Entour robotic lights and the Martin fog machine. The PC, in turn, is connected to a Mac sitting beside it that houses the software Ableton Live and Modul8, which drive the MIDI information that makes up the media. This setup means that when a player moves in the space, her or his motion is tracked by the cameras and sent to the PC, which evokes the light programmed for that particular location and also sends the data to the Mac so that it can trigger the music and video.
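The data flow described above can be summarized as: camera tracking data arrives at the PC, which fires the light cue for whatever grid zone the tracker occupies and forwards the same position data to the Mac for music and video. The following minimal Python sketch illustrates that dispatch step; the `Zone` class, function names, and coordinate units are our own illustrative assumptions, not part of GAMS or Flashtrak.

```python
from dataclasses import dataclass

@dataclass
class Zone:
    """An axis-aligned region of the 3D grid with an associated light cue."""
    name: str
    lo: tuple  # (x, y, z) minimum corner, in cm (illustrative units)
    hi: tuple  # (x, y, z) maximum corner, in cm

    def contains(self, p):
        # True when the point lies inside this region on all three axes.
        return all(lo <= c <= hi for lo, c, hi in zip(self.lo, p, self.hi))

def handle_tracker_frame(tracker_id, position, zones, forward_to_mac):
    """One step of the pipeline: identify the zones the tracker occupies
    (PC side, light cues), then forward the data to the Mac side, which
    triggers the music and video."""
    hits = [z.name for z in zones if z.contains(position)]
    forward_to_mac(tracker_id, position)  # stand-in for the PC-to-Mac link
    return hits
```

In the real installation the zone-to-light mapping and the PC-to-Mac forwarding are handled by Flashtrak; the sketch only shows the shape of the per-frame logic.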
Besides programming the space, each tracker is programmed to control various kinds of media and their behaviors. For example, in MPE, Tracker One is programmed to evoke drums, Tracker Two to play bass, and Tracker Three to take melody. Additionally, each tracker controls video output projected onto one of three walls situated in the front and on the two sides of the room, as well as light from one of the robotic lights. The image below shows the configuration of the space as well as the media assignments for all three trackers.
User actions with regard to both the 3D space and the other performers change the way the three music and video channels interact with each other. Precise correlations are established between user movement, sound, video, and light. For example, when Player One (holding Tracker One) moves front and back (the "y-plane") in the space, she changes drum sounds, video clips, and light colors, and when she moves up and down (the "z-plane"), she changes drum volume, video clip opacity, and the light dimmer. At the floor, all three of these values are set to minimum, so that by raising her hand to 100 cm she fades each of them to maximum. Her side-to-side movement (the "x-plane") changes the low-pass filter and delay on the drums. This synaesthetic reinforcement allows her to learn the environment and navigate the space more easily.
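The height mapping described above (minimum at the floor, maximum at 100 cm) amounts to a linear scaling of hand height onto a control range. A minimal Python sketch, assuming a MIDI-style 0 to 127 control range (the actual values used by GAMS and Ableton Live may differ), could look like this:

```python
def scale_height_to_level(z_cm, z_max_cm=100.0, level_max=127):
    """Map hand height to a control value: 0 at the floor, maximum at
    z_max_cm. Applies equally to drum volume, clip opacity, and dimmer,
    since all three are tied to the same z-axis movement."""
    z = max(0.0, min(z_cm, z_max_cm))  # clamp to the usable range
    return round(level_max * z / z_max_cm)
```

The same linear shape could serve the x-axis filter and delay mappings; only the target parameter changes.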
Additionally, if Players Two and Three move toward one another, the proximity of their trackers affects the audio and video. For example, when Player Two approaches within 1.5 meters of Player Three, the bass increases its distortion level, and the video currently controlled by the melody also appears on the bass video screen, blended with the bass video. All three trackers are programmed to respond to one another in similar ways. In this way, MPE is intended to encourage collaborative learning through kinesthetic play.
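The proximity behavior is, in effect, a distance threshold test between two tracker positions. A brief Python sketch, with illustrative names and a simple linear ramp that the actual system may not use, shows the idea:

```python
import math

def proximity_effects(bass_pos, melody_pos, threshold_m=1.5):
    """When the bass and melody trackers come within the threshold,
    raise bass distortion and blend the melody video into the bass
    screen; both effects grow as the players get closer."""
    d = math.dist(bass_pos, melody_pos)  # Euclidean distance in metres
    if d >= threshold_m:
        return {"bass_distortion": 0.0, "video_blend": 0.0}
    closeness = 1.0 - d / threshold_m  # 0 at the threshold, 1 at contact
    return {"bass_distortion": closeness, "video_blend": closeness}
```

Analogous checks between the other tracker pairs would give all three players the reciprocal behavior the text describes.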
The physical embodiment of x, y, z coordinates and spatial mapping, combined with the potential for textual representation of data and with media centered on sound, images, and light, makes MPE, we think, a unique and robust site for learning concepts relating to math and science as well as language.
While motion tracking technology has seen wide use in physical therapy, surveillance, and entertainment, it has yet to be utilized in education. Additionally, the incorporation of movement in a classroom environment has not yet been fully realized outside of disciplines like dance and physical education, where physical activity is seen as a necessary component of the discipline. Furthermore, print is still the medium of choice in most classroom settings. These practices stand in contrast to our everyday experiences, where print, radio, and television have been replaced by video, computers, cell phones, and iPods as preferred communication devices. Joysticks, Game Boy interfaces, and instant messaging offer highly physical and dynamic interactions with information. The growing popularity of platforms like Nintendo's Wii and games like Dance Dance Revolution means that young people have become accustomed not only to media-rich environments made possible by multimedia technologies but also to those that offer kinesthetic and kinetic opportunities.
Thus, we anticipate that the media-rich and kinesthetic environment of the MINDful Play Environment will be highly conducive to the process of learning. Our research provides the opportunity to develop and test a classroom of the future that utilizes technologies and sensory modalities that can potentially change the way we teach and impact students' success with learning. It also has the potential to alter current views of education that compartmentalize, rather than combine, art/performance, science/math, and the humanities for teaching higher-level thinking skills.