In a previous post I discussed the development of a gestural interface that grew out of a mock-up used by Tom Cruise in the 2002 film Minority Report, set in the year 2054. John Underkoffler, a speculative-science consultant on the film's production team, went on to create a real version of the fictional technology, called g-speak. Below is a TED talk from 2010 in which John uses g-speak to discuss the future of the user interface. As he states in the presentation, a lot of what we want to do with computers is “inherently spatial”, so a gestural interface is a natural fit as far as design goes:
Two years down the track from that presentation, the technology has moved relentlessly onwards. Here’s a clip about Leap Motion that makes the concept of a glove-less gestural interface a potentially commercial reality:
Critics will obviously point out that it’s probably less physically taxing to use a desk-bound mouse than to gesticulate in the air, but I see this interface as one component of a multi-gestural/textual/audio-based approach to interfacing with computers. A really useful system would let you speak, type or gesture as appropriate to the creative context. In the 90s I saw a demo of an electrical engineer manipulating switch connections in 3D on a mock-up of the Tepco power grid that serviced the greater Tokyo area. Back then the interface required a wired glove to move the virtual connections around – a simpler version of the Underkoffler interface above. A Leap Motion interface would accomplish the same thing (and more) without the glove.
The Leonar3Do system uses a 3D mouse device called the “bird” to accomplish many of the same tasks as Leap Motion. Its creators describe the system as “the world’s first desktop VR kit.” Well, maybe not actually the first consumer VR kit, but it’s certainly impressive in how quickly it lets you put together complex shapes in 3D – something that would generally take a lot of button-pushing and mouse-sliding in a 3D program such as Blender.
Published April 27, 2009
Tags: Da Vinci Surgical System, G-Speak, Intuitive Surgical, Massachusetts Institute of Technology, Minority Report, MIT Media Lab, Patti Maes, SixthSense, Surgery, TED, Tom Cruise, Virtual Reality
Tom Cruise in Minority Report
When Tom Cruise used a gesture-driven video display to search for future criminals in the film Minority Report he was interacting with a CGI-enhanced setup designed in part by John Underkoffler, a former PhD student of MIT’s Tangible Media Group. In a fantasy-become-reality scenario, Underkoffler went on to form Oblong Industries, a design group that released G-Speak, a working version of the gestural interface from the movie, in November 2008.
A portable technology that takes a related role is SixthSense, developed by Pranav Mistry at MIT Media Lab’s Fluid Interfaces Group – a wearable computer that projects its display onto any surface and uses hand or finger gestures for interaction. For example, users can take a snapshot of a landscape scene simply by framing the scene with their (colour-coded) fingertips, similar to Cruise’s gesture in the image above. This technology extends the “multi-touch” concept into a “multi-gesture” mode, allowing users to engage in more complex forms of visual interaction. Watch a video of the FIG’s Professor Patti Maes describing the system at a TED talk in February 2009 below.
Critics of gestural interfaces usually point out that existing interface devices such as touchpads, mice and keyboards require less physical effort than gesticulating in space with arms extended, as the G-Speak interface seems to require. Personally, I think this type of interface has a place in certain disciplines where creative manipulation of virtual or real 3D objects can enhance learning. Virtually conducting a synthesised orchestra could be one way of exploiting the potential of a gestural interface: conductors could quickly experiment with the placement of virtual orchestral performers – maybe bring the French horns in closer and push the harpist off to the left, and so on.
Adding force feedback to the interface opens up further potential for creative interaction or precise procedural activity. Intuitive Surgical’s da Vinci Surgical System provides surgeons with sufficient tactile feedback to allow precision surgery where the actual procedure is performed entirely by a human-guided robot. Virtual musical instruments such as bowed devices can provide musicians with sufficient synthetic feedback to deliver a virtuoso performance or create entirely new forms of music. Get in touch with your inner interface soon.