Posts Tagged 'Virtual Reality'

Give your keyboard the finger (or the bird)

In a previous post I discussed the development of a gestural interface that grew out of a mock-up used by Tom Cruise in the 2002 film Minority Report, set in the year 2054. The film's science consultant, John Underkoffler, went on to create a real version of that fictional technology, called g-speak. Below is a TED talk from 2010 where John uses g-speak to discuss the future of the user interface. As he states in the presentation, a lot of what we want to do with computers is “inherently spatial”, so a gestural interface is a natural fit as far as design goes:

Two years down the track from that presentation, the technology has moved relentlessly onwards. Here’s a clip about Leap Motion that makes the concept of a glove-less gestural interface a potentially commercial reality:

Critics will obviously point out that it’s probably less physically taxing to use a desk-bound mouse than to gesticulate in the air, but I see this interface as one component of a multi-modal approach to interfacing with computers – gestural, textual and audio-based. A really useful system would let you speak, type or gesture as appropriate to the creative context. Back in the 90s I saw a demo of an electrical engineer manipulating switch connections in 3D on a mock-up of the TEPCO power grid that services the greater Tokyo area. At the time the interface required a wired glove to move the virtual connections around – a simpler version of the Underkoffler interface above. A Leap Motion interface could accomplish the same thing (and more) without the glove.
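To make the idea concrete, here is a minimal sketch of how tracked fingertip positions could drive that kind of virtual switchboard. The tracker read-out is a hypothetical stub (a real system would pull hand data from a sensor SDK such as Leap Motion’s), and the switch names and coordinates are made up purely for illustration:

import math

# Virtual switches in the 3D scene, keyed by name -> (x, y, z) position.
# These names and coordinates are invented for the sketch.
SWITCHES = {
    "feeder_A": (0.10, 0.25, 0.30),
    "feeder_B": (0.40, 0.25, 0.30),
    "tie_line": (0.25, 0.10, 0.45),
}
switch_state = {name: False for name in SWITCHES}

def get_fingertip():
    """Hypothetical tracker read-out: returns one (x, y, z) fingertip position.
    Hard-coded here so the sketch runs without any hardware attached."""
    return (0.11, 0.24, 0.31)

def nearest_switch(point, max_dist=0.05):
    """Return the switch closest to the fingertip, if one is within reach."""
    best, best_d = None, max_dist
    for name, pos in SWITCHES.items():
        d = math.dist(point, pos)
        if d < best_d:
            best, best_d = name, d
    return best

def on_pinch():
    """Toggle whichever switch the fingertip is touching when the user pinches."""
    target = nearest_switch(get_fingertip())
    if target is not None:
        switch_state[target] = not switch_state[target]
        print(f"{target} -> {'closed' if switch_state[target] else 'open'}")

on_pinch()   # e.g. prints "feeder_A -> closed"

Swap the stub for real sensor data and the same nearest-switch logic works whether the input comes from a wired glove, a camera or a Leap Motion controller.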

The Leonar3Do system uses a 3D mouse device called the “bird” to accomplish many of the same tasks as Leap Motion. Its creators describe the system as “the world’s first desktop VR kit.” Well, maybe not actually the first consumer VR kit, but it’s certainly impressive in how quickly it lets you put together complex shapes in 3D – something that would generally take a lot of button-pushing and mouse-sliding in a 3D program such as Blender.
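For comparison, here is roughly what assembling even a simple compound shape looks like when scripted through Blender’s Python API (Blender 2.8+; the primitives, positions and scales below are arbitrary examples). Each line stands in for a string of menu clicks and mouse drags that the “bird” replaces with direct spatial manipulation:

import bpy  # only available inside Blender's bundled Python

# A flattened cube as a base, with a sphere and a cone stacked on top.
bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))
bpy.context.object.scale = (2.0, 2.0, 0.5)          # squash the cube into a slab
bpy.ops.mesh.primitive_uv_sphere_add(location=(0.0, 0.0, 1.5))
bpy.ops.mesh.primitive_cone_add(location=(0.0, 0.0, 3.0))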


G-Speak and SixthSense: coming soon to a wall near you

Tom Cruise in Minority Report

When Tom Cruise used a gesture-driven video display to search for future criminals in the film Minority Report he was interacting with a CGI-enhanced setup designed in part by John Underkoffler, a former PhD student of MIT’s Tangible Media Group. In a fantasy-become-reality scenario, Underkoffler went on to form Oblong Industries, a design group that released the first version of G-Speak, a working implementation of the movie’s gestural interface, in November 2008.

A portable technology that takes a related role is SixthSense, developed by Pranav Mistry at MIT Media Lab’s Fluid Interfaces Group – a wearable computer that projects its display onto any surface and uses hand or finger gestures for interaction. For example, users can take a snapshot of a landscape scene simply by framing the scene with their (colour-coded) fingertips, similar to Tom’s gesture in the image above. This technology extends the “multi-touch” concept into a “multi-gesture” mode, allowing users to engage in more complex forms of visual interaction. Watch a video of the group’s Professor Pattie Maes describing the system at a TED talk in February 2009 below.
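The colour-coded fingertips are what make that framing gesture cheap to detect: the camera only has to find a few brightly coloured blobs. Below is a minimal sketch of that idea using OpenCV – it is an illustration, not the actual SixthSense code, and the HSV threshold values are arbitrary placeholders for one marker colour:

import cv2
import numpy as np

LOWER = np.array([0, 120, 120])    # assumed HSV lower bound for the marker colour
UPPER = np.array([10, 255, 255])   # assumed HSV upper bound for the marker colour

def find_marker(frame_bgr):
    """Return (x, y) pixel coordinates of the marker centroid, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # [-2] keeps this working across OpenCV 3.x / 4.x return signatures
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

cap = cv2.VideoCapture(0)          # default webcam
ok, frame = cap.read()
if ok:
    print("fingertip marker at:", find_marker(frame))
cap.release()

Track four such markers (one colour per fingertip) and the relative geometry of the blobs – for instance a rectangular “framing” pose – can be mapped to commands such as taking the snapshot described above.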

Critics of gestural interfaces usually point out that existing interface devices such as touchpads, mice and keyboards require less physical effort than gesticulating in space with arms extended, as the G-Speak interface seems to require. Personally, I think this type of interface has a place in disciplines where creative manipulation of virtual or real 3D objects can enhance learning. Virtually conducting a synthesised orchestra could be one way of exploiting the potential of a gestural interface: conductors could quickly experiment with the placement of virtual orchestral performers – bring the French horns in closer, push the harpist off to the left, and so on.

Adding force feedback to the interface opens up further potential for creative interaction or precise procedural activity.  Intuitive Surgical’s da Vinci Surgical System provides surgeons with sufficient tactile feedback to allow precision surgery where the actual procedure is done entirely by a human-guided robot. Virtual musical instruments such as bowed devices can provide musicians with sufficient synthetic feedback to create a virtuoso performance or create entirely new forms of music. Get in touch with your inner interface soon.


Some Small Steps for Mind Reading by Machines

I’m not sure exactly why, but 2009 is shaping up to be a breakthrough year for mind reading by machines. A recent CBS News 60 Minutes item, broadcast on January 4th, 2009, looks at current research on using brain-scanning (neuroscanning) technologies such as magnetoencephalography (MEG), functional MRI (fMRI) and powerful computational approaches to determine what a subject is thinking about, whether they have previously been in a particular location, how they really feel about a product, or what their true intentions are.

CBS interviewer Shari Finkelstein talked to several researchers in this field about how they are beginning to make sense of brain-scan images by relating them to stimulus images that subjects were asked to think about while being scanned. Carnegie Mellon researcher and psychologist Marcel Just demonstrates the use of fMRI scans and a specific algorithm he developed with co-researcher Tom Mitchell, head of Carnegie Mellon’s School of Computer Science’s Machine Learning Department, to correctly identify ten items that a subject was asked to think about in random order. Here’s a video of a “thought reading demonstration” done by Just and Mitchell, and an extended abstract by Tom Mitchell titled “Computational Models of Neural Representations in the Human Brain”, published in Springer’s Lecture Notes in Artificial Intelligence, 2008.

Ms Finkelstein also interviewed John-Dylan Haynes, a researcher at Humboldt University’s Bernstein Centre for Computational Neuroscience in Berlin, about the use of fMRI to scan subjects’ brains as they moved through a virtual reality (VR) setting. By monitoring a subject’s scans while specific rooms of the VR environment are replayed, researchers can reliably determine whether the subject had visited that room – i.e. they can detect visual recognition of a previously viewed scene.

Here’s a video lecture titled “Decoding Mental States from Human Brain Activity” given by Professor Haynes at a recent conference in Jerusalem (5th European Conference on Complex Systems – ECCS’08). He uses Blood Oxygenation Level Dependent (BOLD) fMRI imaging, which can achieve a claimed 85% accuracy in determining what item a person is thinking about. Interestingly, he mentions that there are only two “specialized” cortical modules in the brain for thinking about visual items – one for faces and one for houses. All other thoughts are held as “distributed patterns of activity” that can be decoded and read out, given the correct classification and decoding techniques.
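As a rough illustration of that “classify the distributed pattern” idea – and emphatically not Haynes’s or Mitchell’s actual pipeline – here is a sketch that trains an off-the-shelf classifier on synthetic voxel patterns for ten items and reports cross-validated decoding accuracy. All of the sizes and the data itself are made up:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_items = 200, 500, 10   # assumed sizes for the sketch

# Each item gets its own noisy distributed activation pattern across voxels.
item_patterns = rng.normal(size=(n_items, n_voxels))
labels = rng.integers(0, n_items, size=n_trials)
voxels = item_patterns[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Train a linear classifier to decode which item each pattern belongs to.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, voxels, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.0%} (chance = {1/n_items:.0%})")

With real recordings the features would be voxel activations from the scanner rather than random numbers, and cross-validated accuracies of this kind are where figures like the 85% claim above come from.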

Psychiatrist Paul Wolpe, Director of Emory University’s Center for Ethics in Atlanta, Georgia, discusses ethical and legal issues arising from mind-reading research with 60 Minutes in this video extract. The research has spawned a whole new field of legal study, known as “neurolaw”, which looks at subjects such as the admissibility of fMRI scans as lie-detection evidence in court. Professor Wolpe is concerned that, for the first time in history, science is able to access data directly from the brain, as opposed to obtaining data from the peripheral nervous system.

A new approach to selling, known as “neuromarketing”, makes use of neuroscans to determine subjects’ responses to visual or aural stimuli and the effect those responses have on their desire to purchase goods. Professor Gemma Calvert, Managing Director of Neurosense Limited, a market research consultancy, specialises in the use of MEG and fMRI neuroscanning techniques for marketing purposes, such as predicting consumer behaviour.

Dutch marketing researcher Tjaco Walvis concludes that the brain’s recognition and retrieval of information about brands occurs in the same way that a Google search engine retrieves links related to a search term.  Read a MarketWire article on his research here.

To me, this marketing application of what is essentially exciting science is getting a bit too close to the “dark side” for my liking. In a previous article I mentioned the psychological and political aspects of applied neuroscience research, where brain monitoring is becoming an increasingly real possibility. Paul Wolpe alludes to this when discussing recent research into covert scanning devices that use light beams to scan an unsuspecting subject’s frontal lobe (see my previous post on Hitachi’s use of IR light to perform brain scans). I suppose we should now add consumer monitoring to the list.

[images sourced from: here (brain), here (Shari), here (Haynes), here (Wolpe) and here (Calvert)]

