Archive for January, 2009

Offline GMail: Solution to a flaky ISP connection?

Having a poor ISP connection, or worse, no connection to the internet, need no longer be a problem if we want to believe the hype surrounding Google’s latest GMail innovation: Offline GMail. According to this Wired Epicenter Blog post, Google’s latest offering “will enable Gmail users access from their browsers even when they aren’t online”. This sounds like black magic to me, but I think it’s rather a case of exaggeration, or maybe a loose extrapolation of the facts.

In reality, Offline GMail relies on a caching system managed by an open-source browser add-on called Gears, which stores data locally on your desktop. Gears runs under Windows XP and Vista, Mac OS X and Linux, and is designed to work with the IE, Firefox and Safari browsers.
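For the technically curious, here’s a minimal sketch (in Python, purely for illustration) of the general offline-first pattern described above: read from a local store when the connection is down, and queue outgoing items until it returns. This is not the Gears API – the file names and helper functions are invented for the example.

```python
# Conceptual sketch of the offline caching idea behind Offline GMail.
# NOT the Gears API -- just the general pattern: serve from a local store
# when the network is down, and queue outgoing actions for later.

import json
import os

CACHE_FILE = "mail_cache.json"   # hypothetical local store
OUTBOX_FILE = "outbox.json"      # hypothetical queue of unsent messages


def load_store(path, default):
    """Read a local JSON store, returning a default if the file is absent."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return default


def save_store(path, data):
    with open(path, "w") as f:
        json.dump(data, f)


def read_message(msg_id, online, fetch_from_server):
    """Fetch from the server when online (refreshing the cache), else read the cache."""
    cache = load_store(CACHE_FILE, {})
    if online:
        msg = fetch_from_server(msg_id)   # assumed network call
        cache[msg_id] = msg
        save_store(CACHE_FILE, cache)
        return msg
    return cache.get(msg_id)              # offline: fall back to the local copy


def send_message(msg, online, send_to_server):
    """Send immediately when online; otherwise queue the message locally."""
    if online:
        send_to_server(msg)
    else:
        outbox = load_store(OUTBOX_FILE, [])
        outbox.append(msg)
        save_store(OUTBOX_FILE, outbox)
```

Gears itself exposes similar ideas to the browser through JavaScript modules (a local resource cache and a local database) rather than files on disk, but the basic read-through-cache-plus-outbox idea is the same.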

Since Offline GMail is still in beta, Google is warning potential users that some glitches still need to be ironed out, so if you download this add-on – caveat emptor (or, for non-Latin-speaking aviators: watch your six!).

About T3: Educational Technology and beyond …

Click on the “About T3” tab above for an overview of what this site is all about.

Usability in Human Computer Interaction (HCI)

As modes of learning move towards increasingly online interaction, the usability of the interface has become critical to the success (or otherwise) of how we acquire knowledge in the twenty-first century.

In this “Medieval Helpdesk” video clip, broadcast on Norway’s NRK in 2001, a monk gets help on “How to Use a Book”, perhaps the first complex user interface that learners had to grapple with.

David Kieras


Professor David Kieras is a researcher in the University of Michigan’s Electrical Engineering and Computer Science Department. His research field is applied and theoretical cognitive psychology, with a specific focus on usability in human-computer interaction (HCI). In this video lecture, given at CHI’08 in Firenze, Italy (April 7-10 2008), he discusses current cognitive approaches to the evaluation of interfaces, icons, affordances, display design and HCI modeling in general.

In the section of the lecture that discusses input basics and aimed movements, Professor Kieras makes the interesting observation that zeroing in on small targets (e.g. on a monitor screen with a mouse) requires micro-movements that conform to Fitts’ Law, a model of human movement developed by Paul Fitts in 1954. The law predicts that smaller targets require more micro-movements – and hence more time – to acquire than larger ones, which is what you would guess intuitively to be the case. It also helps to explain why different types of input devices such as mice, keyboards, joysticks and trackballs each have advantages depending on the task being performed. Keyboards, for example, are still the fastest devices for inputting linear data, and the QWERTY layout takes advantage of the fact that alternating between hands is faster than the single-hand letter sequences an alphabetic layout would force much of the time – due to the way the information is processed at the cognitive level.
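To make the prediction concrete, here’s a quick Python sketch of the Shannon formulation of Fitts’ Law, MT = a + b·log2(D/W + 1). The constants a and b are device-dependent; the values below are illustrative placeholders, not figures from Kieras’ lecture or any fitted dataset.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) using the Shannon form of Fitts' Law:
    MT = a + b * log2(distance / width + 1).
    a and b are device-dependent constants; the defaults here are
    illustrative placeholders, not empirically fitted values."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A small on-screen target takes longer to acquire than a large one at the same distance:
print(fitts_movement_time(distance=400, width=10))   # small icon  -> ~0.90 s
print(fitts_movement_time(distance=400, width=100))  # big button  -> ~0.45 s
```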

Some Small Steps for Mind Reading by Machines

I’m not sure exactly why, but 2009 is shaping up to be a breakthrough year for mind reading by machines. A recent CBS News 60 Minutes item, broadcast on January 4th, 2009, looks at current research on using brain scanning (neuroscanning) technologies such as magnetoencephalography (MEG) and functional MRI (fMRI), together with powerful computational approaches, to determine what a subject is thinking about, whether they have previously been in a particular location, how they really feel about a product, or what their true intentions are.

CBS interviewer Shari Finkelstein talked to several researchers in this field about how they are beginning to make sense of brain-scan images by relating them to the stimulus images that subjects were asked to think about while being scanned. Carnegie Mellon researcher and psychologist Marcel Just demonstrates the use of fMRI scans and a specific algorithm he developed with co-researcher Tom Mitchell, head of the Machine Learning Department in Carnegie Mellon’s School of Computer Science, to correctly identify ten items that a subject was asked to think about in random order. Here’s a video of a “thought reading demonstration” done by Just and Mitchell, and an extended abstract by Tom Mitchell titled “Computational Models of Neural Representations in the Human Brain”, published in Springer’s Lecture Notes in Artificial Intelligence, 2008.

Ms Finkelstein also interviewed John-Dylan Haynes, a researcher at Humboldt University’s Bernstein Centre for Computational Neuroscience in Berlin, about the use of fMRI to scan subjects’ brains as they moved through a virtual reality (VR) setting. By monitoring the subject’s scans while specific rooms of the VR are replayed, researchers can reliably determine if the subject had visited that room – i.e. they can detect visual recognition of a previously-viewed scene.

Here’s a video lecture titled “Decoding Mental States from Human Brain Activity” given by Professor Haynes at a recent conference in Jerusalem (5th European Conference on Complex Systems – ECCS’08). He uses Blood Oxygenation Level Dependent (BOLD) fMRI imaging, which can achieve a claimed 85% accuracy in determining what item a person is thinking about. Interestingly, he mentions that there are only two “specialized” cortical modules in the brain for thinking about visual items – one for faces and one for houses. All other thoughts are held as “distributed patterns of activity” that can be decoded and read out, given the correct classification and decoding techniques.
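For a flavour of what “decoding a distributed pattern” means in practice, here’s a toy Python sketch: treat each scan as a vector of voxel activations and train an off-the-shelf classifier to predict which stimulus category produced it. This is emphatically not the pipeline Haynes or Just and Mitchell use – the data is synthetic and the sizes, labels and signal strength are made up for illustration.

```python
# Toy illustration of "decoding" a distributed activity pattern with a
# standard classifier. Synthetic data only; not any researcher's actual method.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_scans, n_voxels = 200, 500          # made-up sizes
labels = rng.integers(0, 2, n_scans)  # 0 = "face", 1 = "house" (illustrative)

# Synthetic scans: a weak class-dependent signal spread across many voxels,
# buried in noise -- a crude stand-in for a distributed activation pattern.
pattern = rng.normal(size=n_voxels)
signal = np.outer(labels - 0.5, pattern) * 0.4
scans = signal + rng.normal(size=(n_scans, n_voxels))

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, scans, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```

The point of the exercise is simply that no single voxel gives the answer; the classifier recovers the category from the pattern as a whole, which is what “distributed” means here.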

Psychiatrist Paul Wolpe, Director of Emory University’s Center for Ethics in Atlanta, Georgia, discusses ethical and legal issues arising from mind-reading research with 60 Minutes in this video extract. The research has spawned a whole new field of legal study, known as “neurolaw”, which looks at subjects such as the admissibility of fMRI scans as lie-detection evidence in court. Professor Wolpe is concerned that, for the first time in history, science is able to access data directly from the brain, as opposed to obtaining data from the peripheral nervous system.

A new approach to selling, known as “neuromarketing”, makes use of neuroscans to determine subjects’ responses to visual or aural stimuli and the effect these have on their desire to purchase goods. Professor Gemma Calvert, Managing Director of Neurosense Limited, a market research consultancy, specialises in the use of MEG and fMRI neuroscanning techniques for marketing purposes, such as predicting consumer behaviour.

Dutch marketing researcher Tjaco Walvis concludes that the brain recognises and retrieves information about brands in much the same way that a search engine like Google retrieves links related to a search term. Read a MarketWire article on his research here.

To me, this marketing application of what is essentially exciting science is getting a bit too close to the “dark side”. In a previous article I mentioned the psychological and political aspects of applied neuroscience research, where brain monitoring is becoming an increasingly real possibility. Paul Wolpe alludes to this when discussing recent research into covert scanning devices that use light beams to scan an unsuspecting subject’s frontal lobe (see my previous post on Hitachi’s use of IR light to perform brain scans). I suppose we should now add consumer monitoring to the list.

[images sourced from: here (brain), here (Shari), here (Haynes), here (Wolpe) and here (Calvert)]


Mind Flex: a brain-scanning toy from Mattel

Mattel Inc. has just released a new toy at the Consumer Electronics Show (CES) in Las Vegas, which began this week. The Mind Flex includes a headset that uses basic neurofeedback to control real objects, such as a foam ball that is elevated by miniature fans and can be guided through a customisable obstacle course. The device uses simple detection of alpha and theta brain rhythms to control the speed of the fans.
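Mattel hasn’t published how the headset turns brain rhythms into fan speed, but the general signal-processing idea is straightforward: estimate the power in the alpha and theta bands of a short EEG window and scale it into a control value. Here’s a rough Python sketch of that idea; the band limits, sample rate and scaling are my own illustrative assumptions, not Mind Flex internals.

```python
# Rough sketch of the kind of processing a neurofeedback toy might do:
# estimate alpha (8-12 Hz) and theta (4-8 Hz) power from a short EEG window
# and map it onto a fan-speed value. Band limits and scaling are illustrative only.

import numpy as np

def band_power(signal, sample_rate, low_hz, high_hz):
    """Mean power of the signal within a frequency band, via an FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return spectrum[mask].mean()

def fan_speed(eeg_window, sample_rate=128):
    """Map combined alpha + theta power onto a 0-255 fan-speed value."""
    alpha = band_power(eeg_window, sample_rate, 8, 12)
    theta = band_power(eeg_window, sample_rate, 4, 8)
    total = band_power(eeg_window, sample_rate, 1, 40) + 1e-9
    level = (alpha + theta) / total          # relative band power, 0..1
    return int(np.clip(level * 255, 0, 255))

# Example: a synthetic one-second window dominated by a 10 Hz (alpha) rhythm.
t = np.arange(0, 1, 1 / 128)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(len(t))
print(fan_speed(eeg))
```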

The idea of using neurofeedback as a mind-controlled input device is not particularly novel. I’ve commented previously on Emotiv’s commercialisation of just such an interface, although the Emotiv EPOC makes somewhat wider claims than the Mind Flex when it comes to describing exactly what the software is capable of controlling. Their device also detects neuromuscular activity such as winks and smiles, which can be used for directional control of mobile devices – Emotiv shows the device controlling an electric wheelchair. What makes the Mind Flex notable, however, is that Mattel has put together a toy that targets children aged eight and over and will be offered at a price comparable with a video game (around US$80). Compare this with the Emotiv EPOC, which will be priced at US$299 once it’s out of beta and on the shelves.

Mattel has bought into this type of interface technology before. In 1989 it developed the Power Glove, a glove-based input device that sought to replace the mouse or keyboard when the user interacted with an on-screen game developed by its partner Nintendo for use with the Nintendo Entertainment System. Mattel was able to keep the price of the technology low by employing a printed form of strain gauge for each finger joint of the glove.

While complete control of an interface by mind alone is probably still at the feasible-but-not-yet-doable stage, devices such as the Mind Flex constitute the first small steps in that direction. Practically speaking, I think the most likely “holistic” systems will combine inputs such as voice, gesture and brain waves, and even the occasional keypress, to achieve effective communication in a computer-mediated setting. I’m still waiting for the inevitable emergence of computer-aided telepathic enhancement (CATE), where the computer takes a back seat, quietly enhancing and transmitting or receiving our thoughts in the background while we just think at each other. Not too different from what happens already when humans “empathise”, perhaps? The system would just take the guesswork out of interpreting someone else’s intentions.

(image sourced from here).

Here’s a video of the Mind Flex in action, uploaded in 2009:

USB 3.0: Swifter, Higher, Stronger

OK, maybe not higher or even stronger, but certainly swifter! The specs for a new USB standard (version 3.0) were released late last year, and I’m including this entry in the blog because it promises to have a major impact on device connectivity in the IT/ed-tech world. USB 2.0 has a transfer speed that peaks at around 480 megabits per second, while version 3.0 offers a tenfold increase – to 4.8 gigabits per second. This means that peripherals such as video cameras, MIDI instruments and colour laser printers will communicate with a CPU at considerably faster speeds than currently possible over USB 2.0 connections. A developer group called Symwave has already jumped on the bandwagon and will demonstrate a USB 3.0 storage drive at this month’s Consumer Electronics Show (CES) in Las Vegas, January 8-11.

The IEEE 1394b interface standard, known as FireWire 800 in Apple’s version, can transfer data at around 800 Mbit/s in full-duplex mode – faster than USB 2.0, but only a sixth of the speed possible with USB 3.0.
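A quick back-of-the-envelope comparison puts those peak rates in perspective. Here’s the arithmetic for a hypothetical 4 GB file (roughly an hour of standard-definition video); real-world throughput will of course fall well short of the peak signalling rate, so treat these as best-case figures.

```python
# Best-case transfer times for a 4 GB file at the quoted peak rates.
# Real-world throughput is lower than the raw signalling rate.

FILE_SIZE_BITS = 4 * 8 * 10**9  # 4 GB expressed in bits (decimal gigabytes)

rates_bits_per_sec = {
    "USB 2.0 (480 Mbit/s)": 480 * 10**6,
    "FireWire 800 (800 Mbit/s)": 800 * 10**6,
    "USB 3.0 (4.8 Gbit/s)": 4.8 * 10**9,
}

for name, rate in rates_bits_per_sec.items():
    print(f"{name}: {FILE_SIZE_BITS / rate:.0f} s")
# USB 2.0: ~67 s, FireWire 800: ~40 s, USB 3.0: ~7 s
```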

The move in device connectivity is certainly toward USB these days, judging by the range of USB-enabled devices now available to enhance basic desktop and laptop machines, so this long-awaited upgrade should stimulate the development of further ways to connect peripheral gear that might actually turn out to be useful in an educational sense. Imagine USB 3.0-enabled musical devices synchronised with video clips driving high-powered projectors in an outdoor venue – this could change the performing arts in ways we can only begin to imagine. Or not. It really depends on how far you want to take it, in both a technological and an aesthetic sense.

(image sourced from: http://www.gadgetvenue.com/wp-content/uploads/2008/01/usb-3_0.jpg)

UPDATE (16/01/09): A recent ZDNet update on this item can be found here, with a mention that Apple plans to ditch FireWire in favour of USB 3.0. See why.

