Posts Tagged 'Artificial intelligence'

Cognitive Computing is Closing In

IBM’s Watson computer (photo licensed through Wikimedia Commons)

A few years ago I did some research on intelligent tutoring systems, specialised software programs that help a learner grasp the basic concepts of a discipline, learn a specific procedure, or simply organise documents more intuitively and accessibly. Back then, the Holy Grail of artificial intelligence (AI) was to somehow achieve machine consciousness: to build a machine that demonstrated awareness of its own existence and its surroundings, interacted coherently with humans, and adapted intelligently to changing circumstances. Above all, an AI machine demonstrating “consciousness” would have the ability to learn over time.

Ray Kurzweil used the term “singularity” to mark the point in some future development when a machine successfully demonstrates all of the above characteristics, perhaps ushering in a whole new era of human/machine existence. That date still seems a long way off, and the early pundits’ predictions about how quickly AI systems would develop (think of HAL 9000 in 2001: A Space Odyssey – the film was released back in 1968) seem almost laughable now.

Other researchers in the field have concentrated on building systems that demonstrate particular aspects of intelligent behaviour, such as skill in playing chess. IBM has been at the forefront of this field for many years, achieving success at Computer Chess World Championship level with Deep Thought (1988–89) and at Grandmaster level with Deep Blue’s defeat of Garry Kasparov in 1997. Both machines relied on brute-force computing to search up to 10 moves ahead in a chess match. That is, they used rules-based reasoning and massive computing power to achieve success while operating within a closed logic space.
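The brute-force, rules-based search described above is essentially minimax with alpha-beta pruning. The sketch below is a minimal illustration of that technique, not Deep Blue’s actual code: the toy “game” (integer states, moves that add 1–3, the state’s value as its evaluation) is invented purely for demonstration – real chess engines use vastly richer move generators and evaluation functions.

```python
def minimax(state, depth, alpha, beta, maximizing, moves, evaluate):
    """Search `depth` plies ahead and return the best achievable score."""
    if depth == 0:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for nxt in moves(state):
            best = max(best, minimax(nxt, depth - 1, alpha, beta,
                                     False, moves, evaluate))
            alpha = max(alpha, best)
            if beta <= alpha:   # prune: the opponent will avoid this branch
                break
        return best
    else:
        best = float("inf")
        for nxt in moves(state):
            best = min(best, minimax(nxt, depth - 1, alpha, beta,
                                     True, moves, evaluate))
            beta = min(beta, best)
            if beta <= alpha:   # prune: the maximizer will avoid this branch
                break
        return best

# Toy usage: states are integers, a move adds 1, 2 or 3 to the state.
# The maximizer adds 3 at each of its turns, the minimizer adds 1.
score = minimax(0, 4, float("-inf"), float("inf"), True,
                moves=lambda s: [s + 1, s + 2, s + 3],
                evaluate=lambda s: s)
```

The pruning steps are what made 10-ply search feasible on late-1980s hardware: whole subtrees are skipped once it is clear the opponent would never allow them.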

Dealing with the subtleties of natural language, however, has proven to be a much more daunting task. IBM’s “Watson” is a computer system that Wikipedia describes as “specifically developed to answer questions on the quiz show Jeopardy!” The show’s format requires contestants to discover the correct question to a supplied answer, a task that relies heavily on deductive reasoning and access to a large store of facts about popular culture and discipline-specific knowledge. Watson’s software is an amalgam of components written in C++, Java and Prolog, built around IBM’s proprietary DeepQA architecture. The diagram below demonstrates some of the complexity involved in DeepQA’s analysis of language:

High-level architecture of DeepQA used in Watson

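At its core, the pipeline in the diagram generates many candidate answers, scores each one against multiple independent evidence features, and merges the scores into a ranked, confidence-weighted list. The sketch below is a drastically simplified, hypothetical illustration of that idea only – Watson’s real pipeline uses hundreds of NLP scorers, and the toy scorers and weights here are invented for the example:

```python
def rank_candidates(clue, candidates, scorers, weights):
    """Rank candidate answers by a weighted sum of evidence scores."""
    ranked = []
    for cand in candidates:
        evidence = [scorer(clue, cand) for scorer in scorers]
        confidence = sum(w * e for w, e in zip(weights, evidence))
        ranked.append((cand, confidence))
    # Highest combined confidence first.
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Invented toy scorers: shared-keyword count, and a crude proper-noun check.
def keyword_overlap(clue, cand):
    return len(set(clue.lower().split()) & set(cand.lower().split()))

def is_capitalized(clue, cand):
    return 1.0 if cand[:1].isupper() else 0.0

ranked = rank_candidates(
    "This IBM computer won Jeopardy! in 2011",
    ["IBM Watson", "Deep Blue", "a calculator"],
    scorers=[keyword_overlap, is_capitalized],
    weights=[1.0, 0.5],
)
```

The design point this illustrates is that no single scorer needs to be reliable on its own; confidence emerges from combining many weak, independent pieces of evidence.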


Learning Analytics: The positive side

I recently attended a conference where the buzz topic was learning analytics (LA) and its use in online learning environments. One of the keynote speakers, Simon Buckingham Shum, described a possible future where an LA program is used to analyse a student’s input to an online forum using advanced AI techniques. I’m sure I wasn’t the only member of the audience who cringed at the thought of a machine being used in this way. The idea that your personal thoughts, attitudes or opinions could be dissected by such a seemingly inhumane approach goes against the grain for a lot of people. But what if that same analytic engine were used in a formative learning setting, where the whole idea was to support a student’s learning and provide learning materials appropriate to the level of mastery attained so far?

Salman Khan: Image by O'Reilly Conferences via Flickr

A recent article in Inside Higher Ed looks at the work done by Salman Khan‘s team of analysts at the Khan Academy, an online learning site that covers a huge range of topics, from basic maths to advanced calculus, economics, biology, physics and many more. I’ve been following the development of this site for several years now and have watched a couple of the maths videos to understand the teaching methodology used – and, as a side bonus, to refresh my understanding of basic maths principles. Salman’s teaching style is renowned for its relaxed yet clear delivery. He manages to make even advanced, complex topics seem obvious and easy to understand. But as Salman himself says: “I think too much conversation about Khan Academy is about cute little videos …”. For him, the real action is in using data collected from learners who access the Khan Academy to build LA applications that can ultimately predict their future performance and adjust the learning materials accordingly.

Khan engineers are attempting to promote genuine mastery and to distinguish it from “pattern-matching” exercises, which form the basis of a large proportion of summative assessments used at all levels of learning and teaching. To accomplish this, they track and analyse student interactions while learners are logged in to one of the Academy’s online courses. Using algorithms originally developed to predict stock market movements, they can predict a student’s likely future performance in solving different problem types. If the prediction is that a student is highly likely to correctly solve problems of a similar type, the inference is that mastery has been achieved.
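Khan Academy’s actual model isn’t described in the article, so the following is only a hypothetical sketch of the underlying idea: estimate, from a student’s recent answer history on a problem type, the probability they will solve the next problem of that type correctly, weighting recent attempts more heavily, and infer mastery once that probability clears a threshold. All names and parameters here are invented for illustration:

```python
def predict_next_correct(history, recency_weight=0.85):
    """Exponentially weighted accuracy over past answers.

    `history` is a list of booleans, oldest first; recent answers
    count more than old ones, so early stumbles fade over time.
    """
    score, norm, w = 0.0, 0.0, 1.0
    for answered_correctly in reversed(history):  # most recent first
        score += w * (1.0 if answered_correctly else 0.0)
        norm += w
        w *= recency_weight
    return score / norm if norm else 0.0

def has_mastery(history, threshold=0.9):
    """Infer mastery when the predicted success probability is high."""
    return predict_next_correct(history) >= threshold

# A student who struggled early but now answers correctly:
student = [False, False, True, True, True, True, True]
p = predict_next_correct(student)
```

The key contrast with pattern-matching assessment is that the estimate is continuous and forward-looking: a recent run of correct answers raises the predicted probability even though the overall accuracy includes early failures.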

In my opinion, the Khan Academy’s approach is a great example of LA used for good – as opposed to evil. By that I mean that using LA as a means of summative assessment of a student’s understanding is currently not achievable in any reasonable sense and may amount to an unfair summary of their true comprehension of a topic.


iCub the Robot Toddler

iCub robot

The iCub is a robot project that uses the mind and physical dimensions of a human two-year-old as its basic design model. Its developers, a European Union-funded consortium, reason that by using a two-year-old’s mind as a starting point, the iCub will gradually develop and learn much as a young human would, thanks to its ability to consolidate new information in meaningful ways. The project’s coordinator, Prof. Giulio Sandini of the University of Genova, believes that human intelligence derives from physical interaction with the world as well as from mental processes, so the iCub’s dimensions are an important component of its future development.

The video below gives a nice overview of the project:


Some Small Steps for Mind Reading by Machines

I’m not sure exactly why, but 2009 is shaping up to be a breakthrough year for mind reading by machines. A recent CBS News 60 Minutes item, broadcast on January 4th, 2009, looks at current research that uses brain scanning (neuroscanning) technologies such as magnetoencephalography (MEG) and functional MRI (fMRI), together with powerful computational approaches, to determine what a subject is thinking about, whether they have previously been in a particular location, how they really feel about a product, or what their true intentions are.

CBS interviewer Shari Finkelstein talked to several researchers in this field about how they are beginning to make sense of brain-scan images by relating them to stimulus images that subjects were asked to think about while being scanned. Carnegie Mellon researcher and psychologist Marcel Just demonstrates the use of fMRI scans and a specific algorithm he developed with co-researcher Tom Mitchell, head of the Machine Learning Department in Carnegie Mellon’s School of Computer Science, to correctly identify ten items that a subject was asked to think about in random order. Here’s a video of a “thought reading demonstration” done by Just and Mitchell, and an extended abstract by Tom Mitchell titled “Computational Models of Neural Representations in the Human Brain”, published in Springer’s Lecture Notes in Artificial Intelligence, 2008.

Ms Finkelstein also interviewed John-Dylan Haynes, a researcher at Humboldt University’s Bernstein Centre for Computational Neuroscience in Berlin, about the use of fMRI to scan subjects’ brains as they moved through a virtual reality (VR) setting. By monitoring a subject’s scans while specific rooms of the VR environment are replayed, researchers can reliably determine whether the subject had visited that room – i.e. they can detect visual recognition of a previously viewed scene.

Here’s a video lecture titled “Decoding Mental States from Human Brain Activity” given by Professor Haynes at a recent conference in Jerusalem (the 5th European Conference on Complex Systems – ECCS’08). He uses Blood Oxygenation Level Dependent (BOLD) fMRI imaging, which can achieve a claimed 85% accuracy in determining what item a person is thinking about. Interestingly, he mentions that there are only two “specialized” cortical modules in the brain for thinking about visual items – one for faces and one for houses. All other thoughts are held as “distributed patterns of activity” that can be decoded and read out, given the correct classification and decoding techniques.
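Decoding those “distributed patterns of activity” amounts to training a pattern classifier on voxel activations. The toy sketch below illustrates the idea with simulated two-voxel “scans” and a nearest-centroid classifier – a deliberately minimal stand-in for the much stronger classifiers, tens of thousands of voxels, and heavy preprocessing used in real BOLD-fMRI decoding. The data and the two-category setup are invented for the example:

```python
import random

def centroid(vectors):
    """Element-wise mean of a list of equal-length voxel vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labelled_scans):
    """labelled_scans: {category: [voxel_vector, ...]} -> per-category centroids."""
    return {label: centroid(scans) for label, scans in labelled_scans.items()}

def decode(model, scan):
    """Return the category whose centroid is closest to the new scan."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], scan))

# Simulated data: thinking of a face activates voxel 0, a house voxel 1,
# each with a little Gaussian noise added.
rng = random.Random(0)
def scan_for(kind):
    base = [1.0, 0.0] if kind == "face" else [0.0, 1.0]
    return [x + rng.gauss(0, 0.1) for x in base]

model = train({"face": [scan_for("face") for _ in range(20)],
               "house": [scan_for("house") for _ in range(20)]})
prediction = decode(model, scan_for("face"))
```

Even this crude classifier separates the two simulated categories cleanly, which hints at why distributed patterns can be read out once enough labelled training scans are available.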

Psychiatrist Paul Wolpe, Director of Emory University’s Center for Ethics in Atlanta, Georgia, discusses the ethical and legal issues arising from mind-reading research with 60 Minutes in this video extract. The research has spawned a whole new field of legal study, known as “neurolaw”, which looks at subjects such as the admissibility of fMRI scans as lie-detection evidence in court. Professor Wolpe is concerned that, for the first time in history, science is able to access data directly from the brain, as opposed to obtaining it from the peripheral nervous system.

A new approach to selling, known as “neuromarketing”, uses neuroscans to determine subjects’ responses to visual or aural stimuli and the effect those responses have on their desire to purchase goods. Professor Gemma Calvert, Managing Director of Neurosense Limited, a market research consultancy, specialises in the use of MEG and fMRI neuroscanning techniques for marketing purposes, such as predicting consumer behaviour.

Dutch marketing researcher Tjaco Walvis concludes that the brain’s recognition and retrieval of information about brands occurs in the same way that a Google search engine retrieves links related to a search term.  Read a MarketWire article on his research here.

To me, this marketing application of what is essentially exciting science is getting a bit too close to the “dark side” for my liking. In a previous article I mentioned the psychological and political aspects of applied neuroscience research, where brain monitoring is becoming an increasingly real possibility. Paul Wolpe alludes to this when discussing recent research into covert scanning devices that use light beams to scan an unsuspecting subject’s frontal lobe (see my previous post on Hitachi’s use of IR light to perform brain scans). I suppose we should now add consumer monitoring to the list.


