
Blaise Pascal’s Wager: A Game of Probabilistic Theism

Blaise Pascal, Mathematician (portrait via Wikipedia)

Apart from his legendary status as a mathematician and early inventor of a mechanical calculator, Blaise Pascal (1623-1662) is also known for his forays into early notions of probability theory, which he applied to a discussion of why betting on the actual existence of God has a certain logic to it, based on likelihoods and probabilities. This discussion, presented in his posthumously published Pensées – ‘Thoughts’ (1670) – and known popularly as Pascal’s Wager, has occupied the minds of philosophers and theologians over the ages, often attracting orthodox critics who suggested that such a discussion is somehow too blasphemous for (certain) Christians to seriously consider.

Pascal’s Wager proposes the following (excerpted from Pensées, Section III, Note 233, and summarised as numbered propositions in a Wikipedia article):

1. God is, or God is not. Reason cannot decide between the two alternatives.
Pascal asserts that we cannot use arguments based on logic to prove the existence, or otherwise, of God. The inference here is that there are only two possibilities: the actual existence of God, independent of reason, logic or belief, or the non-existence of God. Either possibility would need to be absolute, i.e. truly independent of the existence, or otherwise, of ourselves.

2. A Game is being played… where heads or tails will turn up.
Here Pascal is framing his wager as a simple binary event, based purely on chance, in which a person bets on his or her own future. What is clearly missing is the other player. Is it actually oneself (which suggests a rather conflicted view of personal existence) or, perhaps, God?

3. You must wager (it is not optional).
This third step is comparable to asking someone to choose either to move away from where they are standing or to stay where they are. If they do nothing (i.e. reject the request), they effectively accede to the second choice of staying put. The end result is the same as if they had consciously decided not to move away. This seems a slightly unfair premise to me: the choice presented is purportedly a pure, equally-weighted binary choice, when in fact the choice is forced whether the person agrees to participate in the wager or not. Thus, as Pascal rightly determines, the wager is not actually optional.

4. Let us weigh the gain and the loss in wagering that God is. Let us estimate these two chances. If you gain, you gain all; if you lose, you lose nothing.

If a belief in God is somehow proven correct, there is an assumption that the God of your faith will reward your belief. This is a rather self-fulfilling prophecy, based on the premise that the God you say you believe in is benevolent and will always reward believers. None of this is actually stated by Pascal, though, and must be inferred from the logic step itself.

The idea that betting on the proposition that God exists and later discovering that you were wrong is somehow a lossless outcome is not proven, at least as stated above. Very few people deal easily with the discovery of a false belief; the result is often regret, cynicism and loss of hope – hardly a lossless outcome.

5. Wager, then, without hesitation that He is. (…) There is here an infinity of an infinitely happy life to gain, a chance of gain against a finite number of chances of loss, and what you stake is finite. And so our proposition is of infinite force, when there is the finite to stake in a game where there are equal risks of gain and of loss, and the infinite to gain.

The logic here is more a suggestion that the reader take a position, and not just any position but the one supporting the existence of God. Pascal supports this suggestion by spelling out the alternatives – effectively a pitch for belief. Once again, he describes the wager as having “equal risks of gain and of loss”, which rather contradicts his claim, above, that losing actually loses nothing.
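Pascal’s reasoning here is, in modern terms, an expected-value argument. A minimal sketch, in my notation rather than Pascal’s: let p > 0 be one’s subjective probability that God exists, let c be the finite stake, and let L be whatever finite penalty (possibly zero) unbelief incurs. Then:

$$E[\text{wager for God}] = p \cdot \infty + (1 - p)(-c) = \infty$$
$$E[\text{wager against God}] = p \cdot (-L) + (1 - p) \cdot c \quad \text{(finite in every case)}$$

However small p is, an infinite prize swamps any finite stake – exactly the “infinite force” Pascal claims. The weak point, as noted above, is the assumption that the stake really is finite and that losing really “loses nothing”.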

6. But some cannot believe. They should then ‘at least learn your inability to believe…’ and ‘Endeavour then to convince’ themselves.

This is Pascal’s pitch for greater self-awareness and also an encouragement to take a position, as if the only reason for non-belief is some flaw in the reader’s character that is blocking their ability to believe what, to Pascal at least, is an acceptable notion.

More than anything, I see this as Pascal’s entreaty to take a “leap of faith” as a means of resolving uncertainties in life (e.g. questions such as “Why are we here?” and “What is life’s purpose?”) that can’t be analysed and explained through pure reason.

Uncertainty

Pascal discusses uncertainty in life extensively in Pensées, but the two categories that seem to me most significant are uncertainty in reason and uncertainty in scepticism. On the former, he states:

“There is nothing so conformable to reason as this disavowal of reason.” (Pensées, Note 272)

Apart from the fact that this seems to address a rejection of reason rather than mere uncertainty about it, the statement can be read in two ways: one reading is that scepticism is healthy, while another would promote the idea that people who reject the logical propositions he presents are uncertain about reason itself. My own interpretation is that he is commenting on the nature of reason and the fact that ultimately there is no escape from it, even for those who reject it.

In discussing uncertainty in scepticism, Pascal states:

“It is not certain that everything is uncertain.” (Pensées, Note 387)

This self-referential statement seems to be self-defeating, almost proving the case for uncertainty, although the fact that something is not certain is not exactly the same as something being uncertain. To illustrate the point, if I state:

“All citizens of Lebanon are speakers of Arabic”

it would be reasonable for a listener to remark that I couldn’t possibly be certain about the truth of my statement without interviewing all of Lebanon’s citizens, a practically impossible task. This is qualitatively different from a listener asking “Are you sure about that?”, which raises doubts about the statement’s truth but doesn’t actually contradict it.
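The distinction can be put compactly in epistemic notation (mine, not Pascal’s), writing C(p) for “it is certain that p”:

$$\neg C(p) \;\not\equiv\; C(\neg p)$$

“Are you sure about that?” asserts only ¬C(p), leaving p possibly true; an outright contradiction asserts C(¬p). Pascal’s sentence likewise claims only that certainty fails for “everything is uncertain”, not that the claim is false.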

My take on Pascal’s statement is that the claim that everything is uncertain has little to do with what we mostly observe in our daily lives. We begin each day certain that the sun will rise, that bridges will stay intact as we cross them and that the laws of physics, such as the gravitational force that keeps us attached to the Earth, will continue to apply. To live any other way would be to succumb to fear, uncertainty and doubt.

Of course, non-material notions such as uncertainty about another’s intent or sincerity are what I would call healthily sceptical, but they should never rule a person’s existence to the point where mitigation becomes impossible. Certainty goes hand in hand with hope, which has to be a good thing, most of the time.

Reference

Pascal, Blaise (1670). Pensées. English translation by John Walker (1688).

Impostor Syndrome Blues

We all know that feeling: a sometimes momentary, sometimes extended sensation in which we question our own status, skill or understanding of something, concluding that we don’t really know what we think we know and are therefore “faking it” for the benefit of others. This phenomenon has been labelled Impostor Syndrome, an obvious reference to a state of mind that derives from serious doubts about one’s true sense of self. Although the label is usually applied to social situations, it can have serious repercussions for an individual’s mental health, particularly where the sensation is prolonged and directly affects how that individual functions in society or deals with internal thoughts of self-doubt.

I’m certainly no stranger to this. I have often caught myself “winging it”, where the best course of action in uncertain circumstances seemed to be to just keep talking or moving forward, hoping that some sense of clarity about what’s unfolding would spontaneously emerge. I usually see the idea of “making it up as I go along” in a rather self-deprecating way, but on the other hand it can be surprisingly satisfying when an improvisation actually works out OK in the end.

Impersonating someone other than yourself is fairly common – actors make a living out of doing just that, after all – so it’s not so surprising that cinema provides many examples of people dealing with identity.

Akira Kurosawa’s seminal film Kagemusha is based on the idea of an impostor acting as a surrogate for a samurai warlord in 16th-century Japan. The film depicts the impostor’s personal journey: his initial fears and doubts, accession to real power, and subsequent paralysis when faced with a real conflict. The Japanese term “kagemusha” literally means “shadow warrior”, and it is usually used in the context of a “decoy” strategy in a political power play, as depicted in the film. It provides some interesting insights into the mind of a person placed in the situation of assuming the identity of another. A similar concept is explored in Joseph Losey’s 1963 film The Servant, starring Dirk Bogarde – although there the act of taking on the master’s role is a deliberate strategy.

The Fear of Self-deception

Underlying Impostor Syndrome appears to be a fear of deceiving oneself. In Hamlet, Polonius advises his son Laertes to be authentic to himself: “This above all: to thine own self be true”. Shakespeare’s message in this passage is that not lying to yourself is important, even strategic, because he then has Polonius explain that “Thou canst not then be false to any man.” The corollary is that if you can’t be honest with yourself, what hope have you of being honest with the people you meet? I like this idea of self-honesty because it reminds me that humility is almost always better than hubris.

A recent article in news.com’s online Lifestyle pages, titled “Faking it? You’re not alone. Imposter syndrome is more common than you’d think”, explores this phenomenon and provides examples from a range of commentators including film celebrities, teachers and scientists. The writer concludes that “we’re all faking it”, and I tend to agree. It’s the people who claim to be 100% authentic one hundred percent of the time who worry me the most.

Mind Boxing: An internal martial and healing art

Mind Boxing is a loose translation of the Chinese internal martial art known as Yìquán (literally “intention fist” or “mind boxing”). It’s called an internal martial art because it comes from the nèijiā tradition, which includes the styles that focus on internal strength aspects (nèijìng), such as mental and spiritual qualities and the use of qì (breath, spirit), rather than the external (wài) aspects of muscular strength and physical interaction. Modern practices such as Tàijíquán (t’ai chi), Qìgōng, Reiki and Aikido are all associated with this internal strength tradition, and their popularity around the world attests to its high status in both martial arts and healing practices.

Yìquán master Wáng Xiāngzhāi – photo courtesy Wikipedia

Yìquán‘s principal proponent was a Wudang-style Xíng Yì Quán (literally “form-will boxing”) master, Wáng Xiāngzhāi, who also popularised a postural practice called Zhàn Zhuāng (literally “pole standing”). Its main purpose is to strengthen postural muscles that are not normally under our conscious control but can be influenced by exercises designed for that purpose and undertaken with intent (the literal meaning of yì). What distinguishes Yìquán from other martial arts is Master Wáng’s belief that it should be learned as a mental form, without direct reference to antecedent styles – hence the translation of Yìquán as “mind boxing”.

I find this concept quite intriguing. Most martial arts follow a fairly predictable pattern in which students are expected to recreate patterns of movement without question, to the point where muscle memory kicks in and movement happens as an almost reflexive response to a given context. Yìquán seems to reverse this idea, asking the student to develop active movements out of postural forms (Zhàn Zhuāng) and breathing patterns. Master Wáng expressed his own doubts that others were able to master his approach, saying that they failed to get the point of the lessons that Yìquán practices try to teach.

Cognitive Computing is Closing In

IBM’s Watson computer (photo licensed through Wikimedia Commons)

A few years ago I did some research on intelligent tutor systems, specialised software programs that could help a learner to grasp the basic concepts of a discipline, learn a specific procedure, or just organise documents more intuitively or accessibly.  Back then, the Holy Grail of artificial intelligence (AI) was to somehow achieve machine consciousness, that is, to build a machine that demonstrated awareness of its own existence and its surroundings including coherent interaction with humans, and the ability to adapt to changing circumstances in an intelligent way. Above all, an AI machine demonstrating “consciousness” would have the ability to learn over time.

Ray Kurzweil used the term “singularity” to mark the point in some future development when a machine successfully demonstrates all of the above characteristics, perhaps ushering in a whole new era of human/machine existence. That date still seems a long way off and the predictions of pundits in the pioneering days of AI about the speed of development of AI systems (think 2001: A Space Odyssey’s HAL 9000 – the film was released back in 1968) seem almost laughable now.

Other researchers in the field have concentrated on building systems that demonstrate some aspects of intelligent behaviour, such as skill in playing chess.  IBM has been at the forefront in this field for many years and achieved success at Computer Chess World Championship level with Deep Thought (1988-89) and Grandmaster level with Deep Blue’s defeat of Garry Kasparov in 1997.  Both machines relied on brute-force computing to look up to 10 moves ahead in a chess match.  That is, they used rules-based reasoning and massive computing power to achieve success while operating within a closed logic space.
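To make “brute-force computing” concrete: the core idea is a depth-limited minimax search over the game tree, evaluating every line of play to a fixed depth and picking the move whose worst outcome is best. Here’s a toy sketch in Python – illustrative only, since Deep Blue added alpha-beta pruning, hand-tuned evaluation functions and dedicated hardware:

```python
# Toy sketch of the brute-force idea behind Deep Thought and Deep Blue:
# a depth-limited minimax search over a game tree. The "game" here is just
# a nested tuple whose leaves are position evaluations; real engines add
# alpha-beta pruning, opening books and hand-tuned evaluation functions.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):      # leaf: static evaluation score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two plies of lookahead: the maximizer picks the branch whose worst
# (minimizer's) reply is best -- the left branch here, with value 3.
game_tree = ((3, 5), (1, 9))
print(minimax(game_tree))  # -> 3
```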

Dealing with the subtleties of natural language, however, has proven to be a much more daunting task. IBM’s “Watson” is a computer system that Wikipedia describes as “specifically developed to answer questions on the quiz show Jeopardy!” The show’s format requires contestants to discover the correct question to a supplied answer, a task that relies heavily on deductive reasoning and access to a large number of known facts about popular culture and discipline-specific knowledge. Watson is an amalgam of software written in languages including C++, Java and Prolog, built around IBM’s proprietary DeepQA architecture. The diagram below illustrates some of the complexity involved in DeepQA’s analysis of language:

High-level architecture of DeepQA used in Watson
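IBM’s published descriptions of DeepQA break the job into question analysis, hypothesis (candidate) generation, evidence retrieval and scoring, and final ranking. The Python toy below mimics that flow in miniature; the function names and the crude substring scoring are my own illustrations, not IBM’s code:

```python
# A toy sketch of the DeepQA flow: generate many candidate answers
# ("hypotheses"), score each against the available evidence, and return
# the highest-confidence candidate. Purely illustrative -- the real
# pipeline combines natural-language parsing and hundreds of scorers.

def generate_candidates(clue, corpus):
    # Hypothesis generation: any entry whose text shares a term with the clue.
    return [title for title, text in corpus.items()
            if any(word in text for word in clue.lower().split())]

def score(candidate, clue, corpus):
    # Evidence scoring: fraction of clue terms supported by the entry's text.
    words = clue.lower().split()
    return sum(w in corpus[candidate] for w in words) / len(words)

def answer(clue, corpus):
    candidates = generate_candidates(clue, corpus)
    return max(candidates, key=lambda c: score(c, clue, corpus))

corpus = {
    "Blaise Pascal": "french mathematician who built an early calculator",
    "Alan Turing": "english mathematician who formalised computation",
}
print(answer("This Frenchman built an early mechanical calculator", corpus))
# -> Blaise Pascal
```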


Ingress could really augment your reality

Macquarie University – photo Wikipedia

It seems that Augmented Reality (AR) apps are about to change our daily lives in a major (epic?) way. A recent series of AR workshops convened by Macquarie University’s Learning and Teaching Research Cluster group (LTRC) elicited participant responses ranging from idle curiosity to something approaching shock and awe – not so much because of the graphic awesomeness of some of the projections used as exemplars, but rather because of the enormous potential for development that became apparent as attendees came to realize just what this technology can offer. AR has only recently begun to emerge from a hacker community setting into the harsh reality of commercial networks with R&D dollars to burn on anything that currently seems to be capturing the minds of early adopters and netizens generally.

Google is backing and promoting an AR app called Ingress that leads participants to geo-locations in a city, then lets them “hack” each location and link it to other locations, capturing territory in the process. It’s a game you have to play on a real landscape, and it involves physical and mental effort in spades. Here’s a longish video that takes you through the Ingress setup:

They’ve chosen to promote the game through a series of viral video clips that provide more questions than answers about what the game is all about. Actually, a quote from the Ingress Initial Briefing page can probably explain the purpose of the game better than I can:

“The primary goal of the game is to defend against the takeover of the human race by an unknown “Shaper” force or, depending on your perspective, to assist in an “Enlightenment” of mankind through an alliance with the Shapers.”

Basically, you have to choose between the Resistance and Enlightenment factions, not unlike Tris choosing Dauntless over Erudite in Divergent, the first book in Veronica Roth’s futuristic trilogy. Read Rachel Metz’s excellent overview of Ingress in the e-mag MIT Technology Review. The game is currently invitation-only and runs on Android-based phones for the moment, with an iOS version promised in the near future.

While waiting, iPhone owners may want to opt for a similar cityscape game called Shadow Cities. The free app can be downloaded from the iTunes App Store. Here’s the trailer:


Mapping and Predicting Fluid Intelligence

British-American psychologist Raymond Cattell – photo Wikipedia

The idea of dividing Spearman‘s theory of general intelligence (g) into two separate concepts, fluid (gF) and crystallized (gC) intelligence, is not particularly new. Psychologist Raymond Cattell, best known for his work on defining and analysing personality types and for the Cattell Culture Fair Test, published a paper on his theories of intelligence back in 1941 (Cattell, 1941). Horn and Cattell (1967) later described fluid intelligence as “…the ability to perceive relationships independent of previous specific practice or instruction concerning those relationships.” That is, it’s the type of intelligence we bring to bear on previously unseen problems we encounter in daily life, problems that may require abstract thinking. Contrast this with crystallized intelligence, which relies on previous experience and long-term memory, and may also be influenced by cultural factors.

In a previous post from early 2009 I discussed the use of functional Magnetic Resonance Imaging (fMRI) techniques to map human neural activity during visual recognition of previously viewed scenes. Back then, researcher John-Dylan Haynes used Blood Oxygenation Level Dependent (BOLD) fMRI imaging to determine what item a subject might be thinking about. More recently, researchers at universities in the US and Slovenia have used similar BOLD scanning techniques to map global connectivity of the prefrontal cortex in order to predict cognitive control and intelligence. Their research is based on the premise that “individuals with higher intelligence ha[ve] more efficient whole-brain network organization” (van den Heuvel et al., 2009). The researchers first tested their 94 young adult subjects for general intelligence using Cattell’s Culture Fair Test. They then mapped the global connectivity of their subjects’ brains, focusing on specific regions in the Lateral Prefrontal Cortex (LPFC), and correlated their findings with the results of the fluid intelligence tests mentioned above. This allowed them to make statistically significant predictions about their subjects’ levels of intelligence. The research concluded that “a specific region’s global connectivity [i.e. the relative strength of its neural connections] predicts intelligence” (Cole et al., 2012).
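Stripped of the fMRI specifics, the analysis reduces to two steps: compute a region’s global connectivity (its mean correlation with every other region) for each subject, then correlate that value across subjects with their intelligence scores. A simplified sketch with synthetic data (numpy only; the actual study involved extensive preprocessing and cross-validated regression):

```python
import numpy as np

# Synthetic illustration: per subject, compute one region's "global brain
# connectivity" (mean correlation with all other regions), then correlate
# that across subjects with fluid-intelligence scores. The data here is
# random noise, so r will hover near zero.

rng = np.random.default_rng(0)
n_subjects, n_regions, n_timepoints = 94, 20, 120

gF = rng.normal(size=n_subjects)   # stand-in fluid-intelligence (gF) scores
gbc = np.empty(n_subjects)         # global connectivity of the chosen region

for s in range(n_subjects):
    ts = rng.normal(size=(n_regions, n_timepoints))  # fake BOLD time series
    corr = np.corrcoef(ts)         # region-by-region correlation matrix
    roi = 0                        # stand-in for an LPFC region
    gbc[s] = (corr[roi].sum() - 1.0) / (n_regions - 1)  # mean off-diagonal r

r = np.corrcoef(gbc, gF)[0, 1]     # does connectivity track intelligence?
print(f"connectivity-intelligence correlation: r = {r:.2f}")
```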

The implications of this research for learning and teaching are not entirely clear. Intelligence testing using any methodology can be contentious at the best of times. Cattell’s attempt to ameliorate the effects of culture on intelligence testing, the Culture Fair Test, has its fair share of both supporters and critics, the latter generally commenting that measures of intelligence should not be based on single assessments, no matter how well those assessments are designed to counteract biases such as culture and economic disadvantage in education.

From my own perspective, an undesirable outcome of the research discussed above would be the use of such quantitative measures of predicted intelligence to stream children into a range of vocational education programs supposedly best suited to their predicted future abilities and consequent needs.  This approach assumes that young adults are likely to be incapable of making wise choices regarding their own futures  (which is, regrettably, already the situation for some children whose parents see them as a projection of themselves and their social status).  The freedom to choose one’s own path, and to make changes along the way is, in my opinion, a fairly basic human right.  Fortunately, the ability to successfully re-invent yourself has some recent research to back it up; neural plasticity studies, as discussed by Norman Doidge (2007) in “The Brain that Changes Itself“, at least offer the hope that most human brains can successfully re-organize, even in late adulthood, according to changes in a person’s circumstances, immediate environment, or due to training specifically designed with that purpose in mind. Stanford’s Carol Dweck (2000) has her own take on how people see their own level of intelligence (as a personal “mindset”) and how they can realise their own potential through approaches based on personal growth.

References
Cattell, R. B. (1941). Some theoretical issues in adult intelligence testing. Psychological Bulletin, 38, p. 592.

Horn J. L. & Cattell R. B. (1967). “Age differences in fluid and crystallized intelligence.” Acta Psychologica, 26, pp. 107-129.

Cole, M. W., Yarkoni, T., Repovš, G., Anticevic, A. and Braver, T. S. (2012). Global Connectivity of Prefrontal Cortex Predicts Cognitive Control and Intelligence. The Journal of Neuroscience, 32(26), pp. 8988-8999.

van den Heuvel, M. P., Stam, C. J., Kahn, R. S. and Hulshoff Pol, H. E. (2009). Efficiency of functional brain networks and intellectual performance. The Journal of Neuroscience, 29, pp. 7619–7624.

Doidge, N. (2007). The Brain that Changes Itself. Penguin Books, London.

Dweck, C. (2000). Self-theories: their role in motivation, personality, and development. Hove: Psychology Press


Give your keyboard the finger (or the bird)

In a previous post I discussed the development of a gestural interface that resulted from a mock-up used by Tom Cruise in the 2002 film Minority Report, set in the year 2054.  A speculative science consultant on the film production team, John Underkoffler, then went on to create a real version of the fictional technology, called g-speak.  Below is a TED talk from 2010 where John uses g-speak to discuss the future of the User Interface. As he states in the presentation, a lot of what we want to do with computers is “inherently spatial”, so a gestural interface is a bit of a natural as far as design goes:

Two years down the track from that presentation, the technology has moved relentlessly onwards. Here’s a clip about Leap Motion that makes the concept of a glove-less gestural interface a potentially commercial reality:

Critics will obviously point out that it’s probably less physically taxing to use a desk-bound mouse than to gesticulate in the air, but I see this interface as one component of a multi-gestural/textual/audio-based approach to interfacing with computers. A really useful system would allow you to speak, type or gesture as appropriate to the creative context. In the 90s I saw a demo of an electrical engineer manipulating switch connections in 3D on a mock-up of the TEPCO power grid that serviced the greater Tokyo area. Back then the interface required a wired glove to move the virtual connections around – a simpler version of the Underkoffler interface above. A Leap Motion interface would accomplish the same thing (and more) without the glove.
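As a sketch of that design idea – route each task to whichever input channel suits it – here is a minimal, entirely hypothetical modality dispatcher (no real device SDK is implied):

```python
# Hypothetical sketch of a multi-modal input layer: each event declares its
# modality, and a handler is chosen per task rather than forcing everything
# through one device. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str   # "gesture", "speech" or "keyboard"
    payload: dict

HANDLERS = {
    "gesture": lambda p: f"rotate model by {p['angle']} degrees",
    "speech": lambda p: f"run command: {p['command']}",
    "keyboard": lambda p: f"insert text: {p['text']}",
}

def dispatch(event: InputEvent) -> str:
    return HANDLERS[event.modality](event.payload)

print(dispatch(InputEvent("gesture", {"angle": 45})))
print(dispatch(InputEvent("speech", {"command": "save scene"})))
```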

The Leonar3Do system uses a 3D mouse device called the “bird” to accomplish many of the same tasks as Leap Motion. Its creators describe the system as “the world’s first desktop VR kit.” Well, maybe not actually the first consumer VR kit, but it’s certainly impressive in how quickly it lets you put together complex shapes in 3D – something that would generally take a lot of button-pushing and mouse-sliding in a 3D program such as Blender.


Blink twice and you’re on the internet: Augmenting reality with new technologies

Arizona State University‘s Dr Mina Johnson-Glenberg recently made some interesting observations in her presentation, titled Embodied, Gesture-based Learning – Mixed Reality and Serious Games at a Learning & Teaching Research Cluster meeting here at Macquarie University.  Her research looks at the impact of embodiment and gesture on the learning that occurs in virtual settings. Some of her research programs involve getting students away from computer screens and into display spaces that require gestural input to make things happen.

According to Dr. Johnson-Glenberg we’re about six years away from having commercially available contact lenses with built in displays, capable of putting up a net browser or augmenting the visual reality with additional text-based information.

Here’s an amusing and slightly dystopian view from Eran May-raz and Daniel Lazo on how augmented reality might work in a personal sense:

Sight from Vimeo.

The video clip from Keiichi Matsuda, below, is another commercially-enhanced view of how augmented reality might appear from the user’s point of view:

Augmented (hyper)Reality: Domestic Robocop from Keiichi Matsuda on Vimeo.

The Google Glass project incorporates a single-eye heads-up browser/data display into a lightweight headband. The latest version also captures and shares video. Like Dr Johnson-Glenberg, Google seems keen to get people away from the desktop and more into the (augmented) environment. The design team made deliberate choices about the positioning of the image viewer (above the direct line of sight, monocular only), presumably to discourage people from viewing while driving or from walking into traffic. Like mobile phones, this technology offers incredible advantages, such as access to instant information, but it also comes with its own risks in terms of personal safety.


The MOOC experience: Learning in free online courses

A Massive Open Online Course (MOOC) offers anyone with the time and an internet connection the chance to explore a large range of educational topics in depth, usually for no cost apart from net connection fees. Available topics range from complex courses on micro-electronics that duplicate face-to-face courses taught at institutions such as MIT, to less formal, peer-driven courses based on a community of enquiry model.

One of the great things about the internet’s expanding web of connections around the world is the increasingly large number of free online education opportunities that come with it. The MIT/Harvard consortium called edX has recently added UC Berkeley to the mix, making it the first consortium to include a public university in offering free not-for-credit courses, according to this article in the Los Angeles Times.

The MOOC early-adopter institutions include Stanford University in California, which offered videos of Masters course lectures in the late 1990s, MIT with OpenCourseWare and more recently with MITx, Yale University’s Open Yale courses, and the Open University‘s LearningSpace in the UK.

Peer-driven learning spaces rely on the enthusiasm of learners who are curious enough about a topic to freely offer what they know already or to join a group of learners who may help them to learn further.  A good example is Peer to Peer University (P2PU), which describes its mission as providing a place where: “people work together to learn a particular topic by completing tasks, assessing individual and group work, and providing constructive feedback”.

Acquiring knowledge for its own sake is one thing, but some learners want acknowledgement of the time and effort they put into learning a new skill or discipline, either to reinforce feelings of self-worth or to enhance their future job prospects.  Mozilla Open Badges caters for learners who want visible recognition of the skills they have acquired online or out of school by offering badges that represent attainment in online courses or projects run by affiliated members.


Bisociation: Koestler’s Take on Humour and Scientific Enquiry

Buckminsterfullerene (C60) – image via Wikipedia

One of Hungarian-born Arthur Koestler‘s best-known non-fiction works is probably The Act of Creation, written in 1964. In this seminal text he explores the notion of human creativity from a number of angles, including humour, science and the arts. Koestler’s premise is that creative acts are likely to involve the synthesis of concepts derived from two or more different bodies of thought or disciplines that are not normally related, a process he called “bisociation”. The concept has since been formalised as “Conceptual Blending/Integration” theory within cognitive science by Gilles Fauconnier and Mark Turner. A relatively recent example of this associative process in science is the discovery of the Carbon 60 (C60) molecule in 1985 by Harold Kroto, Richard Smalley and Robert Curl at Rice University. They determined that a stable allotrope of carbon consisting of 60 carbon atoms could be produced with a high-energy laser directed at a graphite disk, but were initially unable to determine its exact molecular structure.

Author Arthur Koestler – image via Wikipedia

The team reasoned that the structure had to be spherical, but it took Kroto’s memory of a visit to Expo 67 in Montreal, where he walked into a geodesic dome designed by architect Buckminster Fuller, to trigger the realization that the molecule must be based on an icosahedron (specifically, a truncated icosahedron) with 60 vertices: the exact shape of a standard soccer ball and, logically, the most stable arrangement of that number of carbon atoms. This insight led the team to call the C60 allotrope “buckminsterfullerene” in honor of the genius from another discipline, and it was the starting point for the whole new science of nanotechnology.
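The soccer-ball intuition checks out combinatorially. A truncated icosahedron has 12 pentagonal and 20 hexagonal faces, and Euler’s polyhedron formula confirms the vertex count that matches C60’s 60 atoms:

$$F = 12 + 20 = 32, \qquad E = \frac{12 \cdot 5 + 20 \cdot 6}{2} = 90, \qquad V = \frac{12 \cdot 5 + 20 \cdot 6}{3} = 60$$
$$V - E + F = 60 - 90 + 32 = 2$$

(Each edge is shared by two faces and each vertex by three, hence the divisors.)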

So how does humour come into the picture?  Koestler discusses the mechanism of humour in similar terms to the synthesis of ideas mentioned above. “Punchline” humour, for example, depends on a wholly unexpected conclusion for its effect.  It’s as if you’ve been led down a garden path then directed to turn left, only to be confronted by an elephant’s knee (or a spring-loaded boxing glove). Humour uses exaggeration, imagination and absurd juxtapositions of seemingly unrelated things to create a comic effect, somewhat like the bisociative processes discussed above that are the basis of most creative acts in Koestler’s analysis.

One of the highlights of Open Day at Macquarie University is the Chemistry Magic Show put on in one of the lecture theatres by dedicated members of the School of Chemistry every year.  The master of ceremonies manages to combine humour with science in a wholly engaging way while showing the audience some rather dramatic chemical processes – each of which manages to supply its own punchline.  If only all science subjects were taught that way …


