Computers’ growing ability to process large amounts of graphic data has enabled some exciting new fields over the last fifteen years or so. Light field photography, sometimes referred to as “synthetic aperture” photography, is one area that would not have been possible without computing power capable of manipulating large graphic data files in a reasonable amount of time. Most contemporary laptops with a graphics card are more than up to the task, but prior to 1990 this ability was generally confined to specialist graphics labs such as scientific visualisation sites or, more recently, commercial operations such as Adobe’s Advanced Technology Labs (ATL).
The last ten years have seen the gradual convergence of digital photography with computer-generated imagery (CGI) applications, driven largely by commercial CGI production companies such as Pixar (producers of the animated features Toy Story 1 and 2) and Animal Logic (Happy Feet), online games producers and visualisation labs.
NASA recently released a series of photographs of its latest Mars project, the Phoenix Mars Lander, which included something they call a “Flyover Animation”. This animation was compiled from the data captured by the Lander’s two-lens Surface Stereo Imager camera, then rendered as a QuickTime movie. The flyover appears to show the Lander in a tracking shot as it moves slowly to the right across the landing site. In fact, the movie exploits a graphics approach closely aligned with light field photography known as Image Based Rendering (IBR), where a series of 2D images are combined into a 3D graphic object which can then generate “novel views” through interpolation of the data. In 1999 two researchers at Stanford University used a similar technique to produce a light field photographic series of “Night” – a statue created for the Medici Chapel in San Lorenzo by Michelangelo, around 1534.
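To get a feel for what “interpolating novel views” means, here is a deliberately minimal sketch (not NASA’s or Stanford’s actual pipeline): the zeroth-order case of IBR simply blends the two nearest captured views, which is roughly valid for distant scenes. Real IBR systems warp pixels using depth or disparity before blending; the function name and parameters here are illustrative only.

```python
import numpy as np

def novel_view(view_a, view_b, t):
    """Synthesise a 'novel view' between two captured 2D views.

    A toy sketch: linearly blend the two nearest captured images,
    with t in [0, 1] placing the virtual camera between them.
    Proper IBR would first warp each view by per-pixel disparity.
    """
    a = np.asarray(view_a, dtype=float)
    b = np.asarray(view_b, dtype=float)
    return (1.0 - t) * a + t * b
```

Even this crude blend shows the core idea: once views are treated as data, cameras that never existed can be simulated by arithmetic on the views that did.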
Both light field photography and IBR approaches are examples of plenoptic modelling, a term originally coined by Edward Adelson and John Wang in a 1992 paper titled “Single Lens Stereo with a Plenoptic Camera”. The word plenoptic is derived from the Latin word root for “complete” or “full”, combined with the Greek word root for “view” or “sight” – which makes the term rather self-evident. In 2005, Stanford University researchers and others implemented Adelson and Wang’s proposed plenoptic camera (which had never progressed beyond a basic non-portable prototype) as a hand-held plenoptic camera.
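The signature trick of a plenoptic camera is refocusing after the shot: because each microlens records which direction light arrived from, the sub-aperture views can be shifted and averaged to place the focal plane wherever you like. The sketch below is a simplified shift-and-add refocus over a 4D light field; the array layout and the `alpha` parameter are assumptions for illustration, not the Stanford camera’s actual processing code.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocusing by shift-and-add.

    lightfield: 4D array of shape (U, V, H, W) -- a grid of
    sub-aperture views, one 2D image per (u, v) lens position.
    alpha: sets the virtual focal plane; each view is shifted in
    proportion to its offset from the central view, then all
    views are averaged. Objects at the matching depth align and
    sharpen; everything else blurs.
    """
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=float)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(lightfield[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```

The point is that focus becomes a post-processing decision rather than something fixed at the moment of exposure.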
So what does all this have to do with educational technology? Well, educators have been thinking a lot about engagement lately. Courses delivered online compete with online games and virtual environments that may have little or suspect educational value. One of the problems with online education is that it’s difficult to strike a good balance between text-based and image-based resources. Too much text read off a relatively low-res screen becomes tiring (and even headache-inducing) after a short time, while graphics-rich pages may lack the depth necessary for deep research. A partial solution would be to somehow enhance the information carried in the visual elements of a page. If a picture is worth the proverbial thousand words, then a plenoptic picture has got to be worth considerably more.
Making web-based information more engaging has as much to do with presentation as it does with the content itself. The ability of blogs like the one you’re currently reading to aggregate dynamic information such as world news through RSS feeds has little engagement value if the presentation consists largely of pages of text with the occasional token graphic thrown in. One group has recently sought to address this problem by developing an application that builds simulated broadsheet newspaper pages from an RSS feed. The system is able to create an adaptive, simulated hard-copy version of The New York Times, for example, drawing “inspiration from newspaper design” for the broadsheet’s general layout and authentic-looking masthead.
Why does this matter? Well, if your learning topic had to do with ethical journalism, you could learn a lot from the way newspaper editors choose to present disparate news items to their readers – in terms of their layout, prominence or placement on the page, or even which page they ended up on. Comparing the relative “newsworthiness” of items such as celebrity marriages and large-scale human disasters (e.g. tsunamis) in newspapers presented as authentically as possible can tell you a lot about what the proprietors think, or what they want you to think. It all comes down to how we process information. Learners in the 21st century have so much more information coming at them than their forebears that they need new strategies for arranging and filtering it, to avoid an overload situation, if nothing else. More on this topic soon…