
Event: The Trotter Paterson Lecture

Vision impossible!

Pictured: Colin Blakemore FRS FMedSci is professor of neuroscience and philosophy in the School of Advanced Study, University of London, and Emeritus Professor of Neuroscience at the University of Oxford. He is a former chief executive of the British Medical Research Council. He helped establish the concept that the brain is ‘plastic’ and adaptive, allowing changes in organisation.


Peter Phillipson introduces the background to a groundbreaking lecture on vision that the STLD attended in February. The Society of Light and Lighting (SLL), established in 1909, arranges several lighting lectures and events throughout the year, often with three or four speakers talking about the latest trends in lighting. But every two years there is a formal lecture given by a single speaker in the traditional academic style, and many of these are based on peer-reviewed ‘white papers’ or expert experience. When I first went to one, 20 years ago, I did not think I would run them one day, and I was honoured to be asked. I try to invite the STLD, ALD and students to these events when they are of relevance. I have always seen lighting as one thing, but unfortunately the lighting industry is still fragmented into quite distinct parts.

The lecture is named after two of the pioneers of the IES – the former name of the SLL – from the 1920s, and in 1949 it was announced in Nature that the Trotter Paterson Lecture would be held biennially. The inaugural lecture was in 1951, and previous speakers have included the Nobel Prize-winning physicist Sir Lawrence Bragg.

This year’s lecture took place at the Bishopsgate Institute in London. Fittingly, it was the first venue in London to be built with electric lighting.

Two past speakers at the Trotter Paterson Lectures joined Colin Blakemore at the end of this year’s lecture: Prof John Barbur, second from left, spoke about vision in 2010, particularly about the pupil of the eye, and our own Bernie Davis, second from right, spoke in 2003 about the lighting of large televised events. They are pictured with Dr Kevin Kelly, left, then President of the SLL; Colin Blakemore, centre; and Peter Phillipson, right, who introduced Prof Blakemore and is in charge of the Trotter Paterson Lecture.

More than meets the eye

Words: Francis Pearce Photographs: John O’Brien

Francis Pearce summarises the recent SLL Trotter Paterson Lecture delivered by Professor Colin Blakemore. This is a revised article, which originally appeared in ‘The Lighting Journal’, published by the Institution of Lighting Professionals (ILP), in April 2014. There is more to vision than meets the eye. Very little of what is happening before our eyes actually reaches our brains, but what does is translated into a subjective and mainly imagined view of the world. Furthermore, what we think we ‘see’ is semantic, not visual, in nature, and it is computed from tiny fragments of information, not whole, flowing images.

That was the gist of this year’s SLL Trotter Paterson Lecture, titled ‘Vision Impossible’, delivered by leading neuroscientist Professor Colin Blakemore, Director of the Institute of Philosophy’s Centre for the Study of the Senses, which pioneers collaborative sensory research by philosophers, psychologists and neuroscientists. As far as vision is concerned, ‘the conventional goal of scientists and philosophers is to understand how our continuous, apparently veridical experience of the world is generated from the retinal image’: their task ‘is to account for the miraculous transformation that converts so little into so much’.

“There is a contradiction between what we know goes on in our head and the wonderful seamless view of the world that we have,” says Blakemore. While we experience the world subjectively like a detailed, real-time video stream, in reality our visual experience is ‘invented’, created from tiny, disjointed packets of data. “Visual experience is discretely and sparsely informed by data from our eyes. Shifts of gaze occurring about three times a second deliver data dumps to the brain, with most of the information content concerned with the portion of the image falling on the central fovea of the eye. During each snapshot, the brain gathers, encodes and stores only a tiny amount of information.”

Vision allows us to infer what the outside world is like from the image on the retina of the eye, but its optics present a series of problems. For example, you can never derive with certainty the true shape of an object in 3D space because the image is two-dimensional; there is no way of disambiguating from the image alone.
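As a small illustration of that ambiguity (a minimal sketch using an assumed pinhole-camera model, not material from the lecture), the short Python snippet below shows how a nearby point and a scaled-up copy of it placed twice as far away land on exactly the same spot in a two-dimensional image, so the image alone cannot reveal which three-dimensional arrangement produced it:

def project(point, focal_length=1.0):
    # Pinhole projection of a 3-D point (x, y, z) onto a 2-D image plane.
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

near_point = (0.1, 0.2, 1.0)                  # part of a small, nearby object
far_point = tuple(2 * c for c in near_point)  # a larger object, twice as far away

print(project(near_point))  # (0.1, 0.2)
print(project(far_point))   # (0.1, 0.2) -- identical image point, different 3-D layout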

In the 17th century, René Descartes was among the first to observe the retinal image directly and to understand the optics correctly, based on an experiment with an ox’s eye from the abattoir. He imagined that the formation of the image was an essential part of the process of understanding the world, and while the detail of his idea was absurd, the principle dominates current thinking.

Descartes’ Dioptrics describes the parts of the eye, including the pupil, the interior ‘humours’ that refract light on to the retina, inverting the image, and the optic nerve, which, in his terms, transmits impressions of external objects to the mind or soul located in the brain. “Imagine his reaction to seeing a dead part of the body capturing and internalising a view of the world,” says Blakemore.

Pictured: A single rod in the eye can detect light at levels lower than 10 photons.

Through experiment, we know that the fovea, at the centre of the macula region of the retina, is responsible for sharp central vision. The retina contains two types of photoreceptors, rods and cones, and in the 1990s, Trevor Lamb and Edward Pugh were able to demonstrate that individual rods are capable of creating a signal when they absorb a single photon.

But half the nerve fibres in the optic nerve carry information from the fovea, and the quality of the information coming from peripheral vision drops off sharply with distance. Out of the whole picture that we think we see, the high-resolution region of our vision is only equivalent to about the width of a thumbnail at arm’s length. This is what we use to ‘sample’ the world about three times a second, our eyes moving involuntarily by as little as half a degree and as much as 50 degrees each time. Not only is our vision filtered, it is moving the whole time. We piece together our view of the world from narrow snapshots.
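To put a rough figure on that thumbnail comparison (the measurements below are assumptions chosen for the arithmetic, not numbers given in the lecture), a thumbnail of about 1.5 cm seen at roughly 60 cm subtends only around 1.5 degrees of visual angle, a sliver of a visual field spanning well over 180 degrees:

import math

# Assumed, illustrative measurements (not taken from the lecture):
thumbnail_width_cm = 1.5   # approximate width of a thumbnail
arm_length_cm = 60.0       # approximate viewing distance at arm's length

# Visual angle subtended: angle = 2 * atan(size / (2 * distance))
angle_deg = math.degrees(2 * math.atan(thumbnail_width_cm / (2 * arm_length_cm)))

print(f"High-resolution (foveal) region: about {angle_deg:.1f} degrees wide")
# Roughly 1.4 degrees, sampled about three times a second as the eyes jump around.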

Functional magnetic resonance imaging (fMRI) is helping to map the brain and track activity by contrasting blood flow in its different regions as we perform tasks such as looking at a stationary set of dots and a set that is moving. About a third of the human brain is involved with processing vision. There are more than 35 visual areas, each specialising in analysing an aspect of visual stimulus such as colour, movement and object recognition. But while we know that ‘information comes into the primary visual areas, how it is distributed doesn’t explain how we see,’ says Blakemore. “There is a gulf of understanding between the nerve cells responding to things in the image and the owner of those nerve cells having visual experience. We just don’t understand that process.”

Hermann von Helmholtz, regarded as the first to study visual perception, concluded that the visual information received by the eye is so poor that our view of the world can only be inferred. “The brain is essentially a computational instrument,” says Blakemore. “One question you can start to ask is how sophisticated the computation is. An important computation that Helmholtz recognised is that, in order to have a stable view of the visual world, you need to know whether movements that happen on the retina are due to movement of things in the world or movements of you.”

A phenomenon known as change blindness, in which a change in a visual stimulus goes unnoticed by the observer, shows that ‘the appearance of something distracting can apparently mask or conceal high-contrast changes in the rest of the image’, but there is something else happening as well: “We are extremely bad at detecting changes of scenes,” says Blakemore.

Many psychologists and physiologists think we must be recording our eye movements and piecing views together, but to do this, we would have to remember the visual past. In computational terms, this is a nightmare and it is wrong, says Blakemore. “We lose everything from the past except what we are attending to, which is 40 or 50 bits of information out of the megabytes of stuff we are absorbing... When you see a room you are constructing hypotheses about the world that are not being visually sustained, but sustained by some kind of semantic memory.”

Blakemore maintains that ‘vision is not just a passive “feed-forward” computation’. “Our visual experiences are informed by our understanding of the nature of the world, which is derived in part from evolution – successive genetic changes that build the visual system and retain knowledge about what to expect about the world – but also through personal experience: what we learn about the world.”

Observable phenomena include the ‘binding problem’. This is where what we ‘see’ is a composite of elements glued together by a set of assumptions, not the visual data. An example is known as the Thatcher Illusion (because it is demonstrated using a photograph of Margaret Thatcher). Take two identical pictures of a smiling face, cut out and invert the eyes and mouth on one image, and then turn both images upside down: the viewer will still perceive both faces to be visually ‘correct’, even though the doctored face looks grotesque when viewed the right way up.

We also assume light comes from above, and so a picture of a footprint in dust can appear to be raised if the image is upside down. Blakemore concludes that ‘vision is cognitively informed’. Another example is the way that photographs that have been doctored to narrow the depth of field look phoney to observers. This can ‘only depend on our experience of the world and, in particular, photographs,’ he says, and it is an example of ‘learnt experience impinging directly on what we see’.

In this demonstration, all the dots keep changing colour randomly, except the centre one, and these changes are perceptible. But when the entire set of dots rotates about the centre while still changing colour, we do not see the changes.

The miraculous transformation of so little into so much

While planning the lecture I asked Colin Blakemore for an explanation of his choice of title, writes Peter Phillipson. This was his explanation: “We experience the world subjectively like a detailed, seamless, real-time video stream. In reality, visual experience is discretely and sparsely informed by data from our eyes. Shifts of gaze, occurring about three times each second, deliver data-dumps to the brain. During each snapshot, the brain gathers, encodes and stores only a tiny amount of information, probably corresponding to the content of visual attention. The task of scientists and philosophers is to account for the miraculous transformation that converts so little into so much. And if conscious awareness is largely invented, why do we need to be conscious of anything?”

He went on to show that we tend not to notice changes in the colour of an object in our general field of vision while the object is moving. It is as if there is a hierarchy in what we perceive, with movement being more important than changes of colour.

The eye only focuses on a tiny portion of our general field of vision. It is our brain that creates the joined-up constancy that we are used to and, in the process of filling in the gaps, it can be fooled. It can be influenced by experience and can ignore secondary objects that are not the main focus of our gaze. For example, it is possible to focus on an array of coloured shapes such as squares, circles and triangles. If we notice just one moving, we often fail to notice the other shapes altering their colour or shape, because our attention is on the moving one. This is part of how the brain filters out the enormous quantity of data it receives from our senses. It has to, or we would not be able to function; instead it keeps an overview of what seems important to notice. This, in turn, can be influenced by what interests us individually, and it can lead to a range of interpretations from a group who experienced the same event.

There is a tendency, too, to see what is familiar. It is possible to make faces out of clouds or ‘see’ patterns, such as footprints, in the Moon’s surface. These are fashioned only from the shadows that are actually present and our imagination.

The conclusions to glean from this very brief résumé of his lecture are that there is essentially no visual past, and that most of what we see is cognitively informed and probably dependent on personal experience.

Professor Blakemore’s lecture included embedded videos, which illustrated ways that eyesight depends on dynamic input from the world. It would be difficult to describe them here without spoiling them. However, a full-length recording of the lecture has been made, initially only for paying members of the SLL, but I believe I might be granted permission for it to be streamed in a manner that would be open to others, including the STLD. If successful, I will post a link on the STLD website and in a future edition of this magazine.
