
How We See

December 25, 2002 at 12:00 AM EDT

TRANSCRIPT

TOM BEARDEN: Like most human beings, these students changing classes at Duke University use their eyes to move through the physical world: to stay on the path and avoid bumping into trees or other students. Seeing is something we take for granted because it seems so instinctively obvious. We know a tree is a tree and a building is a building, that they are two different objects separated in space, and that one is closer than the other.

It may come as a surprise to learn that we don’t know any of those things based on the direct evidence of our eyes. That’s because the eye doesn’t think. It simply gathers light from a scene, which passes through the lens and forms an upside-down image on the retina at the back of the eyeball. The photons in that beam of light contain no information about how far they traveled or where they came from, and the image on the retina is two-dimensional. In other words, it’s ambiguous. It doesn’t carry a label to tell you what it is. For example, this computer-generated image looks as if it’s made up of different colored blocks, pyramids, and a transparent plane of glass. But this shape, and this one, when seen by themselves, are actually identical.

Something in the brain decides that this shape is part of the side of this yellow block, while the identical shape nearby is the narrow space between a plank and a pyramid. Even after a century of research, scientists still don’t understand how the brain does all that nearly instantly and completely automatically. Finding out exactly how human visual perception works could lead to better treatments for brain disorders or brain damage, and possibly to vastly more useful robots. But the first challenge is to understand the brain. Peter Tse is an assistant professor at Dartmouth College in New Hampshire.

PETER TSE: There are some deep unanswered questions about the brain. The brain is the last great frontier of scientific research, I would say.

TOM BEARDEN: In the past, neurobiologists have sought understanding in the individual components of the brain and nervous system; they’ve dissected the eye, isolated nerve cells one by one, attached sensors to measure electrical currents. More recently, they’ve used magnetic resonance imaging, which allows them to see which parts of the brain are activated when the eye is stimulated by different kinds of images. But some researchers think it’s time to take a different approach. Dale Purves, a professor at Duke University Medical Center, believes that studying the brain’s components cell by cell, the so-called “bottom-up” method, is very useful but too slow.

DALE PURVES: Building up a sense of what the basic strategy in vision is by looking at individual cells and their properties just wasn’t going to get very far very soon. That’s an opinion that’s not held, I think, by very many people.

TOM BEARDEN: Traditionally, neurobiologists have looked at the problem from the bottom up. Purves has adopted a method psychologists first used 50 years ago. He’s experimenting with how the brain actually processes the ambiguous information it gets from the eye.

DALE PURVES: In these recent years, we’ve taken the opposite approach: that it’s interesting and important, and ultimately, I think, very useful to start with the end product and work back, and try to figure out what the strategy in vision is by that means.

TOM BEARDEN: One way Purves and his colleagues decided to look at the end result of the visual system was to explore how the brain can be fooled. Optical illusions lend support to the idea that the brain evolved to perceive things not necessarily as the retina sees them, but as experience has shown they usually are in the real world.

PETER TSE: When you’re on the highway and you see a police car behind you, say, you see a light flashing here, here, here. It looks like a light is jumping back and forth, but you know cognitively that there are just separate light bulbs flashing, that nothing is actually jumping back and forth between those two positions, and nonetheless, you see a motion. That’s an apparent motion. Why would your brain create this illusion? It is, in a sense, a mistake, because there is no motion in the world. The reason it creates this motion is that it is taking a shortcut. It operates under the assumption that when there’s a light here, and then a light nearby shortly thereafter, it’s a single object moving. Now, in the case of the police car, it leads to a false perception, which is to say, an illusion, the illusion of an apparent motion. But in the vast majority of circumstances, this is a shortcut that allows the brain to see the motion that’s actually out there in the world.
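
That shortcut can be caricatured in code. The sketch below is not a model anyone in this segment describes; it simply assumes two “frames” of light positions and pairs each new flash with the nearest earlier one, which is enough to report motion where none exists. All names and numbers are illustrative only.

```python
# Toy illustration of the apparent-motion shortcut Tse describes: when a light
# appears at one spot and another appears nearby shortly afterward, assume a
# single object moved.

def infer_motion(frame1, frame2, max_jump=5.0):
    """Pair each light in frame2 with the nearest light in frame1 and
    return the inferred motion vectors."""
    motions = []
    for x2, y2 in frame2:
        nearest = min(frame1, key=lambda p: (p[0] - x2) ** 2 + (p[1] - y2) ** 2)
        dx, dy = x2 - nearest[0], y2 - nearest[1]
        if (dx * dx + dy * dy) ** 0.5 <= max_jump:   # "nearby" enough to be the same object
            motions.append((nearest, (x2, y2), (dx, dy)))
    return motions

# Two bulbs on a police light bar, flashing in alternation: nothing moves,
# but the nearest-neighbor assumption reports one light jumping to the right.
left_bulb_on = [(0.0, 0.0)]
right_bulb_on = [(3.0, 0.0)]
print(infer_motion(left_bulb_on, right_bulb_on))
# [((0.0, 0.0), (3.0, 0.0), (3.0, 0.0))]
```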

TOM BEARDEN: Tse says the brain developed these shortcuts because perception has to be nearly instantaneous. Both Tse and Purves believe the reason the brain developed this way can be explained by natural selection, evolution over millions of years of trial and error.

DALE PURVES: You see something, you respond to it, and you’re either right or wrong. You reach up and grab something that you think is there at a certain distance and you’re either successful or not. Through that experience, the visual brain has, over the millions of years of our evolution, built into itself this frequency distribution, which is really just how often a stimulus has turned out to be one of the infinite number of possibilities that it could have been.
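
Purves’ “frequency distribution” can be pictured as a simple tally. The sketch below is only an illustration of that reading, not the lab’s actual model: an ambiguous pattern is interpreted as whatever it most often turned out to be in past experience. The categories and counts are invented for the example.

```python
from collections import Counter

# Illustrative tally of "the statistics of past experience": record what an
# ambiguous pattern actually turned out to be, then perceive new instances of
# that pattern as its most frequent past outcome. Invented data, example only.

experience = Counter()

def record_outcome(pattern, true_source):
    """After acting on a pattern, note what it really was."""
    experience[(pattern, true_source)] += 1

def perceive(pattern):
    """Interpret an ambiguous pattern as its most frequent past source."""
    candidates = {src: n for (p, src), n in experience.items() if p == pattern}
    return max(candidates, key=candidates.get) if candidates else None

# Hypothetical history: a dark patch on the ground was usually a shadow.
for _ in range(95):
    record_outcome("dark patch", "shadow")
for _ in range(5):
    record_outcome("dark patch", "hole")

print(perceive("dark patch"))   # -> 'shadow'
```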

TOM BEARDEN: Tse agrees.

PETER TSE: The brain is correct most of the time, because those animals that didn’t get it correct were eaten, or fell out of the tree and died. So our ancestors are the winners in a sense. They’re the ones who were able to have children, and they saw the world more correctly than the ones who did die.

TOM BEARDEN: In other words, any animal that didn’t correctly interpret what was really there in the environment might not survive, like perceiving an image as a shadow rather than a hole in the ground, and falling to its death as a result of the error. Purves and his colleague, Beau Lotto of University College London, say there’s a lot of visual evidence to support the theory. This illusion looks like a pair of tiles meeting at an edge. Because of the shading at the edge, they seem to be lit from above, and consistent with that, the top tile looks darker than the one on the bottom. Cover up the edge where they seem to meet, and you can see the top and bottom are exactly the same shade of gray. Take the mask away and the top looks darker again, the bottom brighter, because that’s what the brain learned to see over the millennia.

DALE PURVES: It’s an example of how the context dramatically changes what you actually see, and how the sense that we have that we’re seeing the real world is in fact a mental construct that’s predicated on the statistics of past experience: This one being more strongly illuminated than this one. The brain automatically generates the percept of these surfaces as being very different.
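
The tile illusion lends itself to a simple check that the two surfaces really are identical away from the edge. The sketch below, which assumes NumPy and only roughly imitates the kind of edge-graded stimulus described, builds two tiles of the same gray separated by a short shading gradient; it is not the actual image Purves and Lotto used, and the values are invented.

```python
import numpy as np

# Rough imitation of an edge-graded (Cornsweet-style) stimulus: two tiles of
# the same gray, with a brief darkening above the junction and a brief
# brightening below it.

height, width = 200, 200
gray = 0.5
image = np.full((height, width), gray)

edge = height // 2
top_ramp = np.linspace(0.0, 0.15, 10)   # darkens approaching the junction from above
bot_ramp = np.linspace(0.15, 0.0, 10)   # brightening fades away below the junction
image[edge - 10:edge, :] = gray - top_ramp[:, None]
image[edge:edge + 10, :] = gray + bot_ramp[:, None]

# Away from the edge, the tiles are pixel-for-pixel identical...
print(image[20, 100], image[180, 100])   # 0.5 0.5
# ...yet displayed with the edge visible, the upper tile looks darker. Masking
# the strip around the junction (image[edge - 10:edge + 10] = gray) removes the effect.
```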

TOM BEARDEN: The ambiguity of vision is hard to notice until you start to look for it. Then suddenly it turns up everywhere. In Purves’ own office, there’s no way to tell by retinal image alone whether these picture frames are rectangles or trapezoids; whether the eyeholes in the skull are three-dimensional holes or just a pattern of light and shadow; whether the curved leaves on the potted plant are concave or convex; or whether the different shadings on the top and sides of this block are caused by light from above or just different shades of paint. Purves and his team are further testing the theory that visual perception is a learned phenomenon by seeing if a computer can learn the same behavior. But while Purves pursues a top-down approach, investigating the outcome of visual perception, Tse still sees great value in looking at the problem from the bottom up at the same time. He’s experimenting with magnetic resonance imaging. He shows test subjects different projections, and the MRI simultaneously shows how their brains react.

PETER TSE: So what you see here is the test case. You see the bars shooting back and forth. That is, in a sense, an illusion, because the bar in between the squares is, in fact, coming on all at once. Nonetheless, you see it shooting back and forth. I’m trying to understand why your brain constructs this percept based on these stimuli. Next, you’ll see a control case, where the stimuli come on. It’s exactly the same as the test case, except there’s a minor difference, namely that the central bar is now not touching the two squares, and you get much less of a sense of a shooting motion or a smooth animation.

TOM BEARDEN: Tse points out that different parts of the brain react; the sections outlined in orange and blue, he knows, are dedicated to processing motion and shapes. The hypothesis is that these different circuits must be talking to one another to perceive a difference between the stimuli.

PETER TSE: So the theory we were testing was that the illusion we saw was due to an interaction between areas that process visual form and areas that process visual motion.

TOM BEARDEN: Tse says there is potentially enormous value in knowing precisely why humans see the way they do.

PETER TSE: It really comes down to trying to understand who we are, why we’re here and the nature of nature — the nature of reality. This is the fundamental driving question of science, and this is the beauty I think of science. It’s a fundamental human capacity to wonder, to wonder in both senses, to wonder why and to wonder in the sense of feeling awe. And I think that science at its best is driven by these motivations. Then there are also practical consequences, knowing how the brain works. We’ll be better able to help patients with brain damage, brain lesions or disorders, whether depression or bipolar disorder, or what have you. And this combination of the desire to know for its own sake, and the practical application for medical uses or what have you, is all good.

TOM BEARDEN: Understanding human vision may also eventually lead to effective artificial vision systems for computers and robots. At the Massachusetts Institute of Technology, they’re trying to program robots to follow movement. The hope is that machines will eventually be able to recognize specific objects, and to interact with the real world in much more useful ways than possible today. But both Purves and Tse agree that absent any breakthroughs, a thorough understanding of human visual perception lies many years in the future.