Why Blind Spots Are Integral to Sight

While observing a star one night, Aristotle noticed that shifting his gaze just to the right or left of it made the star appear brighter; staring directly at it caused it to dim.

It was one of the first recorded demonstrations of averted vision, a technique astronomers use to make faint celestial objects easier to see. Our retinas contain two kinds of photoreceptors: “cones,” which detect color and work best in daylight, and “rods,” which are more sensitive to light and so handle dark conditions. Cones and rods are interspersed throughout the retina, but cones are heavily concentrated in the center, which is why we experience a “blind spot” when looking directly at distant orbs in the night sky. To see clearly at night, we need to avert our eyes about two degrees, exposing more rods to the light.

That blind spot is anatomical, and as we’ve discovered, our vision is spotty in other ways, too. In particular, new research shows that the way our brains are wired can render us “blind” depending on the situation.

Virginia Hughes, writing for Only Human, dives into a question scientists and philosophers have been tossing around for…well, centuries. At what point does our visual system integrate information from other senses and faculties, like touch, smell, memory, and language? Some researchers believe that our brains analyze information from non-visual systems slightly after basic seeing takes place. A study published yesterday in the Proceedings of the National Academy of Sciences by Gary Lupyan and Emily Ward advocates a different theory: that visual and non-visual information processing happen at the same time.

Lupyan’s study is notable for the clever way it tapped into our ‘lower level’ visual processing. The researchers showed participants different images in their right and left eyes at the same time. In one eye, they’d see a familiar picture, such as a kangaroo or a pumpkin, and in the other they’d see ugly visual noise: a rapidly changing mess of lines. When these two images are presented at the same time, our minds process only the noisy part and completely ignore the static, familiar image. Previous experiments have shown that this so-called ‘continuous flash suppression’ disrupts the early stages of visual perception, “before it reaches the levels of meaning,” Lupyan says.
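To get a concrete feel for that setup, here is a minimal sketch of how a continuous-flash-suppression stimulus pair might be generated. It is an illustration of the general technique, not the study’s actual code: the frame size, refresh rates, and the placeholder “familiar picture” are all assumptions.

```python
import numpy as np

# Sketch of a continuous-flash-suppression stimulus pair.
# Frame size, display rate, and mask refresh rate are illustrative
# assumptions, not values taken from the study.
HEIGHT, WIDTH = 256, 256
DISPLAY_HZ = 60          # assumed monitor refresh rate
MASK_HZ = 10             # assumed rate at which the noise mask changes
DURATION_S = 2.0

def make_target():
    """Stand-in for the familiar picture (e.g. a kangaroo or pumpkin).
    Here it is just a simple synthetic disk, held constant on every frame."""
    y, x = np.mgrid[0:HEIGHT, 0:WIDTH]
    disk = ((y - HEIGHT / 2) ** 2 + (x - WIDTH / 2) ** 2) < (WIDTH / 4) ** 2
    return disk.astype(np.float32)           # grayscale values in [0, 1]

def make_noise_mask(rng):
    """One 'mess of lines': random high-contrast noise shown to the other eye."""
    return rng.random((HEIGHT, WIDTH)).astype(np.float32)

def build_frames():
    rng = np.random.default_rng(0)
    n_frames = int(DISPLAY_HZ * DURATION_S)
    frames_per_mask = DISPLAY_HZ // MASK_HZ
    target = make_target()
    left_eye, right_eye = [], []
    for f in range(n_frames):
        if f % frames_per_mask == 0:          # true at f == 0, then every refresh
            mask = make_noise_mask(rng)       # flash a fresh noise pattern
        left_eye.append(target)               # one eye: the unchanging familiar image
        right_eye.append(mask)                # other eye: the flashing noise
    return np.stack(left_eye), np.stack(right_eye)

left, right = build_frames()
print(left.shape, right.shape)                # (120, 256, 256) each
```

In the real experiment the two streams are presented dichoptically, one to each eye, so the flashing noise dominates awareness while the unchanging picture stays outside it.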

In Lupyan’s study, participants sometimes heard the name of the static object — like the word ‘kangaroo’ or ‘pumpkin’ — played into their ears. And on these trials, the previously invisible object would pop into their conscious visual perception. If they heard a different word, though, they would not see the hidden object. “So it’s not that they are hallucinating or imagining a dog being there,” Lupyan says. “If they hear the label, they become more sensitive to inputs that match that label.”

In other words, people saw the “hidden” object only when they heard its name. The words they heard influenced what they saw.

But this research could be complicated by another recent finding, this one about vision’s relationship to time. Here’s Rebecca Schwarzlose, writing for Garden of the Mind:

In recent years, a slew of experiments have supported the idea that certain aspects of vision happen in discrete packets of time – and that these packets are roughly one-tenth of a second long. The brain rhythms that correspond to this timing – called alpha waves – have acted as the missing link. Brain rhythms essentially tamp down activity in a brain area at a regular interval, like a librarian who keeps shushing a crowd of noisy kids. Cells in a given part of the brain momentarily fall silent but, as kids will do, they start right up again once the shushing is done.

Unless a tenth of a second really matters to you, this study might not seem all that consequential. But that’s where yet another study comes in, this one by Frédéric Gosselin and his colleagues from the Université de Montréal. They tested the discrete-vision idea with a face-recognition task. Note the similarities between Gosselin’s and Lupyan’s approaches, as described by Schwarzlose:

They made the faces hard to see by bathing them in different amounts of visual ‘noise’ (like the static on a misbehaving television). Subjects had to identify each face as one of six that they had learned in advance. But while they were trying to identify each face, the amount of static on the face kept changing. In fact, Gosselin and colleagues were cycling the amount of static to see how its rate and phase (timing relative to the appearance of each new face) affected their subjects’ performance. They figured that if visual processing is discrete and varies with time, then subjects should perform best when their moments of best vision coincided with the moments of least static obscuring the face.
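To see why that prediction makes sense, here is a toy simulation, my own sketch rather than the researchers’ model. It assumes visual sensitivity rises and falls sinusoidally at 10 Hz (the alpha rate) and asks how much un-obscured face coincides with the moments of high sensitivity as the static is cycled at different rates and phases.

```python
import numpy as np

# Toy model: visual sensitivity is assumed to oscillate at an alpha-like 10 Hz,
# while the amount of static on the face is cycled at some rate and phase.
# "Signal through" = how much un-obscured face lines up with high sensitivity.
ALPHA_HZ = 10.0
T = np.linspace(0.0, 1.0, 1000, endpoint=False)   # one second, sampled at 1 kHz

def signal_through(noise_hz, phase):
    sensitivity = 0.5 * (1 + np.sin(2 * np.pi * ALPHA_HZ * T))          # 0..1
    noise_level = 0.5 * (1 + np.sin(2 * np.pi * noise_hz * T + phase))  # 0..1
    visible_face = 1.0 - noise_level             # least static = most visible face
    return np.mean(sensitivity * visible_face)

for noise_hz in (5.0, 10.0, 15.0):
    phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    scores = [signal_through(noise_hz, p) for p in phases]
    print(f"{noise_hz:5.1f} Hz static: best={max(scores):.3f} worst={min(scores):.3f}")
```

In this toy model, the phase of the static only matters when it cycles at the same 10 Hz rate as the assumed sensitivity rhythm; at other rates the alignment averages out over a trial. That is the intuition behind looking for rate- and phase-dependent performance.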

They found that participants performed better when the static cycled at 10 or 15 times per second, evidence that alpha waves do have some bearing on ordinary vision. The interplay between these two factors, alpha waves and verbal cues, makes the entire operation seem unnecessarily complex. But there might be a reason for it, Schwarzlose concludes:

Paradoxically enough, discrete visual processing and alpha waves may actually give your visual perception its smooth, cohesive feel. In the last post I mentioned how you move your eyes about 2 or 3 times per second. Your visual system must somehow stitch together the information from these separate glimpses that are offset from each other both in time and space. Alpha waves allow visual information to echo in the brain. They may stabilize visual representations over time, allowing them to linger long enough for the brain, that master seamstress, to do her work.

Given the intricacies of sight, exact computer replication of human vision may be impossible. But that’s not stopping us from trying; watch the video below to learn more.