Intelligence tests have had many uses throughout their history--as tools to sort both schoolchildren and army recruits, and, most frighteningly, as methods to determine who is fit to pass on their genes. But as intelligence testing has made its mark on the educational and cultural landscape of the Western world, the concept of intelligence itself has remained murky. The idea that an exam can capture such an abstract characteristic has been questioned, but never rigorously tested--perhaps because conceiving of a method to measure the validity of a test that evaluates a trait no one fully understands is impossible. IQ tests have proved versatile, but are they legitimate? To what extent do intelligence tests actually measure intelligence?

Reporter and philosopher Walter Lippmann, who published a series of essays in the 1920s criticizing the Stanford-Binet test, wrote that it tests "an unanalyzed mixture of native capacity, acquired habits and stored-up knowledge, and no tester knows at any moment which factor he is testing. He is testing the complex result of a long and unknown history, and the assumption that his questions and his puzzles can in 50 minutes isolate abstract intelligence is, therefore, vanity."

Lippmann leveled these criticisms over 80 years ago, but even then he recognized that people needed to approach test results with caution--advice made more salient today by a number of studies that validate his and other skeptics' concerns.

As it turns out, a number of variables, none of which have to do with brainpower, can influence test scores.

For example, many researchers have discovered that people from minority racial groups often perform worse on intelligence tests than their white counterparts, despite a lack of evidence that they are actually less intelligent.

Another study has shown that intelligence is not fixed throughout one's lifetime: Teens' IQs changed by as much as 20 points over four years, raising questions about some modern IQ test uses. Many gifted programs, for example, use IQ tests to select student participants. But what does it mean if, over their school careers, students who fell below the cut-off grew more "gifted" than those in the program? Should they have been denied the enrichment opportunity, even though they later were revealed to be just as intellectually capable as the students who were allowed to enroll?

External cues that influence one's self-perception--such as reporting one's race or gender--also influence how one performs on intelligence tests. In a blog post for Scientific American, writer Maria Konnikova explains, "Asian women perform better on math tests when their Asian identity is made salient--and worse when their female identity is. White men perform worse on athletic tasks when they think performance is based on natural ability--and black men, when they are told it is based on athletic intelligence. In other words, how we think others see us influences how we subsequently perform."

If one's performance on IQ tests is subject to so many variables outside of natural ability, then how can such tests measure intelligence accurately? And does one's level of innate intelligence even matter? Is it correlated with success?

In one study, Robert Sternberg, a psychologist at Tufts University, found that in the Western world high scores on intelligence tests correlated with later career success--but the people he tested lived in a culture that places enormous emphasis on achievement on such tests.

Imagine a student with great SAT scores who later goes on to excel in her career. One could say that the student was very smart, and her intelligence led her both to succeed on the test and in her career. But one could also say the student was particularly skilled at test-taking, and since she lived in a society that valued high test scores, her test-taking ability opened up the door to a great college education. That in turn gifted her with the skills and connections she needed to succeed in her chosen field.

Both of these scenarios are over-simplifications. Intelligence--however murky it may be and however many forms it may come in--is undoubtedly a real trait. And intelligence tests have persisted because they do provide a general way to compare people's aptitude, especially in academic settings. But from their invention, intelligence tests have been plagued by misinterpretation. They have been haunted by the false notion that the number they produce represents the pure and absolute capacity of someone's mind, when in reality studies have shown that many other factors are at play.

Donna Ford, a professor of Education and Human Development at Vanderbilt, writes, "Selecting, interpreting and using tests are complicated endeavors. When one adds student differences, including cultural diversity, to the situation, the complexity increases... Tests in and of themselves are harmless; they become harmful when misunderstood and misused. Historically, diverse students have been harmed educationally by test misuse."

If current intelligence tests are subject to factors outside of intelligence, can a new assessment be developed that produces a "pure" measure of innate intelligence? Scientists are starting to examine biology, rather than behavior, to gain a new perspective on the mind's ability. A team of researchers recently set out to understand the genetic roots of intelligence. Their study revealed that hundreds or thousands of genes may be involved. "It is the first to show biologically and unequivocally that human intelligence is highly polygenic and that purely genetic (SNP) information can be used to predict intelligence," they wrote.

But even if researchers discover which genes are related to intelligence, a future in which IQ tests are as simple as cheek swabs seems unlikely. Most contemporary intelligence researchers estimate that genetics accounts for only around 50 percent of one's cognitive ability; the environment in which one learns and grows determines the other half. One study found that the IQ scores of biological siblings raised together were 47 percent correlated; for siblings raised apart, that figure dipped to 24 percent.

And though world knowledge and problem-solving skills are commonly tested, the scientific world has yet to reach a consensus on a precise definition of intelligence. Perhaps such a definition is impossible.

So perhaps Galileo would have stolen Mozart's prestigious preschool seat. But that preschool admissions officer would likely smack her forehead a year or two later upon hearing of the prodigious star wowing audiences with his musical compositions.

Discarding intelligence tests altogether might be too harsh a reaction to their flaws. After all, they are useful when interpreted correctly in the right settings. Instead, we need to understand the tests' limitations and their potential for misuse to avoid determining a person's worth--or future trajectory--with a single number.
