Computers That Learn Human Languages Also Inherit Their Implicit Biases

We all have prejudices, whether or not we admit it. Psychologists tend to think of them as fundamentally human flaws—blinders resulting from thousands of years of culture and evolution.

But new research suggests that when a computer learns human language through a technique called machine learning, it will inevitably learn those implicit biases, too.

Computer scientists at Princeton University found that when they trained an English language-learning algorithm on text, the word associations it learned betrayed bias. For example, female attribute words were less strongly associated with scientific words like “technology,” “physics,” and “NASA,” and African American names were more likely than European American names to be associated with unpleasant words like “abuse,” “murder,” and “sickness.”
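
To make that idea concrete, here is a minimal sketch of how such associations can be measured in word embeddings, the kind of representation the researchers studied. The tiny hand-made vectors below are hypothetical stand-ins, not real embeddings trained on web text.

```python
# A toy illustration of measuring word associations with cosine similarity.
# NOTE: these 4-dimensional vectors are hypothetical stand-ins; real word
# embeddings (e.g., GloVe trained on web text) have hundreds of dimensions.
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two word vectors; higher values mean
    the words appear in more similar contexts."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

embeddings = {
    "physics":    np.array([0.9, 0.1, 0.3, 0.0]),
    "technology": np.array([0.8, 0.2, 0.4, 0.1]),
    "female":     np.array([0.1, 0.9, 0.2, 0.3]),
    "male":       np.array([0.7, 0.3, 0.5, 0.2]),
}

# Compare how strongly each gendered attribute word associates with "physics".
for attribute in ("female", "male"):
    score = cosine_similarity(embeddings["physics"], embeddings[attribute])
    print(f"physics ~ {attribute}: {score:.3f}")
```

In real embeddings, the same cosine comparison is run over large sets of target and attribute words, which is how differences in association strength show up as measurable bias.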

Image: Python, a popular language used for machine learning.

Here’s Christopher Groskopf, writing for Quartz:

Machine-learning algorithms can only learn by example. In this particular case, the researchers taught the algorithm using nearly a trillion words of English-language text extracted from the internet. The algorithm was not explicitly seeking out any bias. Rather it simply derived understanding of the words from their proximity to one another. The associations the algorithm learned are, in some sense, the literal structure of the English language, at least as it is used online.

To further drive this point home, the authors compared the strength of associations between the names of different occupations (“doctor”, “teacher”, etc.) and words indicative of women (“female”, “woman”, etc.). Astonishingly, that simple association predicts very accurately the number of women working in each of those professions. Chicken-and-egg argument aside, it’s remarkable how effectively an algorithm which knows nothing about jobs or work reconstructed an important dimension of human social organization.
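
The occupation comparison Groskopf describes can be sketched in a few lines. The snippet below is illustrative only: it uses the gensim library's downloadable pretrained GloVe vectors and hand-picked word lists, not the corpus or word lists from the study itself.

```python
# An illustrative version of the occupation comparison, using the gensim
# library's downloadable pretrained GloVe vectors. The word lists here are
# hand-picked examples, not the lists used in the study.
import gensim.downloader as api
import numpy as np

vectors = api.load("glove-wiki-gigaword-100")  # downloads pretrained embeddings

female_words = ["female", "woman", "she", "her"]
male_words = ["male", "man", "he", "his"]
occupations = ["doctor", "teacher", "nurse", "engineer", "librarian"]

def mean_similarity(word, attribute_words):
    """Average cosine similarity between one word and a set of attribute words."""
    return np.mean([vectors.similarity(word, a) for a in attribute_words])

for job in occupations:
    # Positive scores lean toward the female attribute words, negative toward male.
    score = mean_similarity(job, female_words) - mean_similarity(job, male_words)
    print(f"{job}: {score:+.3f}")
```

Correlating scores like these against employment statistics is, in essence, the comparison Groskopf says the authors made between word associations and the number of women in each profession.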

Since machine learning is driven by example, it makes intuitive sense that computers would learn to interpret our subjective biases as linguistic fact. But as Groskopf notes, it is striking how accurately the algorithm in this study predicted our social order.

He also cites a recent ProPublica investigation that elucidates the effects biased algorithms have on our everyday lives. Algorithms themselves, it seems, aren’t going to solve everything. The people who craft those algorithms, on the other hand, will be held responsible for their effect (positive or negative) on society.

Of course, this study only looked at the English language, which means more work remains to be done. But machine learning can have ethical ramifications beyond language, too. For example, as Kelsey Houston-Edwards reported for NOVA Next, it might soon have a more immediate say in who lives—and who dies.