How to read the polls in 2020 and avoid the mistakes of 2016

On Oct. 28, 2016, Berwood Yost had just finished interviews for his last poll before the presidential election. The director of the Center for Opinion Research at Franklin and Marshall College in Lancaster, Pennsylvania, had asked voters in his state if they preferred Republican candidate Donald Trump or Democrat Hillary Rodham Clinton. Like many polls around the country at that moment, his results suggested Clinton could win the critical swing state. Most national polls were pegging Clinton’s lead over Trump anywhere between 1 and 7 percentage points in the close presidential race.

Hours later, FBI Director James Comey submitted a letter to Congress. With less than two weeks before Election Day, he had reopened an investigation into Clinton’s use of personal email for government business in what many now see as a seismic decision — the effects of which weren’t captured by Yost’s poll.

Trump’s win that November not only upended political journalism, which had widely predicted a Clinton victory, it also shook the faith people once placed in polling, even among pollsters themselves. It spawned an attitude that polls and polling writ large are not to be trusted, said W. Joseph Campbell, professor of communication at American University in Washington, D.C.

One critic described Franklin and Marshall’s polling leading up to the 2016 election as “the most erroneous and wildly inaccurate polling released by a polling institution in the lead up to the 2016 POTUS election.”

Contrary to what people might believe, polls are “not designed to predict elections,” said Courtney Kennedy, who directs survey research at Pew Research Center. And they become problematic when we expect too much out of them or expect them to be “election oracles,” in Campbell’s words.

This election, many voters are urging others not to trust national polling that shows former Vice President Joe Biden favored among likely voters and holding a double-digit lead over Trump, out of concern that some could take the data for granted when deciding whether to cast a ballot.

But polls play an important role in our lives and politics, if they are used and interpreted correctly.

Why people care about polls

We want to hear about polls for the same reason we tune into our local news meteorologist or open our weather app before leaving the house, said Campbell, who recently wrote “Lost in a Gallup,” a book that explored where polling has gone wrong in U.S. presidential elections.

“We want to know what’s going on, or what’s likely to happen,” he said.

Polling “is the best tool our society has for getting the public’s views,” said Kennedy, who served on a committee tasked by the American Association of Public Opinion Research to figure out what went wrong in 2016. And “they’re really good at giving you a high-level read on issues,” she said.

Thanks to polling this year, policymakers and the public better understand how the pandemic has influenced people’s health, jobs, lives and state of mind, as well as how people think the Trump administration is handling the crisis, she said. They go further than a man-on-the-street interview, she added, but they’re not perfect.

Before we can understand what happened in the last election, it’s important to know a few basics.

First, pollsters need to ask themselves whose story they are telling. Registered voters in the U.S.? Female millennial veterans in Texas? Once they define that, they sample people to reflect the community they’re trying to better understand.

One classic polling metaphor is that you don’t need to eat an entire pot of soup to know how it tastes; you only sip a spoonful. Similarly, you don’t have to interview every single person nationwide to get a sense of who they are.

Pollsters sample in two ways — probability-based methods and non-probability-based methods.

Considered “the cornerstone of modern survey research,” using a probability method means anyone in a target population has a chance to be surveyed. Interviewers in polling call centers rely on random digit dialing of telephone numbers to contact potential respondents. Then, based on how many people they sampled, pollsters calculate a margin of error. That number gives you a range for how far the poll’s results are likely to fall from the views of the full population. The smaller the margin of error, the more reliable your results are. But even with this method, things can go wrong.
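For a poll built on a simple random sample, that margin of error can be approximated with a standard textbook formula. Below is a minimal sketch in Python; the sample size, the 50/50 split and the 95 percent confidence level are illustrative assumptions, not figures from any poll discussed here.

```python
import math

def margin_of_error(sample_size: int, proportion: float = 0.5, z: float = 1.96) -> float:
    """Approximate margin of error at roughly 95 percent confidence for a
    simple random sample, using z * sqrt(p * (1 - p) / n). A 50/50 split
    (p = 0.5) is the most conservative assumption about the responses."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A hypothetical poll of 1,000 respondents comes out to roughly
# plus or minus 3.1 percentage points.
print(f"{margin_of_error(1000) * 100:.1f}")
```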

Non-probability methods do not select participants at random and do not come with a margin of error. They are samples of convenience, such as who might take certain online surveys. Or say you stood all day in front of your local grocery store and asked everyone entering to name their favorite fruit. By sunset, you’d have a good sense of how many customers liked oranges on that particular day at that specific grocery store. But you couldn’t say you knew how many people around the city also preferred oranges.

Good polling is expensive. When few people respond to a poll, the small number of responses can skew the results. If pollsters want statistically reliable results for national surveys, they typically need to be willing to spend several thousand dollars just to reach enough people.

Social desirability bias — people’s tendency to tell someone what they think that person wants to hear, even if it’s not true — can also distort the picture. Pollsters have to account for these factors when drawing up a questionnaire, thinking carefully about question phrasing and order to avoid bias in potential responses before interviewing the first person.

At the heart of the confusion over polling and the 2016 outcome were both problems that pollsters could have controlled, and some they couldn’t.

What happened in 2016?

Three things happened that steered polling off course, Kennedy said. While most national poll findings were in sync with what voters ultimately chose, not all polls represented their intended population, she said. That was especially true at the state level. In some cases, they did not weight their responses, which means that they did not adjust the “relative contribution of the respondents” so that they accurately reflected the makeup of the population. Two more things — a late swing for Trump among undecided voters in key states, and unexpected turnout for Trump among rural voters that disrupted likely voter models — also caught them by surprise. There’s no way of knowing exactly how Comey’s news affected public opinion, but it’s a great example of how events can shift public opinion — and a reminder that responses to a single poll question are snapshots in time.
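To make “adjusting the relative contribution of the respondents” concrete, here is a toy sketch of weighting on a single variable, education. The category shares and counts are invented for illustration; real polls weight on several variables at once, and these numbers are not drawn from the 2016 surveys discussed here.

```python
# Toy post-stratification weighting on one variable (education).
sample_counts = {"college": 600, "no_college": 400}        # who answered the poll
population_shares = {"college": 0.40, "no_college": 0.60}  # known makeup of the electorate

total_respondents = sum(sample_counts.values())

# Weight = population share divided by the group's share of the sample.
weights = {
    group: population_shares[group] / (count / total_respondents)
    for group, count in sample_counts.items()
}

# Each no-college respondent now counts 1.5 times; each college respondent
# about 0.67 times, so the weighted sample mirrors the population's 40/60 split.
print(weights)
```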

Three decades of asking people for their opinions have taught pollster Yost that every presidential election cycle brings its own dynamic. In the case of 2016, it was an “insurgent energy” that drove the election, Yost said. He added that if that year taught him anything, it is that “polls are variable. They’re representative of the time they’re taken and events can change how people are thinking, feeling and ultimately how they will behave.”

In the fallout, FiveThirtyEight, a polling analysis blog founded by statistician Nate Silver, took heat for not having a better sense of how the 2016 race would end. In the lead-up to Election Day, Silver and his team compiled weeks of national and state surveys to craft a composite sketch of the polling landscape and what Americans thought of Trump and Clinton — estimates that were ultimately off. He had put Clinton’s odds of winning the election at 72 percent, but over the course of election night, that assessment flipped, with Trump’s chances of winning rising to 84 percent.

“To be clear, if the polls themselves have gotten too much blame, then misinterpretation and misreporting of the polls is a major part of the story,” Silver wrote in 2017.

The 2016 election did a number on journalists and their faith in surveys, said Al Tompkins, senior faculty for broadcast and online journalism at the Poynter Institute, a nonprofit journalism school and research organization in St. Petersburg, Florida. But perhaps that loss of trust in the numbers was not altogether fair, he said.

“I always contended that the polls in 2016 were not nearly as far off as the interpretations of them were,” he said.

The last presidential election was the nudge many newsrooms needed to get away from horse-race reporting, which focuses on who is winning or losing an election rather than what policies candidates say they would put into place if elected — a welcome change for many journalism industry watchers, Tompkins said. With Americans more cautious about what incoming polls mean, he said journalists and the public need to rethink how they read and interpret polling numbers. The data shouldn’t simply reaffirm what one already believes. Instead, when people see poll results, Tompkins urges them to ask, what does that finding mean? What could explain how that number came to be? How does that number compare to prior trends? What else should it be compared to?

He likened analyzing poll results to strolling down the cereal aisle, and asking a bevy of questions to make sure we know what we’re consuming.

“We’ve become much better consumers about breakfast cereal than about polling,” Tompkins said. “We just don’t ask enough questions. We have to become better consumers. It’s not that hard. We just need to ask better questions.”

How to gut-check a poll

As the clock ticks down to Nov. 3 and millions are already casting their ballots early or by mail, there are a few things to keep in mind when interpreting a poll. Seek out the full results, including the exact questions asked. Get the methodology, which is basically the crib notes for how and when the poll was conducted, who asked and answered the questions and how many people responded.

It can be easy to overread a poll, Kennedy said. If a poll says that a candidate is ahead in a race by three percentage points, it can be tempting to call that candidate a winner. But how big is the poll’s margin of error? That can tell you if you need to wait for more results to roll in.

One trick Tompkins recommends for journalists and anyone paying attention to polls, especially now, is to take that margin of error and double it. If the candidate’s lead surpasses that, you can feel a little better saying the candidate is truly ahead among voters.
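That rule of thumb can be written down directly. The sketch below uses hypothetical percentages and a hypothetical margin of error for illustration only.

```python
def lead_looks_real(candidate_a: float, candidate_b: float, margin_of_error: float) -> bool:
    """Tompkins' rule of thumb: treat a lead as meaningful only if it exceeds
    twice the poll's reported margin of error. The margin applies to each
    candidate's number, so the gap between them can shift by roughly double."""
    return abs(candidate_a - candidate_b) > 2 * margin_of_error

# A hypothetical 3-point lead with a 3-point margin of error fails the test.
print(lead_looks_real(48, 45, 3.0))  # False
# A hypothetical 7-point lead with the same margin of error passes.
print(lead_looks_real(50, 43, 3.0))  # True
```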

Political polls should not be mistaken for crystal balls. That doesn’t mean they serve no purpose. With enough snapshots, you can gather a clearer picture and better understanding of what is happening in this country and what matters to people as voters decide who the next leader will be.
