The failure of almost all the public opinion polls to correctly predict the winner of the 2016 presidential election is disturbing and perplexing. But it provides an opportunity to look at an alternative method of polling that has worked in the past, and that I took part in as a graduate student at Columbia University half a century ago.
This time around, the Monday-morning quarterbacking over the nearly universal pre-election predictions that Hillary Clinton would win the presidency by anywhere from 3 to 6 points has pointed the finger at the usual suspects: people relying on cell phones, their refusal to talk to pollsters, the difficulty of making contact with young people or people without phones, the inability to predict who is a likely voter and who is not.
Pollsters – and there are now many of them – have been talking about these difficulties for several years, and they are real. Some argue they missed this election by just three points – an acceptable margin of error. But they missed it, and almost all in the same direction. They didn’t pick up what was a very angry, very fed-up, mostly white electorate. They had a hard time quantifying the willingness of some voters, including women, to overlook outrageous statements or behavior and vote for Trump anyway.
In the primaries, the polls completely missed the Donald Trump phenomenon; nobody took him seriously or gave him a chance to beat the big Republican field. In a speech a few months ago, Stanford University political scientist Bruce Cain said that it was baffling and disturbing how ignorant the polls were of Trump’s appeal. Somehow it slipped by them, didn’t register in their samples. Even if the general public and the media didn’t take Trump seriously, you might have thought some opinion polls would have picked up his growing popularity or potential.
Perhaps the real problem is that polling these days is done in a pseudo-scientific way, by modeling the electorate and trying to get a sample that matches the voters. Then the people working for the pollsters make their phone calls and read a script and a series of questions that can be tallied mathematically and then projected by technological whiz-kids into a picture of how the electorate will react. Do any of those phone jockeys or those highly touted polling experts ever get out in the country and meet the voters?
I did. And so did my boss, the esteemed Samuel Lubell, a writer and public opinion analyst who analyzed Harry Truman’s surprising defeat of Thomas Dewey in 1948 for the Saturday Evening Post. Lubell, who taught public opinion reporting at Columbia, pioneered a technique that started with a close look at election results in individual neighborhoods, along with the demographics of each area. Then – with that solid statistical base – he would select towns and streets for his on-the-ground research. He would go to those areas and talk to the voters in person, and he had me do the same thing.
I would take weekends off from my studies and trek through the streets of carefully chosen towns on Long Island or in Queens or New Jersey, ringing doorbells at random, or striking up conversations with people out watering the lawn or washing their cars. I wouldn’t take notes; that, Lubell said, interfered with the conversation. We would just talk, with me asking a few key questions, including how they had voted in the last couple of presidential elections. The conversation might go on for half an hour.
Then, I’d go down the block and lean against a telephone pole or sit on the curb, and write my notes: How old was the person? How did he vote in the past few elections? (We had a simple code for that.) What troubled him? What did he think about the issues of the day? What religion was he or she? (Did it bother him or her that John Kennedy was a Catholic?) Anything I could remember from the conversation, including quotes that would illustrate his opinions. On a good day, I might talk to as few as eight voters. But they were serious, in-depth conversations.
Then I’d give my notes to Lubell, who would use them not just to form a profile of the neighborhood or the precinct and fit it into a statewide or national pattern, but to take the pulse of the public. He would use the quotes we gathered to illustrate the conclusions and observations in his syndicated column about the coming election.
In his highly praised book “The Future of American Politics” (Harper & Brothers, 1952), he wrote, “I have used election returns as tracer material, akin to radioactive isotopes, through which the major voting streams and trends in the country could be isolated and followed…from election to election.” He gobbled up census data and economic, religious, cultural and political characteristics, and then he did what few pollsters do today: “I spent many months traveling through the country, visiting strategic voting areas and talking firsthand to voters in every walk of life.”
His election predictions were invariably accurate, and his analyses of elections were insightful and on target.
When Lubell died in 1987, The New York Times quoted Richard Scammon, director of the Election Research Center: “He was a political pollster in a personal way in that he stopped pontificating and went door to door. He had a real feel for people and made a great contribution.”
I have a feeling that Sam Lubell would not have missed the Trump ascendancy in the primary season, and that he would have discerned the groundswell for him as he battled Clinton. What went on in this election was subtle and perhaps hidden; were people willing to entrust their hopes and fears to a telephone pollster they couldn’t see? How can a questionnaire recited from a phone bank penetrate the mind of a troubled potential voter?
Maybe today’s head-scratching pollsters should look at the work of Sam Lubell, get out of their offices and hit the neighborhoods.