Pollsters have been popular piñatas this week for “missing” the Trump movement.
But many of these critiques have a key flaw: They fail to acknowledge the margins of error built into political polls across the United States. When a poll carries a margin of error of three percentage points in a two-person race, a candidate needs a six-point lead over his or her rival to be a clear frontrunner (the math changes somewhat in races with more than two candidates).
A quick look at the national tracking poll from Real Clear Politics shows that gains have seldom been wide enough to overcome the margin of error since the conventions. Over the last week, margins from Real Clear Politics showed exactly what happened: a tightening of the race to a virtual tie.
But many journalists and media observers are mistakenly over-simplifying polling results. Often, it’s not the poll that’s unreliable, but the person interpreting it. Rather than reporting results as estimates that have an error rate, journalists represent the data as scientific measurements with the accuracy of a thermometer that can take the public’s temperature on any issue.
“I saw some polls that had samples as small as 400 respondents,” said Susan MacManus, a political analyst and professor at the University of South Florida. Small sample sizes give rise to larger margins of error, and results get less reliable when analysts drill down to specific issues and races. When reporting these results, journalists need to ask critical questions about the sample size.
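The link between sample size and margin of error follows from the standard formula for a simple random sample. As a rough illustration (the comparison of a 400-person sample with a 1,000-person one is chosen here purely for contrast):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample.

    p = 0.5 is the worst case, giving the largest margin.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 1000):
    moe = margin_of_error(n)
    # The gap between two candidates must exceed roughly twice the
    # per-candidate margin before a lead is statistically meaningful.
    print(f"n={n}: MOE = ±{moe:.1%}, lead needed ≈ {2 * moe:.1%}")
```

A 400-person sample yields a margin of roughly ±5 points, so a lead under about 10 points is statistical noise; it takes roughly 1,000 respondents to get down to the ±3 points (and six-point lead) cited above.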
Some pundits on Wednesday speculated that polls “missed” a surge of Trump support that could have stemmed from the FBI’s renewed interest in Clinton’s emails. But exit polls showed that 75 percent of voters made up their minds more than a month ago. That means the barrage of attack ads on TV and the flurry of fly-around celebrity visits to swing states in the final days may be losing their effect as well.
Another factor in this election was “social desirability bias”: some respondents won’t admit to a choice they believe others disapprove of. Polls may also have under-sampled non-college-educated men who supported Trump. Failing to account for these effects can lead to overconfidence in polling results.
“We knew all the time there was a silent Trump vote,” MacManus said. “Some of my students who worked on the ground in the campaigns said it could have been as high as five percent of the people who voted for Trump but would not tell others because they didn’t want to be demonized.”
So, what’s the alternative? A better use of polling data might break from the “horse-race” and focus on issues voters say they care about. If journalists had done that, they might have discovered that Trump’s “change” message resonated much louder than Clinton’s central theme of “temperament.”
Exit poll data can shape reporting in the months ahead. Now would be a good time to unearth specific problems Trump supporters have with Obamacare. More than 80 percent of Trump voters think Obamacare “went too far,” according to an analysis from ABC News. Meanwhile, half of Clinton voters think it didn’t go far enough.
Demographic margins within the polling data might also hint at fertile ground for reporting. Exit polling from Tuesday points to the largest gap between men and women voters in more than 60 years. How will that affect the priorities of lawmakers?
That’s not to say polling couldn’t be improved. The practice of “weighting,” which gives some respondents greater statistical importance in a poll, is particularly frustrating, MacManus said. If “weighted” respondents are atypical of the group they stand in for, they can throw polls way off.
Pollsters weight results when they have trouble getting a large enough response from certain groups, but doing so carries risk: the National Council on Public Polls warns that weighting polls to reflect party affiliation is especially perilous, because Americans change parties from year to year.
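The mechanics of weighting are simple arithmetic: each under-represented group’s answers are scaled up until the sample matches a population benchmark. A minimal sketch, using entirely hypothetical sample shares, benchmarks, and support numbers:

```python
# Hypothetical shares: who answered the poll vs. a census-style benchmark.
sample_share = {"college": 0.60, "non_college": 0.40}
population_share = {"college": 0.35, "non_college": 0.65}

# Weight for each group = population share / sample share.
# Here every non-college respondent counts as 1.625 people, so a few
# atypical respondents in that group can throw the result way off.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Hypothetical candidate support within each group.
support = {"college": 0.45, "non_college": 0.58}

unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")
```

With these made-up numbers, weighting moves the topline from 50.2 percent to 53.5 percent, which shows both why pollsters do it and how much rides on the weighted respondents being representative of their group.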
In the years to come, pollsters might also refine their questions to more accurately identify likely voters, who have shown they may jump parties or stay home next time. In the weeks ahead, pollsters will be poring over data to figure out who showed up at the polls and who stayed home. Exit polls will help them understand why turnout was low, and that information will shape whom they survey in 2020.
And pollsters will have to find a way to reliably reach the public online, not just on the phone. Early on, pollsters noticed that Trump drew more support in automated polls than in polls where respondents had to answer questions from live interviewers.
“People are concerned about their privacy, and they are not going to tell anybody how they are going to vote, especially when the election is as controversial as this one was,” MacManus said.