The post-election news cycle is moving on to what to expect in a second Trump administration and how to cover it. But an important journalistic question lingers, one that will take months of study to sort out: How did polls and those who report on them miss the big picture yet again?
Apologists say the polls didn’t do all that badly. They correctly identified the seven consequential battleground states. Allowing for a margin of error, results for those seven states fell within the range most had predicted. That improved on 2020, when polls showed Joe Biden winning but badly overestimated his margin.
One of three savvy political scientists I spoke with said, “It all depends what you mean by close.”
Count me, though, as not buying that the polls’ performance was basically OK.
With some votes still being counted, Donald Trump appears to have won all seven battleground states — and, surprisingly, the popular vote, too. That sure does not sound like the very close race the poll-driven consensus of campaign reporting said would play out. Nor did the count drag on for days; the race was called on election night.
“I don’t think there’s any doubt that the polls and reporting fell short,” Josh Clinton, a Vanderbilt University political scientist who chaired the pollsters’ own self-study group, told me in a phone interview. “They got closer to the truth than in 2020 … but were off by 3% anyway. There’s not a consolation prize in this business.”
To be determined is how much of that failure came about because Trump and his followers evade measurement. But Clinton and other specialists see a different underlying problem. Because it is so difficult, in a fragmented media environment, to assemble a large and representative survey panel, polling firms deploy multiple modeling adjustments to come as close as they can. Even so, only 1 to 2% of the people they contact agree to participate. In other words, to seat a panel of 800, you would need to ask between 40,000 and 80,000 people.
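The outreach arithmetic above can be sketched in a few lines. This is purely back-of-the-envelope math using the response rates quoted in this column, not how any polling firm actually plans its fieldwork:

```python
# Back-of-the-envelope outreach math: with a 1-2% response rate,
# how many people must a pollster contact to seat a panel of 800?
# (Illustrative only; the rates come from the figures quoted above.)

def contacts_needed(target_respondents: int, response_rate: float) -> int:
    """Contact attempts needed to reach a target panel size at a given response rate."""
    return round(target_respondents / response_rate)

low = contacts_needed(800, 0.02)   # best case: 2% respond -> 40,000 contacts
high = contacts_needed(800, 0.01)  # worst case: 1% respond -> 80,000 contacts
print(f"To seat a panel of 800, contact roughly {low:,} to {high:,} people.")
```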
Michael Bailey, a Georgetown University professor who earlier this year published a book on the broken polling system, said those adjustments do produce a much more reliable result than traditional methods would. Realistically, though, with many new dimensions added to the mix, new ways to miss the mark materialize, too.
I asked Bailey if journalistic coverage, despite consistently adding a qualifier for margin of error, fosters a misimpression of exactness. While sympathetic to the difficulty of making the math intelligible to a general audience, he said “yes” — the frequency and nature of poll coverage should get some share of the blame.
In fact, Bailey advocates what he concedes would be a radical step: For election predictions, get rid of the terms “polls” and “margin of error” altogether.
“Data-driven model” would better describe findings based on the new slate of best practices, he said. And he would prefer “prone to some errors” to the conventional “margin of error.” (Margin of error is a concept best applied as a simple matter of math to a large random sample, say 1,000 people, closely matched to the population being studied. That’s not what election polls do anymore.)
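For readers curious what that “matter of math” looks like, here is the textbook formula for a simple random sample. It is a sketch of the classical calculation only; it does not capture the layers of modeling adjustments modern election polls rely on, which is exactly Bailey’s point:

```python
import math

# Textbook 95%-confidence margin of error for a simple random sample:
#   MoE = z * sqrt(p * (1 - p) / n), largest when p = 0.5.
# This classical formula applies to true random samples; it does NOT
# reflect the weighting and modeling adjustments in modern election polls.

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Classical margin of error for a random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1000)
print(f"n=1,000 random sample: about plus or minus {moe * 100:.1f} points")
# A 1,000-person random sample yields roughly a 3.1-point margin.
```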
While a full analysis of correctable 2024 flaws will take months or even years, I was surprised to find that solid examinations of potential trouble had been published before Election Day. The New York Times, Scientific American and The Wall Street Journal all did versions of that story in late October, the Journal’s headlined, “The Pollsters Blew It in 2020. Will They Be Wrong Again in 2024?”
The Times chief political analyst, Nate Cohn, highlighted a downbeat quotation from the study Vanderbilt’s Clinton chaired for the American Association for Public Opinion Research: “Identifying conclusively why polls overstated the Democratic-Republican margin (in 2020) relative to the certified vote appears to be impossible with the available data.”
I see a couple of added potentials for mischief in how polls fit the conventions of campaign coverage. David Karpf, a political scientist at George Washington University, told me that reporters and editors “more and more rely on them as a thing to talk about for their horse race coverage.” Each fresh poll provides a fresh daily news peg.
Similarly, portraying small changes as an important surge — like Kamala Harris’ smooth launch and post-convention glow — makes for better copy than saying the successive poll consensus stayed within roughly the same range. As it did.
Karpf, whose specialty is the internet’s impact on political information, not polling, added that 2028 may bring challenges barely being considered yet. “If voter suppression accelerates significantly … or deportations sweep up qualified voters,” that would be a measurement problem, but a tiny issue compared to the thing itself.
Election night produced some other losers and winners in the forecast game. Among those taking a hit is Allan Lichtman, whose well-covered system for picking a presidential winner relies on his invention of 13 keys, like whether the incumbent is dealing with a scandal or whether the country is at war.
Through 2020, Lichtman had been right in nine of the last 10 presidential contests he forecast. Not this time. With some humility, he told USA Today that he was embarrassed by his big miss (predicting a comfortable Harris win) and would set about seeing whether his system needs to be retooled.
Among the winners was The New York Times’ “Needle,” a real-time tracker of the probable winner in key states, the popular vote and the electoral vote.
As I wrote earlier in the week, with tech workers on strike, there was a question of whether the Needle could be up and running at all. It ran without a glitch. And the Needle delivered, in easy-to-follow graphic form, the story of growing certainty of a Trump victory. By my bedtime of 11:30 p.m., it stood at 90%. Good enough for me. Cautious decision desks made their calls later, some at about 1:30 a.m., others at 5:30 a.m.
The polling establishment has no better option now than to get back to work on a deep dive into 2024 and aim to get the kinks out by 2028. Clinton told me that a successor study group to his started gathering data even before the election.
“I won’t be involved,” he said. “I have wished them good luck.”