A now-notorious study on same-sex marriage underscores a frequent newsroom reality: Political polling or a piece of academic research arrives and is by and large blindly passed along to readers, viewers and listeners.

If it’s seemingly headline-grabbing, like the derided study on whether gay canvassers could change voters’ views in fundamental ways, the “news” value rises.

And in an era in which social media and competition make speed the frequent priority, there’s a greater chance that bad research is transmitted without much double-checking of methodology. Most newsrooms simply aren’t equipped to scrupulously double-check, and often aren’t inclined to when the research seems to come from a reputable organization or individual.
“It’s a huge concern,” Bill Marimow, editor of the Philadelphia Inquirer and a Pulitzer Prize-winning investigative reporter, told me. “It is fraught with peril to look at a study and go on the air or on the Web.”
Many prominent news organizations took note of the gay marriage study, which claimed to demonstrate how voters could be persuaded by talking to gay people. It was published in the respected journal Science and thus carried a distinct and inherent legitimacy for many journalists.
But Donald Green, a prominent Columbia University professor who agreed to be a co-author, has now asked the journal to retract the study. That happened after serious questions were raised about the handiwork of the primary researcher, Michael LaCour of the University of California, Los Angeles. The flap proved notable enough to merit front-page coverage Tuesday by the New York Times.
Several media organizations, including National Public Radio and Vox, have conceded mistakes and noted the significant frailties of the controversial paper.
“I think there’s a bias in academic journals and in the press towards big, surprising results,” said Chris Blattman, an economist and political scientist at Columbia University.
“This is understandable, since it’s by definition news. I think one lesson from studying the science of replication is that most large findings are false,” he said, pointing to work by Stanford University’s John Ioannidis on what Ioannidis finds to be frequently poor and redundant studies.
Marimow has clearly thought long and hard about the general topic and has distinct recommendations for reporters and editors.
“On any kind of complex subject that requires expertise beyond the professional expertise of a journalist, who may have been on the beat for only a year or two, I recommend assembling a panel of experts,” he said.
For example, say the subject is a sensitive issue of legal ethics. He’d find lawyers in town who are considered experts, perhaps ones who have even written a textbook on the topic. He’d approach them, get their views on the matter and also ask for the names of several others for whom they have great respect.
If it were about science or medicine, it would very much be the same.
Take prostate cancer, itself a tricky subject. He’d call Patrick Walsh, a pioneering surgeon at Johns Hopkins in Baltimore, ask whether Walsh had read the piece and what he thought of it, and then elicit suggestions for what the reporter might read to get a solid grasp of the matter.
“Then I would say, ‘Who are four or five other surgeons with a range of opinions whom I can call?’ I’d thus assemble a panel of five or six people who are nationally or world renowned.”
Without going down a roughly similar path, the perils for journalists are clear, he said. “The concern is that it is very easy for a journalist who has not spent a career focusing on one subject to be misled by data that has been massaged for a particular cause, not just the truth.”
Marimow didn’t paint with too broad a brush and noted that there are, for example, very solid science writers who may get supposedly important studies in advance and take the time to investigate. They should be driven by a natural skepticism and a quest to find a range of responses, “so you can provide meaningful analysis rather than be a stenographer.”
“But this doesn’t happen often,” he said, referring to the huge amount of research that arrives in newsrooms.
In talking to him, I had to privately wonder how often I have screwed up. At the same time, I know there are limits to what most journalists can do, as I recently learned.
I had written about a Pew Research Center study on the state of the media. I was later informed by Pew of “a correction we’ve made that affects the sentence from your piece.” The actual lines in question were these:
“Even among the top 10, though, total website and associated app audience varies dramatically – from roughly 130 million at the Yahoo-ABC digital network to just over 50 million for the U.K.-based Daily Mail. At the bottom of this top-50 list, The Dallas Morning News attracted 7 million visitors in the sample month of study.”
Pew was subsequently forthright enough to indicate, “The data in this table has been corrected to include an earlier omission. NJ.com has been added as no. 41 on the list, and entities after that each moved down one ranking. Thus, the list now includes 51 outlets but all findings are still based on the top 50.”
It apologized. But how could I possibly have known in the first place?
So what’s a reporter to do about far bigger slip-ups in research?
“The right reaction (and way to report) might be to say ‘there’s a chance this isn’t true, even if they did everything right,’” said Blattman. “Some articles do this, but more don’t.”
“There are also some warning signs—basically risk factors that are associated with findings that could be false. These include small samples, people running lots of studies on a similar question, etc. Naturally big findings are going to randomly and honestly pop up in these instances.”
Blattman concluded, “Every time I see a flashy news headline about science in The New York Times, I think to myself ‘I bet the sample size was 26.’ If journalists just stopped taking small sample studies seriously that would be a big win.”