Jonathan Stray has opened a new conversation about measuring accuracy in news reports. Stray, who works at the Associated Press and blogs on the side, comes at the issue with a refreshingly analytical, data-driven perspective. His in-depth post, which I urge you to read, does a couple of things. It summarizes important research:
There seems to be no escaping the conclusion that, according to the newsmakers, about half of all American newspaper stories contained a simple factual error in 2005. And this rate has held about steady since we started measuring it seven decades ago.
And it offers some useful ideas:
We could continuously sample a news source's output to produce ongoing accuracy estimates, and build social software to help the audience report and filter errors.
Stray understands that it's no good to count correction rates without tracking error rates, and vice versa -- you need to know both if you want to assess a news organization's performance. So he imagines a not-too-distant future in which many or most newsrooms sample their story output regularly to gauge the frequency of errors and encourage readers to submit (and rank) error reports. With some sort of standardization of both metrics, and if newsrooms could get comfortable with publishing these numbers, we'd finally have a useful yardstick for accuracy in news coverage.
I'm all for Stray's vision. At MediaBugs, we've spent the last 18 months building one engine to power this accuracy-enhancing machine -- the part about "social software to help the audience report and filter errors." It's been a rewarding but difficult quest. So I've spent considerable time thinking about the same issues as Stray, and I'd like to respond to his post by proposing a framework for thinking about the big question here -- which, it seems to me, is, "What's the holdup?"
Why are we still so far away from this vision? I think the answer lies not only where Stray looks, with issues of measurement and methodology, but also in the direction that Jay Rosen pointed us in his recent exploration of the interminable feud between journalists and bloggers.
In short, I don't think this is purely a data problem. It's equally a psychological one. It's not just the numbers that are hard; it's the feelings.
Here, as I see it, are the feelings in the newsroom that stand in the way of building Stray's accuracy machine:
1. Denial: "There's no problem here!"
Let's start by acknowledging, as Stray does, that there's nothing new about the problem of accuracy in news coverage. We've long known the dismal bottom line from the research in this area. Roughly half of newspaper stories contain errors; only a tiny fraction of those errors ever get corrected. The work Stray reviews to find this data -- much of it by Scott Maier of the University of Oregon -- is the same research I reviewed in starting MediaBugs. It's the same stuff Craig Silverman highlighted in his definitive book on this subject.
These results aren't secrets! They ought to be the baseline for discussion of the issue in every newsroom in the country. Yet time and time again, we find that journalists' jaws drop in disbelief when they encounter these statistics. And when pollsters report dismal drops in public faith in news coverage, the same journalists will fail to see any connection between high error rates and low trust.
In numerous lengthy conversations with journalists, I've encountered a litany of excuses, from "those aren't real errors" to "people just want to read news they agree with." Instead of fixing the problem, we blame the messenger.
Why is the field of journalism in such stubborn denial? Why isn't the profession doing anything about what is, from any reasonable perspective, unacceptably poor performance?
Journalists routinely declare that their work rests on a foundation of public trust. Yet readers regularly tell us that they don't trust journalists. Something is broken here.
I'd suggest that it's time journalists stop insisting that their readers are confused or stupid or partisan and start getting their own house in order. The first step is simple: admit that the problem is real.
2. Overload: "There's too much on our plate."
At the very moment when every element of journalism -- the business, the craft, the calling -- seems to be undergoing violent metamorphosis, many practitioners view the effort to improve the correction process as an unaffordable luxury.
Why dot your "i"s when the roof is caving in? Is fixing errors just an exercise in Titanic deck-chair arrangement?
It's easy to sit on the outside of organizations in turmoil and tell them what to do. But moments of convulsive institutional change are also opportunities to reform entrenched practices and install new routines.
Far-sighted leaders in newsrooms large and small have already begun to move the correction process from the margins of their work flow to the center. All management is about priorities. Journalists will start to improve their accuracy and win back public trust when their organizations signal them that these goals come first.
3. Pride: "We'll deal with this on our own."
Journalists who admit there's an accuracy problem and prioritize solving it face another mental hurdle that may well be the toughest to leap. The newsroom ethos is usually a competitive one: Individuals and organizations both motivate themselves by trying to beat somebody else. We gauge our success by printing or posting the scoop first, topping the circulation numbers or unique user charts, or nabbing the prize. All this works well enough up to a point. But it gets in the way when we try to deal with a problem whose solution demands humility and openness more than sharp elbows.
Any newsroom that's serious about improving its accuracy needs to accept Dan Gillmor's dictum that "our readers know more than we do" and open up its processes to make use of that knowledge. This means relinquishing a little of the profession's fierce independence. No editor is going to, or should, give up the right to decide whether a correction is warranted each time a problem gets flagged. But the smartest editors will accept that they need to give up the chokehold they've traditionally kept on the process of making that decision.
Climbing The Ladder of Transparency
In the field of corrections as anywhere else, "openness" isn't binary -- it has gradations and nuances. I like to imagine these as a sort of ladder of transparency that news organizations need to climb.
On the first rung of this ladder, journalists readily fix mistakes they learn about and conscientiously disclose and record the details of each fix. (Most newsrooms declare allegiance to this ideal, but, as our MediaBugs research sadly shows, the majority still fail to live up to it.)
One rung up, news outlets effectively solicit error reports from their audiences, making it clear that they welcome the feedback and will respond. The Report an Error Alliance is trying to push more news organizations to climb up here.
On the next rung up, newsrooms also willingly expose their own internal deliberations over particular controversies, explaining why they did or didn't correct some issue readers raised and leaving some sort of public trail of the decision. At some publications, the ombudsman or public editor takes care of some of this.
On the final, topmost rung, the news organization assures accountability by turning to a neutral third party to maintain a fair record of issues raised by the public. This shows external critics that the newsroom isn't hiding anything or trying to sweep problems under the rug. This is a key part of our model for MediaBugs.
Accountability Reporting - On Ourselves
Plainly, we've got a ways to go. At the conclusion of his accuracy essay, Stray writes, "I'd love to live in a world where I could compare the accuracy of information sources, where errors got found and fixed with crowdsourced ease, and where news organizations weren't shy about telling me what they did and did not know."
Me too! And I think Stray is correct to say that we won't get there without admitting the seriousness of the accuracy problem, devising standardized accuracy metrics and improving the feedback loop for reporting errors. Yes, yes, and yes.
But since we keep bumping into invisible barriers on the way to this destination, we need to go further. We must put ourselves on the couch. Journalists aren't very good at self-scrutiny, and the hardbitten old newshound in each of us might scorn such work as navel-gazing. Maybe it would help if we think of it, instead, as accountability reporting -- on ourselves.