[Screenshot: a reader comment on USAToday.com]

Major media sites have started to get the religion of audience participation, but there’s been one big hitch: How do you harness the audience’s knowledge and participation without the forums devolving into a messy online brawl that requires time-intensive moderation?

Over the years, traditional media sites have tried forums, killed them, and tried them again, this time with more moderation. But the unruly side of online commentary continues to upset people. The Hartford Courant’s public editor Karen Hunter recently railed against the “uncivil discourse” in her site’s comments, blaming it on anonymous commenters and calling for a requirement that people use their real names. Topix CEO Chris Tolles then defended anonymous contributions: he compared unregistered commenters on Topix to registered ones and found that while unregistered comments are slightly more likely to violate posting guidelines, unregistered commenters contributed three times as many comments overall.

What has changed in the last year is that major media companies are no longer arguing over whether they should have comments under stories or blogs; instead, the debate is about how to moderate them and even highlight the best ones in eye-catching editorial spaces. Many sites are embracing the concept of “news as a conversation” and trying to create active exchanges among reporters, editors and readers online. The New York Times recently released a more robust commenting system that lets readers recommend each other’s comments and features “Editor’s Selections” for the best comments in a thread. And last weekend BusinessWeek.com started highlighting one commenter per day on its home page, complete with the commenter’s photo.

Jonathan Landman, deputy managing editor for digital journalism at the New York Times, told me he thinks a balance of positive reinforcement and negative reinforcement is the way to go in moderating online comments. He likes the way Amazon.com gives people special badges when they use their real name.

[Photo: Jonathan Landman]

“You don’t have to moderate and say, ‘You have to give your real name,’ because you’re already taking down offensive or abusive stuff,” Landman said. “That’s negative reinforcement and we need techniques to give positive reinforcement [as well]. Giving your real name and getting recognized for that is one way. Another example is having the editor’s selections or having people recommend the better comments. So a mixture of positive and negative reinforcement techniques is the way to go.”

The Times has a special “moderation desk” to help all the bloggers and editors who are already moderating comments on their own blogs. Currently, all comments must be approved by humans before being posted on the site. Landman says that the team of four part-timers who were helping to moderate comments has already grown to 11, and he expects the Times to hire more people and train others to help out as comments expand onto other stories. While the Times might experiment with comments that aren’t pre-screened, he wouldn’t expect such threads in controversial subject areas at the outset.

“I think quality is more important than quantity,” Landman said. “You have to create a space where the conversation is the kind of conversation that appeals to the people in your world. There are places where the conversation gets really ugly and people don’t go to the New York Times to get yelled at.”

At BusinessWeek.com, comments are also filtered by people before being posted, though that might change as the volume of commenting goes up. BusinessWeek.com executive editor John Byrne told me the number of comments at the site was up 29% in 2007, and that the site is developing automated filters to help the human moderators. Byrne said his #2 goal for this year — after growing the site — is having the deepest, most meaningful engagement for users.
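BusinessWeek hasn’t said publicly how those filters work, but the general shape of such a system is simple: cheap automated checks reject the obvious junk and route borderline comments to the human queue. Here’s a minimal sketch in Python; the word lists, patterns and routing labels are all hypothetical, not BusinessWeek’s actual system:

```python
import re

# Hypothetical lists: a real filter would use far larger, regularly
# updated lists, and likely a trained spam classifier as well.
BLOCKLIST = {"freemoney", "cheapmeds"}
SUSPECT_PATTERNS = [
    re.compile(r"https?://\S+"),  # links are a common spam signal
    re.compile(r"(.)\1{5,}"),     # loooooong character runs
]

def triage(comment: str) -> str:
    """Route a comment to 'reject', 'review' or 'publish'."""
    words = set(re.findall(r"[a-z']+", comment.lower()))
    if words & BLOCKLIST:
        return "reject"   # clear violation: drop without spending human time
    if any(p.search(comment) for p in SUSPECT_PATTERNS):
        return "review"   # ambiguous: send to the human moderation queue
    return "publish"      # nothing flagged: post (or pre-screen) as usual
```

The point of the automation isn’t to replace the moderators; it’s to spend their time only on the comments a machine can’t judge.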

“We are rewarding our readers who make comments on our site by going to the reader and saying, ‘We like what you’re saying and want to feature it in a prominent way, can you send us a digital picture of yourself so we can put it on the home page?’” Byrne said. “This is about elevating our conversation and giving credence to the idea that the web is a dialogue and not a lecture. The truth is that very few people are delivering on it, having reporters really engage with readers or elevating comments and saying, ‘This is as important as any story we have, any video we have, any audio we have.’”

While Byrne doesn’t mind anonymous comments on the site, he wants to make sure that good commenters are rewarded by having their picture placed prominently on the site — making them as prominent as the authors or subjects of stories. Plus, he has plans to reward the best contributors at the end of the year with a special dinner with him and other top editors at the magazine. (I will be running a longer Q&A with Byrne in a future post on MediaShift.)

Legal Immunity for Moderating Comments

One of the big arguments in the debate over moderating online comments is that if you start to edit people’s comments before publishing them, you open yourself up to liability in defamation cases. It turns out that’s not actually true. The Communications Decency Act of 1996 was largely struck down by the courts, but one important part that remained, Section 230, protects online services from liability for people’s comments even if they are edited prior to publication. The only time a service might become liable is if editors change the meaning of the post and make it libelous.

David Ardia, a fellow at the Berkman Center for Internet & Society at Harvard University and the director of the new Citizen Media Law Project (CMLP), wrote a great primer on liability and immunity under Section 230 on his blog at the CMLP. According to Ardia, the following online activities are provided immunity from lawsuits because of the law:

> screening objectionable content prior to publication
> correcting, editing or removing content
> soliciting, encouraging or selecting content for publication
> paying a third party to create or submit content, and even
> leaving content up after being notified that the material is defamatory

So with all those safe harbors — including for user-submitted content from the scene of breaking news — why have media companies been so wary of liability over online comments? Ardia told me it’s tough to change the mindset of traditional media folks, who are used to newspaper publishers being liable for everything that’s printed in their pages.

“For people who grew up in another era, it’s a much different way of looking at things,” Ardia said. “In newspapers, the publisher is responsible for everything whether it’s a letter to the editor or a classified ad. That’s been the standard of liability for a long time. There’s a real learning process in the new liability terrain…You do see the courts showing discomfort because it doesn’t feel fair to them. There are these feelings amongst traditional news organizations that this feels strange and unfair. It’s taking some time to settle in.”

Ardia says that sites that pre-screen comments have won every lawsuit in which they were accused of being liable for comments made in their forums. There’s a pending case against hyper-local site iBrattleboro, with the plaintiff claiming that because the site moderates comments and tries to keep the environment civil, it should be liable for letting a defamatory comment through. Ardia believes the court will grant the site’s motion to dismiss the case against it, though not against the commenter himself.

There are other legal issues that news sites must consider when allowing comments on their sites. Ardia recently reported on the Idea Lab blog that a Kansas University investigator got a search warrant to go through the Lawrence Journal-World newspaper site’s computer servers to get information on an anonymous commenter who gave his opinion about drugs being involved in a recent murder. The newspaper questioned the legality of the search warrant, and the investigator backed down. But Ardia is worried that law enforcement authorities will use search warrants to go on fishing expeditions at news organizations, chilling speech on online forums.

What’s probably most interesting about this case is that after all the brouhaha over the search warrant, the original anonymous commenter came back to apologize online about the comment, saying “I would like to take some time to apologize for any misinformation.” Perhaps the person had anonymous-posting remorse?

Where Do You Draw the Line?

One of the biggest challenges with moderating comments is figuring out which comments to accept and which to discard — and how many hoops you force people to go through to join the conversation. While the New York Times and BusinessWeek both have editors check each post before publishing it, other mainstream news sites rely on Topix for automated and/or human moderation, or use a mix of their own moderating systems before or after posting. Most sites will toss out comments that are obscene, libelous or personal attacks, and sometimes will ban people based on their IP address.
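The IP ban is the bluntest of those tools: the site keeps a list of banned addresses and ranges, and drops submissions from them before any other moderation runs. A minimal sketch with an invented ban list (real sites also have to contend with shared and dynamic addresses, which is why IP bans tend to be a last resort):

```python
from ipaddress import ip_address, ip_network

# Hypothetical ban list: single addresses and whole ranges.
BANNED_NETWORKS = [
    ip_network("203.0.113.7/32"),   # one persistent offender
    ip_network("198.51.100.0/24"),  # a range used by a spam operation
]

def is_banned(submitter_ip: str) -> bool:
    """Check a commenter's IP against the ban list before other checks."""
    addr = ip_address(submitter_ip)
    return any(addr in net for net in BANNED_NETWORKS)
```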

Scott Anderson is vice president of shared content for Tribune Interactive and pens the excellent Online News Squared blog. Anderson told me that Tribune allows each newspaper (including the Courant) to set its own commenting policy, but that he personally likes more open commenting systems.

[Photo: Scott Anderson]

“I am not a supporter of registration or other prior-restraint gating processes that ultimately only hinder the conversation,” Anderson said via email. “Our role is to activate and engage the conversation, not stifle and control it. Our role is to open ourselves and our sites to all kinds of communities and all kinds of people — not just those who fit our demographic filters or don’t like to cuss or don’t get rambunctious or don’t sometimes just say stupid things just to make a point. We shouldn’t shut out those who discuss topics such as race and — gasp, sex! — and other controversial topics that we shy from in print or vanilla-cize and homogenize to the point of mushy yogurt.”

Tribune is part owner of Topix and uses the startup to help run its newspaper-site forums in exchange for a split of revenue from the Google AdSense ads that run on those co-branded pages. Not surprisingly, Topix CEO Tolles agrees with Anderson about looser moderation standards and not pre-screening each comment by hand. Topix screens posts with an automated system ahead of time, and relies on a staff of moderators plus users flagging problematic posts.

“The real issue here is that the Internet’s real mission is to empower many-to-many conversations,” Tolles told me via email. “The long-term play is thousands of conversations between the people in the forums, not an editorial opinion being foisted on them by a battery of editors. With regard to anonymity — it’s pretty much a misnomer. We know roughly the same about people who post anonymously, as we do about people who register with an email address, and can ban people either way. We’ve found roughly the same amount of abuse from both kinds of people, and all you’re doing with registration is making people jump through hoops. Bad people jump through hoops more or less as much as good folks, at least with regard to commentary.”
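Topix hasn’t published the mechanics of its flagging system, but the usual pattern is easy to describe: a post stays visible until enough distinct readers flag it, at which point it’s hidden and queued for a moderator. A sketch under that assumption; the threshold and class names are invented, not Topix’s code:

```python
FLAG_THRESHOLD = 3  # hypothetical: distinct flags before a post is hidden

class ForumPost:
    def __init__(self, post_id, text):
        self.post_id = post_id
        self.text = text
        self.flagged_by = set()  # IDs of users who flagged this post
        self.hidden = False

    def flag(self, reporter_id):
        """Record a reader's flag; hide the post once enough distinct
        readers object, pending review by a human moderator."""
        # A set, so repeat flags from one user don't count twice.
        self.flagged_by.add(reporter_id)
        if not self.hidden and len(self.flagged_by) >= FLAG_THRESHOLD:
            self.hidden = True
```

Flagging scales with the audience rather than the staff, which is what lets a site the size of Topix skip hand-screening every comment.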

Howard Owens, director of digital publishing at GateHouse Media, doesn’t think that sites should make deals to outsource their forums to Topix, saying “the last thing you want to do is turn over your commenting system to a vendor with an express intent of beating you in your own market.” While GateHouse does have some open forums, Owens told me he would like to “fix that” with registration systems and more moderation from journalists.

“I’m a big believer in the conversation,” Owens said via email. “I believe the conversation makes us all smarter, when it’s a good conversation. The great, wonderful beauty of the Internet is that it enables everybody to join the conversation. In order for us to really benefit from the conversation, and not see it crushed by bad actors, [we need] to try and guide that conversation. I think there is a role here for journalists to play in elevating the expectations for that conversation…That’s the high ideal behind what I’m advocating, even as it flies in the face of the wide-open ideals of some digerati.”

Owens, the Times’ Landman and BusinessWeek’s Byrne all talked glowingly about reputation management systems similar to those on eBay and Amazon, in which people rate the quality of each other’s contributions. Over time, the best commenters gain trust and the sites spend less time on filtering. On sites such as BusinessWeek.com and USAToday.com, those participants eventually gain not only credence but also visibility, with their comments highlighted by editors. Perhaps a reputation system with a mix of human and automated filters, and both positive and negative reinforcement, is the answer to the long-standing conundrum of opening up the conversation online while keeping it civil.
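None of the three sites has published a spec, but the reputation loop they describe is easy to sketch: each approval, rejection and reader rating moves a commenter’s score, and past a trust threshold their comments skip the pre-moderation queue. Every number and name below is illustrative, not any site’s actual system:

```python
TRUST_THRESHOLD = 10  # hypothetical score at which pre-screening is waived

class Commenter:
    def __init__(self, name):
        self.name = name
        self.score = 0

    def record_outcome(self, approved, reader_votes=0):
        """Update reputation after a comment is moderated and rated."""
        # Rejections cost more than approvals earn, so trust builds
        # slowly and is lost quickly: Landman's mix of positive and
        # negative reinforcement, expressed as arithmetic.
        self.score += (1 if approved else -3) + reader_votes

    def is_trusted(self):
        # Trusted commenters post immediately; everyone else still
        # waits in the human moderation queue.
        return self.score >= TRUST_THRESHOLD
```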

What do you think? Should major news sites pre-screen comments, moderate them after they’ve been posted or use a combination of sticks and carrots for participants? What places do you feel welcome to comment and what keeps you from commenting? Share your thoughts in the comments below.

UPDATE: In the comments, Paul Massey pointed out that the discussion of liability for comments didn’t take into account foreign defamation laws:

Comments written in the U.S. which offend in another country may be subject to the defamation laws of another country where immunity may not apply provided reputation has been damaged in that country. Therefore a U.S. law analysis is shortsighted.

Harvard’s David Ardia responded to me about that via email, saying that the risk of foreign litigation was actually pretty small:

[Massey] raises a good point, but one that I think is of relatively little concern for journalism organizations that don’t have substantial assets outside the United States. There isn’t a clear test that courts will apply when faced with the question of whether to exercise jurisdiction over a foreign publisher on the Internet. Most courts have held that simply making something available worldwide — without more — is insufficient to subject publishers to jurisdiction outside their home country.

Even if a court in another country were to exercise jurisdiction over a U.S. publisher and find liability, the plaintiff would still have to come to the U.S. to seek enforcement of that judgment against the U.S. publisher. U.S. courts tend to be reluctant to do so, especially if the judgment originated in a country that does not have free speech protections commensurate with the First Amendment.

Large publishers that have assets in countries like France (which has Nazi memorabilia restrictions) or the United Kingdom (where defamation law is generally more favorable to the plaintiff) need to be cognizant of the possibility that a domestic court could exercise jurisdiction and hold them liable for the acts of their users. For most publishers in the U.S., however, the risk is quite small.
