Code of Conduct: Lessons Learned So Far

Rather than responding in detail to the many comments on my Draft of a Bloggers’ Code of Conduct or the earlier Call for a Blogger’s Code of Conduct, as well as some of the thoughtful discussion on other blogs, I thought I’d summarize some of my chief takeaways from the discussion so far.

These include:

  • “We don’t need no stinkin’ badges” – or do we just need better ones?
  • The “code of conduct” needs to be much more modular
  • Mechanism is better than policy
  • Constructive Anonymity vs. Drive-by Anonymity
  • There are some nuanced legal issues to be looked at
  • There’s a lot of strong feeling on the subject, but civility still matters

Since this post is so long, I’ve put my extended comments “below the fold.” Click the link below to keep reading, or use the links above to jump to a specific section.

“We don’t need no stinkin’ badges” – or do we just need better ones?

[Image: Weasley Reasoning’s “no sheriff here” badge]

A number of commenters have been unable to resist a nod to the famous “stinkin’ badges” line from The Treasure of the Sierra Madre, and to be sure, the image based on a sheriff’s badge appears to have been ill-chosen. It framed the issue wrongly, suggesting suppression of bad behavior rather than encouragement of good behavior. I really liked Philippe’s suggestion: “I would prefer a positive image as a symbol of respect between bloggers instead of a symbol of repression.”

The sheriff’s badge was also a bad image both because of its local cultural context (the American West) and because it might imply that the internet as a whole is some kind of untamed frontier. In my conversations with the mainstream press, I’ve been at pains to suggest that the internet is no worse than other media — you have only to look at the excesses of political talk radio to see incivility as bad as anything online. Nonetheless, as a number of people pointed out, it wasn’t a good idea to reinforce the tendency of mainstream media to exaggerate internet risks.

I have to confess that we didn’t put as much thought into the images as we should have. We were a little rushed by the timing of the New York Times story, and wanted to put something up for people to react to. And they certainly did. While people might have reacted just as strongly to some other image, it seems that the framing of the issue contributed to the negative reaction from many people.

[Images: Creative Commons “attribution,” “noncommercial,” and “share alike” logos]

The original idea was simply for a shorthand mechanism akin to that provided by Creative Commons for sites to state their copyright policies. A site can simply use the logo(s) with a link to the license text, rather than reproducing the entire text somewhere on their site. For example, the three logos shown here are a shorthand way of saying that a document is available under the “attribution required, non-commercial use, share-alike” license terms.

I’m particularly perplexed by folks like Jeff Jarvis saying (in his entry No Twinkie Badges Here): “And when I moved into the place that is my town, I didn’t put up a badge on my fence saying that I’d be a good neighbor (and thus anyone without that badge is, de facto, a bad neighbor). I didn’t have to pledge to act civilized. I just do.” A quick look at BuzzMachine shows that Jeff does in fact have just such a “badge” on his site. In fact, he has two. It’s just that they are text badges rather than graphics. There’s one prominent link entitled Rules of Engagement that states “Any email sent to me can be quoted on the blog. No personal attacks, hate speech, bigotry, or seven dirty words in the comments or comments will be killed along with commenters.” And there’s another one entitled About Me/Disclosures that lists all of Jeff’s financial entanglements.

Many sites have such disclosures and statements of policy. (In addition to the BlogHer Community Guidelines that we modeled our draft on, take a look at the Yahoo Answers Community Guidelines, FM Publishing’s Author Mores, Wikipedia Policies and Guidelines, Amazon’s Guidelines for Reviewers and Dan Gillmor’s Principles of Citizen Journalism.) My goal here was to propose a system that would make it easier for sites to state their policies without having to write their own. There’s no intent to create a single code that every blog is somehow supposed to sign up for, any more than the idea of Creative Commons is that every site must adopt the same license. (See Mechanism is better than policy, below.)

The “code of conduct” needs to be much more modular

Where we fell down, apart from the negative framing given by the sheriff’s badge, was the lack of granularity in the proposed assertions and associated images. There are actually several different values that a site might or might not want to express. For example, a site aspiring to a higher level of journalistic integrity might want a logo that linked to a statement of its fact-checking policy; a site that allows anonymity for good reasons might want a logo that links to its commitment to protecting the identity of posters; a site that wants to enforce civility might want to say so. The advantage of a widely agreed-on set of “rules of engagement” with associated logos is that people don’t have to read someone’s “terms of service” to understand what the policy is on a given blog. It’s conveyed by shorthand via a symbol.

As with Creative Commons, for shorthand to be useful, any proposed symbols need to point to individual policies rather than to an aggregate. We made a mistake by lumping a bunch of things together that need to be treated separately. Copyright, for instance, has little to do with civility. And it was a particular mistake not to make anonymity an optional element. (I caught this as soon as I posted the first draft and mentioned it in the comments the next morning, but left the draft to stand as there were already many comments on the subject.) But anonymity is a complex subject, so more on that in a moment.

I’m hopeful that we can isolate particular axioms, so to speak, that a site might want to assert. I’d be delighted if some of the very smart people reading this blog would propose their own list of such modular axioms. I’m also going to spend a bit more time thinking about how to frame this idea more positively, with logo buttons that are less charged, more functional, and more specific as shorthand for particular policies. If you want to help with this effort, please go to the discussion page for the draft code of conduct over at blogging.wikia.com/bcc.
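To make the idea concrete, here’s a minimal sketch in Python of how a modular set of assertions might be modeled. The badge names, policy statements, and URLs below are purely hypothetical placeholders of mine, not anything that’s been agreed on: each axiom maps to its own logo and linked policy text, and a site displays only the subset it chooses, just as with Creative Commons licenses.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyBadge:
    """One independent assertion a site can choose to display."""
    code: str        # short identifier, e.g. "civility"
    label: str       # human-readable statement, used as alt text
    policy_url: str  # link to the full policy text
    image_url: str   # the badge graphic

# Hypothetical examples of the modular axioms discussed above.
BADGES = {
    "civility": PolicyBadge(
        "civility", "We enforce civility in comments",
        "https://example.org/policies/civility",
        "https://example.org/badges/civility.png"),
    "anonymity": PolicyBadge(
        "anonymity", "We protect the identity of posters",
        "https://example.org/policies/anonymity",
        "https://example.org/badges/anonymity.png"),
    "fact-check": PolicyBadge(
        "fact-check", "We fact-check before publishing",
        "https://example.org/policies/fact-check",
        "https://example.org/badges/fact-check.png"),
}

def badge_html(codes: list[str]) -> str:
    """Render the chosen badges as linked images, Creative Commons-style."""
    return "\n".join(
        f'<a href="{b.policy_url}"><img src="{b.image_url}" alt="{b.label}"></a>'
        for b in (BADGES[c] for c in codes)
    )

# A site asserts only the policies it actually wants to make:
print(badge_html(["civility", "anonymity"]))
```

The point of the structure is exactly the unbundling argued for above: civility, anonymity, and fact-checking are separate entries, so asserting one says nothing about the others.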

It’s possible, though, that it will be very difficult even with a set of modular axioms to create the outcome we want through a set of policy statements. Gail Ann Williams pointed me to the WELL’s Online Moderator Guidelines and Community-Building Tips with the comment: “After 15 years in management at The WELL, in a context where there is close to no anonymity, paid participation, and twenty-two years of debate about what Stewart Brand’s famous WELL aphorism, ‘You Own Your Own Words’ (YOYOW), really means to participants and volunteer conference hosts, some things that seem simple turn out to be more complex.”

Mechanism is better than policy

I picked up on Kaylea Haskall’s original suggestion of Creative Commons-like logos because it seemed like a way for sites to easily articulate their standards (assuming we get the standards correctly modularized). However, even better would be actual community moderation mechanisms that would allow the community of readers to flag comments that they think are inappropriate, much as is done on Craigslist or eBay.

Slashdot’s moderation system may also be a good model. It uses community moderators to promote or demote comments, helping discriminate between valuable comments and noise, but allows readers to set their own threshold for what they want to see.
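As a rough illustration of the threshold idea, here’s a minimal sketch in Python. It is not Slashdot’s actual implementation, which layers moderation points, karma, and meta-moderation on top of a scheme like this:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    score: int = 1  # community moderators move this up or down

def moderate(comment: Comment, delta: int, lo: int = -1, hi: int = 5) -> None:
    """Apply one moderation, clamping the score to the allowed range."""
    comment.score = max(lo, min(hi, comment.score + delta))

def visible(thread: list[Comment], threshold: int) -> list[Comment]:
    """Each reader chooses their own threshold for what to see."""
    return [c for c in thread if c.score >= threshold]

thread = [
    Comment("alice", "A substantive point about moderation tradeoffs."),
    Comment("anon", "You're all idiots."),
]
moderate(thread[0], +2)  # promoted as insightful
moderate(thread[1], -2)  # demoted as noise
for c in visible(thread, threshold=1):
    print(c.author, c.score)  # only alice's comment clears the bar
```

The design point is that moderation only reorders visibility; nothing is deleted, and each reader decides where to set the bar.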

So rather than a blogger’s code of standards, perhaps what I ought to be calling for is moderation systems integrated with the major blogging platforms.

John at LibraryThing wrote:

One technical suggestion, employed by my employer: letting users flag inappropriate comments, which then become click-to-see. This lowers the visibility of the trolls, without censoring them. For an example, see this thread:

http://www.librarything.com/talktopic.php?topic=8702

Message 5 is no longer immediately visible, because it was flagged by a certain number of users as inappropriate. But it can still be seen, if you want to, by clicking on the ‘show’ link. It’s a compromise, but perhaps a practical one.

Similarly, it might help the situation to let users configure whether or not they want to see flagged content, and set the default for flagged content to some sort of reduced visibility.

I really like this, as it addresses one of the biggest hesitations I personally have about deleting comments, namely that deleting part of a conversation can make it impossible to reconstruct what really went on. And there have also been problems in the past with blog owners selectively editing conversations to present themselves in the best possible light. A mechanism that preserves comments while hiding them “in the back room” so to speak would seem to me to be a really useful tool.

I immediately wrote John about the availability of the code, and he said it was from “LibraryThing’s groups/talk section, which was built in-house. Tying comment visibility to user flagging was added last year, in response to a spate of abusive behavior.” But more importantly, he added, “Creating blog plugins for this is a great idea,” and offered to help anyone who wanted to do it. I’ve introduced him to the folks at Movable Type, WordPress, and Blogger, and hopefully we can get something going.
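I haven’t seen LibraryThing’s code, so the following is only a guess at the shape such a plugin’s core logic might take: a minimal Python sketch in which the flag threshold, data layout, and markup are all assumptions of mine. The essential point is that heavily flagged comments are collapsed behind a “show” link rather than deleted:

```python
FLAG_THRESHOLD = 4  # assumed: flags needed before a comment is collapsed

def flag(comment: dict, user_id: str) -> None:
    """Record a reader's flag; each user counts at most once."""
    comment.setdefault("flagged_by", set()).add(user_id)

def render(comment: dict) -> str:
    """Collapse heavily flagged comments behind a click-to-see link
    instead of deleting them, so the conversation can still be
    reconstructed later."""
    if len(comment.get("flagged_by", set())) >= FLAG_THRESHOLD:
        return ('<div class="collapsed">[flagged by readers] '
                '<a href="#" class="show">show</a></div>')
    return f'<div class="comment">{comment["text"]}</div>'

comment = {"text": "an abusive remark"}
for uid in ("u1", "u2", "u3", "u4"):
    flag(comment, uid)
print(render(comment))  # hidden behind click-to-see, not deleted
```

A real plugin would also need per-blog configuration and some spam-resistant notion of who is doing the flagging, but the data model really is this small.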

Comment moderation by the community of readers, especially when offensive comments are not deleted but merely made less visible, seems to me to be much better than top-down deletion by the site owner, even if the latter may sometimes be the only way to keep the conversation from going off the rails.

Constructive Anonymity vs. Drive-by Anonymity

Another place where we clearly erred in the first draft is in the suggestion that anonymity should be forbidden, as there are most certainly contexts where anonymity is incredibly valuable. (Some that come to mind include whistleblowing, political dissent, or even general discussion where someone might not want to confuse their personal opinions with those of an organization to which they belong. As one commenter remarked, it might even be useful for a shy person to whom anonymity gives a bit of courage.)

That being said, there is a strong connection between “drive-by anonymity” and lack of civility. Jaron Lanier just sent me a pointer to a thoughtful article he wrote for Discover Magazine in March, shortly before this controversy erupted:

People who can spontaneously invent a pseudonym in order to post a comment on a blog or on YouTube are often remarkably mean. Buyers and sellers on eBay are usually civil, despite occasional annoyances like fraud. Based on those data you could propose that transient anonymity coupled with a lack of consequences is what brings out online idiocy. With more data, the hypothesis can be refined. Participants in Second Life (a virtual online world) are not as mean to each other as people posting comments to Slashdot (a popular technology news site) or engaging in edit wars on Wikipedia, even though all use persistent pseudonyms. I think the difference is that on Second Life the pseudonymous personality itself is highly valuable and requires a lot of work to create. So a better portrait of the culprit is effortless, consequence-free, transient anonymity in the service of a goal, like promoting a point of view, that stands entirely apart from one’s identity or personality. Call it drive-by anonymity.

Anonymity certainly has a place, but that place needs to be designed carefully. Voting and peer review are pre-Internet examples of beneficial anonymity. Sometimes it is desirable for people to be free of fear of reprisal or stigma in order to invoke honest opinions. But, as I have argued (in my November 2006 column), anonymous groups of people should be given only specific questions to answer, questions no more complicated than voting yes or no or setting a price for a product. To have a substantial exchange, you need to be fully present. That is why facing one’s accuser is a fundamental right of the accused.

Furthermore, sites make a traffic tradeoff when they require registration, giving up the additional flow they would get by not requiring it. And of course, on the net, identity is very easy to spoof, so even if an email address or other form of identification is required, it doesn’t mean that there’s a real or easily traceable person on the other side.

However, sites that have problems with vandals disrupting their online discussions may prefer to require proof of identity as a condition of participation rather than shutting down comments entirely.

There are some nuanced legal issues to be looked at

Jeff Jarvis makes the claim that the code of conduct I’ve proposed “threatens to give back the incredible gift of freedom given us in Section 230.” He points to the EFF page explaining Section 230 and says “Go read about that,” but he didn’t follow his own advice, since the page says, among other things: “Courts have held that Section 230 prevents you from being held liable even if you exercise the usual prerogative of publishers to edit the material you publish. You may also delete entire posts.”

That being said, I can see that when I converted the wording of my original exhortation to “take responsibility not just for your own words, but for the comments you allow on your blog” into the statement that begins “We take responsibility…” I might well be proposing something that would weaken legal protections.

(A reminder about the context of the original statement. It was inspired by Chris Locke’s assertion that he wasn’t responsible for what anyone else said or did on his site, where the threatening images of Kathy Sierra had appeared. That seemed to me to be an abdication of responsibility.)

A site owner obviously doesn’t want to take legal liability for the actions of commenters on their site. But at the same time, it seems to me that we need to eschew the idea that we bear no responsibility for the tone that we allow on our sites. A site owner does have the ability to delete inappropriate comments, to ban IP addresses, and to impose moderation systems or shut down comments entirely if the griefers get out of hand.

Still, the legal implications do need some attention. A lawyer of my acquaintance wrote in email:

Under US law, there’s potentially an overlap/conflict between some aspects of the proposed code and existing legal protections for ISPs, bloggers, and others who provide forums for user-generated content. It’s worth thinking about how to take those protections into account in discussing the code. Issues include:

  • how to avoid losing or weakening legal protections against liability for infringement (and even defamation, in some circumstances) that now exist for ISPs, bloggers, and others, and that are partly based on the assumption that posted content is not being monitored
  • coordinating the code with existing legal tools, such as the DMCA take-down procedure under Section 512, that benefit people who provide forums for user-created content
  • avoiding situations that force people into making legal judgments in public about [issues] that they really aren’t prepared to make, or that force them into appearing to have made legal judgments (e.g., explaining that they’ve removed a post because it’s infringing or libelous, when it’s really not)

Also, outside the US, things are different.

If it hasn’t happened already, it might be worth convening a small group of congenial and sensible lawyers to talk about it.

In short, there’s some thinking to be done here, but it’s better done by real lawyers rather than the all-too-common would-be lawyers of the net.

There’s a lot of strong feeling on the subject, but civility still matters

A number of posters are obviously not familiar with Godwin’s Law, and in particular, the idea that (per Wikipedia), “There is a tradition in many newsgroups and other Internet discussion forums that once such a comparison is made, the thread is finished and whoever mentioned the Nazis has automatically ‘lost’ whatever debate was in progress.” Even apart from that strike against their argument, those commenters who equate the idea of a code of conduct with censorship seem to me to fail to understand what I proposed: not some kind of binding code that bloggers would somehow be required to follow, but a mechanism for bloggers to express their policies.

That being said, I am trying to encourage a kind of social self-examination on the part of the blogging community. Many people have written to say that they have no compunctions about deleting unpleasant comments. But I believe that there’s a strong undercurrent on the internet that says that anything goes, and any restriction on speech is unacceptable. A lot of people feel intimidated by those who attack them as against free speech if they try to limit unpleasantness. If there’s one thing I’d love to come out of this discussion, it’s a greater commitment on the part of bloggers (and people who run other types of forums) not to tolerate behavior on the internet that they wouldn’t tolerate in the physical world. It’s ridiculous to accept on a blog or in a forum speech that would be seen as hooliganism or delinquency if practiced in a public space.

I’m not a big fan of political correctness. I love intense, passionate discussion. I believe that there’s a lot of great discussion in the comments on this topic, even when the people concerned are disagreeing with me. But I’ve taken a stronger stand myself as a result of this discussion in saying, “if there’s no substance to the comment, just insults, I’m not going to give it space.” If more people feel empowered to make that decision about their commenters, that’s not a bad thing.

I challenge anyone who reads the comments on the two entries about the Code of Conduct that are linked to at the start of this entry to tell me that I’m suppressing discussion just because I deleted a couple of comments by potty-mouthed kids who didn’t have anything to say but epithets.

It concerns me that Kathy Sierra, whose bad experience triggered this discussion, thinks that a code of conduct such as I proposed would do no good. (She points out that the threatening comments about her are not on sites that she controls.) But I believe that civility is catching, and so is incivility. If it’s tolerated, it gets worse. There is no one blogging community, just as there is no one community in a big city. But as Sara Winge, our VP of Corporate Communications, pointed out, it’s not an accident that “Civil” is also the first two syllables of “civilization.”

What’s more, when an exchange of ideas turns into an exchange of insults, everyone loses. As Colin Rule wrote in a post entitled The role of manners in a divided society:

So is it true that civility and politeness should go out the window when confronted with deep and intense feelings? Well, not to sound too much like “Mr. Manners,” but I think it’s at that point that civility and politeness come to matter more. When emotions get the better of someone, and that person uses language intended to incite and shock rather than reason, it creates an easy target for the other side; the most likely response becomes a similar provocative statement, and then the exchange becomes focused on the excesses of each statement rather than the issues at hand….

This dialogue gets us nowhere. It makes it easy to dismiss the other side as foolish, nonsensical, and incapable of rational dialogue. This, in turn, worsens the disagreement and encourages further extremism. The only way out of this situation is for reasoned individuals to say enough is enough, and to rebuild a moderate majority who insist upon civil, polite dialogue.

Colin has neatly summarized what I hoped to accomplish with my call for a code of conduct. The mechanisms I proposed may not be the right ones, but I am convinced that the goal is worthwhile. Let’s figure out the right way to reach it.
