Clay Shirky just gave a fantastic talk at ETech about the patterns behind moderation systems which went beyond that. Best talk so far for me. Added 10 Mar 2006: just to clarify–I was typing as Clay spoke, so these notes are a mixture of his words and my paraphrases. So before taking umbrage or lauding Clay as a grand master of the English tongue, you should check with Clay to ensure he actually said what I’ve written.
Clay Shirky at ETech, 8 March 2006
It was interesting to hear Jon talk about patterns, because that's what I want to talk about. I want to propose a pattern language for moderation systems, and talk a little bit about why I think we need such a thing.
A pattern language is a tool adopted from architecture for speaking in a moderate amount of detail: a description of a problem-and-solution combination, or a goal-and-strategy combination, that's detailed enough that you can see how to build it but not so detailed that it can't be applied to many different domains.
Moderation strategies are a problem developers run into again and again. We don't have a way to talk about the issues or the strategies.
Imagine a measurement you call communal freedom: how much freedom does the software allow for the users to communicate with each other as a group? How much does the software catalyse communication?
Axes. The X axis is freedom. At the left edge is Notepad: no matter how many users, it's never going to catalyse conversations. At the right edge is Usenet: anyone with an email address or web access can start or contribute to a conversation. The Y axis is annoyingness. Trolling, cascades, etc. are on the Y axis. Here's the problem: you'd like to be able to launch applications that have a moderate amount of communicative freedom, where users have moderate conversations with one another, in exchange for moderate problems.
But once you give a social tool to people, you get all the social problems.
You live in a world where you have to mitigate those effects. Existing apps are being made more social every day, so we need a way to talk about the issue. Imagine you're a developer facing this problem, launching an app with social ramifications. You want to mitigate the annoyingness somehow, and you want to learn from what has been done already.
Here's the difficulty we face today: if you go out and look at the apps currently running, you're either operating at a very high level of abstraction or down at the level of lines of code. Let me illustrate this with Slashdot. It sees hundreds of thousands of readers per day, and every story can be commented on, but over the years it has done a good job of maintaining homeostasis and not being overwhelmed.
How does Slashdot do it? Members defend readers from writers. A small number of users care to an unusually high degree, and they form a membrane between readers and writers. But understanding the basic trick isn’t enough to tell you how to design the system.
So go to a given comment page. Every comment is rated on a 7-point scale, and the members who moderate the system effectively rank all the posts. The reader who isn't logged in, or hasn't changed the defaults, never sees posts rated 0 or -1. That's over 20% of the comments: a significant amount of filtering, a socially involved system.
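As a minimal sketch (this is illustrative, not actual Slashcode), the default-view filtering described above amounts to a simple threshold over per-comment scores:

```python
# Hypothetical sketch of Slashdot-style threshold filtering.
# The default reader threshold of 1 hides anything rated 0 or -1.

def visible_comments(comments, threshold=1):
    """Return only the comments at or above the reader's score threshold."""
    return [c for c in comments if c["score"] >= threshold]

comments = [
    {"id": 1, "score": 5},   # highly rated post
    {"id": 2, "score": 0},   # hidden by default
    {"id": 3, "score": -1},  # down-rated troll, hidden by default
    {"id": 4, "score": 2},
]

print([c["id"] for c in visible_comments(comments)])  # [1, 4]
```

A logged-in reader who lowers the threshold sees everything; the point is that the social judgement of the moderators is applied silently on behalf of the defaults.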
How does Slashdot get this done? [flowchart of the questions Slashdot asks when you click the comment button] Four decisions built on top of a single click. If the answer to every question is yes, you can rate, but if any is 'no' then you're reading but not moderating.
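The gate chain can be sketched like this. The specific gate names here are my guesses based on the rules described later in the talk (good behaviour, judges can't post), not the actual Slashcode questions:

```python
# Hypothetical sketch of the "four decisions on a single click":
# every gate must answer yes before a reader is allowed to moderate.
# Field names are illustrative, not taken from Slashcode.

def can_moderate(user):
    checks = [
        user.get("logged_in", False),            # is this a member at all?
        user.get("good_karma", False),           # has behaved well historically?
        user.get("has_mod_points", False),       # been granted moderation points?
        not user.get("posted_in_thread", False), # judges can't post here
    ]
    return all(checks)

reader = {"logged_in": True, "good_karma": True,
          "has_mod_points": True, "posted_in_thread": False}
print(can_moderate(reader))  # True
```

If any check fails, the click falls through to ordinary reading: the system degrades to non-moderation rather than refusing service.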
If you're designing a new social app and figuring out what you can learn from this: you're invoking four subsystems on a single click. One solution we thought worked for a while was "take the code and port it". Here are all the sites implementing the Slashcode base (empty page). Taking the Slashcode software in order to learn from Slashdot has turned out to be a really lousy way to work.
If neither the gestalt (defending readers from writers) nor the codebase adds value in telling us how to look at it, what does? I'm proposing a pattern language.
The problem that Slashdot faces is the tragedy of the commons. The aggregate attention of the users is the commons. Everyone has an incentive to see it maintained, for Slashdot to thrive. Every poster has an incentive to defect, to get attention for themselves. So here is how Slashdot attempts to stave off the tragedy of the commons.
1. Move comments to a separate page. This subdivides the total audience by area of interest, reduces the perimeter they need to defend, and sends the message that comments are ancillary to the posts themselves. You can ask yourself "do I need comments on the same page as the content?", and that design decision has ramifications.
2. Treat readers and writers differently. By making the distinction, Slashdot can figure out how to defend readers from writers. There are implicit differences in the userbase when it's seen as a whole.
3. Let users rate posts. The moderation system is how they engage the defences. There are patterns for the moderation system that you can take and implement for yourself without having to take the whole codebase.
4. Defensive defaults. Social software is the land of unintended consequences. Who will guard the guardians? Once one group of users passes judgement on others, I have to defend against abuse. Slashdot has a second set of systems to solve the "who will guard the guardians" problem: judges can't post, enlist committed members, measure good behaviour, treat users and members separately. It's easy to see which part of Slashdot solves the original problem (the commons) and which part solves the problems that the commons solution itself creates.
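A toy sketch of that second-order defence: moderations are themselves open to review, and unfair moderations cost the moderator standing. The function name and the simple +1/-1 weighting are illustrative, not Slashcode's actual mechanics:

```python
# Hypothetical sketch of guarding the guardians: every moderation
# a member makes can later be judged fair or unfair by other members,
# and the verdicts feed back into the moderator's standing (karma).

def apply_meta_reviews(karma, verdicts):
    """Adjust a moderator's karma from fair/unfair verdicts on their moderations."""
    for verdict in verdicts:
        karma += 1 if verdict == "fair" else -1
    return karma

print(apply_meta_reviews(10, ["fair", "fair", "unfair"]))  # 11
```

Standing earned this way is what feeds the "good behaviour" gate on moderation itself, so abuse of the judging power is self-limiting.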
One of the advantages of working this way is that you can start to make comparisons between seemingly unrelated things. This is the Buffy fan site, Bronze Beta. It's quite unlike Slashdot in many ways. At the top left there are three boxes: fill in your email, name (password optional), and comment. When you post, the comment goes into the conversational queue and appears at the top. The simplest group blog.
It's easy to say they didn't know what they were doing, but it's "beta" because this is the second version of it. When the WB sold Buffy to UPN, UPN said "we're cancelling the bulletin board", and the users rallied, had a second community built for themselves, and said to the programmers "don't give it any features". Threading, forking, and user rating created complexity that they decided they didn't want. Patterns:
1. Don’t make features
2. Make comments central
3. Make login optional
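The whole Bronze Beta model fits in a few lines. This is a toy sketch of the design described above, not the site's actual code:

```python
# Minimal sketch of the Bronze Beta model: one flat queue,
# newest comment at the top, login optional (name is free text).

from collections import deque

class GroupBlog:
    def __init__(self):
        self.queue = deque()  # newest first

    def post(self, name, comment, email=None):
        # No threading, no forking, no rating, no required account:
        # the comment simply goes to the top of the page.
        self.queue.appendleft({"name": name, "comment": comment, "email": email})

    def page(self):
        return list(self.queue)

blog = GroupBlog()
blog.post("willow", "first!")
blog.post("xander", "hello bronze")
print([c["name"] for c in blog.page()])  # ['xander', 'willow']
```

There is deliberately nothing to configure and nothing to moderate with; the community, not the software, carries the social load.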
This is what happens when the community is central, rather than ancillary annotation. Having the design conversation about which choice we want to make is the opportunity here.
Many new tools have group logins: if you know the group URL and password, you're in.
We have put up a wiki and are fleshing out a handful of patterns. I've always thought of ETech as the tribe, and it's where I first want to offer participation. http://social.itp.nyu.edu/shirky/wiki
Please come join us, help us edit/criticize/improve. Will help us learn from the past and design better for the future.
That's the public, official, short-term reason I think a pattern language for moderation is important. Here's the longer-term reason.
Hobbes and Rousseau argue about Dave Winer.
One fine day in April, Dave Winer said "I have grown tired of the fractiousness of this list, and I am turning it into a moderated list", unilaterally. This was not well received by the members of the list. The interesting thing to me about the subsequent conversation was that it wasn't about technology at all. Nobody disputed that Dave had the ability to do what he did. He transformed the happy fun ball of the list into a star, where nobody could talk to anyone without going through Dave.
The conversation was an entirely normative conversation: what duty did he have to his users? And this is the longer-term more speculative reason I think this is important. Part of a longer-term and important conversation.
Thomas Hobbes, in Leviathan, advanced the argument that humans in our native state lead lives of such chaos that we need a monarch to impose control, and that without it our lives would be nasty, brutish, and short. If from time to time tyranny happens, it's still better than the alternative.
Rousseau wrote later that might is not right. The leader must support the subjects, and the subjects have the right to agitate against the leader if they're not being well served.
This is the direction that the conversation around social software is taking. Hobbes would say that Dave had the right and all was good. Rousseau would reply, “no he didn’t, software systems that don’t allow the users to fight back are immoral.”
Social software is the experimental wing of political philosophy, a discipline that doesn't realize it has an experimental wing. We are literally encoding the principles of freedom of speech and freedom of expression in our tools. We need to have conversations about the explicit goals of what it is that we're supporting and what we are trying to do, because that conversation matters. We have short-term goals, and the cliff-face of annoyance comes in quickly when we let users talk to each other. But we also need to get it right in the long term, because society needs us to get it right. I think having the language to talk about this is the right place to start.
Please come join us to edit, alter, delete, improve.