Developing an improved online environment for educating computer users

People who want to learn more about computer technology and solve
problems they encounter on their systems currently have a wealth of
forums to turn to: mailing lists and newsgroups, official and
unofficial documentation (which may be distributed on the Web, on
their systems, or in printed form) and the more collaborative media of
IRC channels, wikis, and virtual worlds.

Why tamper with this set of resources? Because they are not as easy to
find or to use as they should be. Each medium was invented for
purposes other than the specific task of educating computer users, and
none has been tailored to the tasks of generating and searching for
information about computer systems. If relevant material were served
through more specialized and helpful tools, people might create better
information, and that information might be used more.

In a recent
blog post on the Radar site,
I suggested two tools that could help improve the quality and
findability of information. In this post I’ll expand my view a bit and
suggest a whole new information-sharing environment.

Let’s fantasize about a new system that combines the best features of
a wiki, a FAQ, a mailing list, an interactive tutorial, and
stand-alone documents in order to provide special features that
enhance users’ ability to find answers to their questions.

Start with a wiki organized hierarchically by topic and containing a
series of FAQs. As with a newsgroup or mailing list, anyone could add
a question to the FAQ. Others could then fill in answers using the
wiki software.
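As a rough sketch of that structure (all names here are hypothetical, not an existing system), a hierarchical topic tree holding FAQ questions might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    qid: int                 # unique ID for linking and cross-reference
    text: str
    answers: list = field(default_factory=list)

@dataclass
class Topic:
    name: str
    questions: list = field(default_factory=list)
    subtopics: dict = field(default_factory=dict)

    def add_topic(self, path):
        """Walk (and create as needed) a hierarchy like 'networking/routers'."""
        node = self
        for part in path.split("/"):
            node = node.subtopics.setdefault(part, Topic(part))
        return node

# Anyone can add a question under a topic; others fill in answers.
root = Topic("root")
routers = root.add_topic("networking/routers")
routers.questions.append(Question(1, "How do I filter ICMP traffic?"))
```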

Questions often duplicate earlier questions. For instance, suppose
someone asks how to add an icon to a desktop. Someone later asks how
to start an application from the desktop, without realizing that this
is essentially the same question as the one asked earlier. The tool
should make it easy to combine the two questions, and to subdivide a
question into constituent parts.

Each question can have a unique ID, so that answers can be linked and
viewed together. The ID also makes it easier for questions to refer to
other questions, which is often necessary because users need to put
more basic changes in place before fixing the symptom that led them to
the site.

For instance, the answer to “How do I filter out redirect requests on
my router?” might start, “First, refer to the question on how to
filter ICMP traffic,” which in turn might start, “First, refer to the
question on how to set up a new filter chain.” If a question is a
common one, it would obviously be worth someone’s time to string these
answers together and integrate them into a single coherent document.
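If each question records the IDs of its prerequisites, stringing a chain like this together can be automated. A minimal sketch (the question store and IDs are invented for illustration):

```python
# Hypothetical store mapping question IDs to (title, prerequisite IDs).
faq = {
    101: ("How do I set up a new filter chain?", []),
    102: ("How do I filter ICMP traffic?", [101]),
    103: ("How do I filter out redirect requests on my router?", [102]),
}

def reading_order(qid, faq, seen=None):
    """Return prerequisite question IDs first, ending with the one asked."""
    if seen is None:
        seen = set()
    if qid in seen:           # guard against circular cross-references
        return []
    seen.add(qid)
    order = []
    for prereq in faq[qid][1]:
        order.extend(reading_order(prereq, faq, seen))
    return order + [qid]

print(reading_order(103, faq))  # → [101, 102, 103]
```

The output is exactly the reading order an editor would produce by hand: set up the filter chain, filter ICMP traffic, then filter redirects.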

The tool should be organized as a set of APIs that could be embedded
in other tools. Users would have access to all its features through
any entry point of their choice: from a web browser, from a
stand-alone program on a desktop system, from any IDE whose developers
chose to provide an interface, even from Emacs. It should also be easy
to print sets of content; sites could then offer to sell
print-on-demand copies.
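The core of such an API could be a small service object that every front end — browser, IDE plugin, Emacs mode — wraps in its own way. This sketch invents the method names purely to illustrate the shape of the interface:

```python
class FaqService:
    """Hypothetical core API; each entry point (web, desktop, IDE, Emacs)
    would be a thin wrapper over calls like these."""

    def __init__(self):
        self._questions = {}
        self._next_id = 1

    def ask(self, text):
        """Add a new question and return its unique ID."""
        qid = self._next_id
        self._next_id += 1
        self._questions[qid] = {"text": text, "answers": []}
        return qid

    def answer(self, qid, text):
        """Attach an answer to an existing question."""
        self._questions[qid]["answers"].append(text)

    def search(self, term):
        """Return IDs of questions whose text contains the term."""
        return [qid for qid, q in self._questions.items()
                if term.lower() in q["text"].lower()]

svc = FaqService()
qid = svc.ask("How do I add an icon to the desktop?")
svc.answer(qid, "Right-click the desktop and choose Create Launcher.")
```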

Now imagine an API that made it easy to offer specialized windows with
different purposes related to community education. Often, a person
posting a question has several parts to the question: the text of the
question, some sample source code or a part of a configuration file,
output or error messages, and so on. It should be easy to put these in
separate windows that were linked so that the question could be viewed
as a whole, but with easy comparisons. For instance, particular
lines of code or configuration directives cause certain output to
appear; links between these should be easy to draw. Links to bug
reports should also be easy to insert.
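One way to represent such a multi-part question is a record whose parts can be linked to each other and to external bug reports. The field names and URL below are invented for illustration:

```python
# Sketch: a question split into linked parts, viewable separately or as a whole.
question = {
    "id": 42,
    "parts": {
        "text":   "Why does my filter chain drop all traffic?",
        "config": "iptables -A INPUT -j DROP",
        "output": "ping: sendmsg: Operation not permitted",
    },
    # Links tie a configuration line to the output it causes,
    # or a part of the question to a bug report.
    "links": [
        {"from": "config", "to": "output",
         "note": "this rule causes the error below"},
        {"from": "text", "to": "https://bugs.example.org/1234"},
    ],
}
```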

Authors who notice widespread interest in a certain topic can extract
the answers and make stand-alone web pages that are maintained within
the wiki.

Using one of the tools I have
already proposed,
users can suggest pathways through the documentation: that is,
different documents that people with particular needs can read in
order. For instance, users could be advised to read a background
document or guide to configuring the system before tackling one of the
utilities that depend on this background and configuration.
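A pathway can be as simple as an ordered list of document slugs that a reader with a particular goal should follow. The pathway name and slugs here are made up to match the example above:

```python
# Hypothetical pathways: each maps a reader goal to an ordered reading list.
pathways = {
    "configure-router": [
        "networking-background",
        "configuration-guide",
        "router-utilities",
    ],
}

def next_document(pathway, current=None):
    """Return the next document on a pathway (the first if none read yet),
    or None when the pathway is finished."""
    docs = pathways[pathway]
    if current is None:
        return docs[0]
    i = docs.index(current)
    return docs[i + 1] if i + 1 < len(docs) else None
```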

The “push” feature that many people like about email is reproduced by
wiki watchlists; users should be able to learn within minutes of
changes via email, IRC, or RSS. A source control system could preserve
all versions of the wiki material for historical purposes.
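The watchlist mechanics are straightforward: subscribers register a notification callback, which in practice might send email, post to an IRC channel, or update an RSS feed. A minimal sketch with invented names:

```python
# Minimal watchlist: pages map to the callbacks of users watching them.
class Watchlist:
    def __init__(self):
        self._watchers = {}   # page -> list of notify callbacks

    def watch(self, page, notify):
        """Register a callback to fire whenever the page changes."""
        self._watchers.setdefault(page, []).append(notify)

    def page_changed(self, page, summary):
        """Push the change summary to everyone watching the page."""
        for notify in self._watchers.get(page, []):
            notify(page, summary)

received = []
wl = Watchlist()
wl.watch("icmp-filtering", lambda page, summary: received.append((page, summary)))
wl.page_changed("icmp-filtering", "added iptables example")
```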

We should aim this tool at developers as well as users, particularly
because users sometimes turn into developers. They may submit bug
reports, contribute to the core technology, or create new tools at a
higher layer. Databases of bug reports, bug fixes, and feature
requests could be linked in. When someone asks “Does the program do
XYZ?” it could either be answered by a “Yes, here’s how” or be
reclassified as a feature request. A question that can be solved
through a cumbersome procedure could also generate a feature request.

Wikis tend to disguise the contributors. But one of the most powerful
motivations for contributing documentation is recognition. People want
to be known for their documentation so they can compete better for
jobs and contracts, gain readers’ trust for their pronouncements, and
just enjoy basking in praise. So it would be worthwhile to try to
label a document with metainformation indicating the percentage
contributed by each author. This is hard to automate, unfortunately.
Limited recognition should be given for changes that look extensive
but come down just to fixing grammar and misspelled words. And it’s
complicated to determine how much credit to give someone who restores
information that was removed or incorrectly edited by someone else.
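One crude heuristic — offered only as a starting point, not a solution — is to diff successive revisions and credit each author with the words they added. This deliberately under-credits pure grammar fixes (which add few new words), but as noted above it cannot tell a restoration of deleted text from a fresh contribution:

```python
import difflib

def attribution(revisions):
    """Rough per-author contribution percentages for a wiki page.

    revisions: list of (author, full_text) pairs, oldest first.
    Each author is credited with the words they added relative to
    the previous revision.
    """
    credit = {}
    previous = ""
    for author, text in revisions:
        matcher = difflib.SequenceMatcher(None, previous.split(), text.split())
        added = sum(j2 - j1
                    for op, i1, i2, j1, j2 in matcher.get_opcodes()
                    if op in ("insert", "replace"))
        credit[author] = credit.get(author, 0) + added
        previous = text
    total = sum(credit.values()) or 1
    return {author: round(100 * n / total) for author, n in credit.items()}
```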

To summarize, a community education environment should include:

  • Easy access for adding questions and editing both questions and
    answers

  • A suitable division of material into different types

  • Extensive linking that not only helps people find information, but
    shows them a variety of pathways through related documents

  • Support for combining questions, dividing questions into subquestions,
    and extracting material to make stand-alone pages

  • Recognition of authorship

  • An API that can be incorporated into tools of the users’ choice

  • Push technology for people who want it

  • Source control

  • Integration with bug reports and feature requests

  • Print-on-demand

Done well, this system would make it fun and rewarding to
contribute information to user communities. People searching for
answers would be more likely to find and understand them, reducing the
need for the time-consuming hand-holding that takes place on
forums. When novices do need help, they’d encounter a structured site
that’s well suited to submitting their questions, and the answers
could quickly be generalized into resources of value to others.

In fields as fast-moving as modern computing environments, we need
tools like these in order to free up developers for the work of
development. The result will also enhance a key goal of all projects:
to recruit new users and make them comfortable using the system.

  • I agree with nearly all the points you make, but I’ve got one question: how do you drive adoption (specifically, how do you change peoples’ behaviors to leverage/contribute to these resources?)? We’ve tried something similar within our organization and have found adoption to be low in terms of meaningful usage. I don’t think the issue is related to age (though we definitely see differing patterns based on the age of the participants), but rather some underlying cause that I can’t sort out. I see this in other teams as well, with differing structures but similar rates of adoption. Thoughts?

  • Done well, this system would make it fun and rewarding to contribute information to user communities.

    Ok, I’ll bite: where is the “reward” and where is the “fun”?

    Organizing work-flows is a fun problem and it seems like you have an imaginary work-flow in mind and want to make it more efficient. But, the work-flow you seem to be describing relies heavily on volunteerism. What’s the cause?

    Mind you: the form and function of wikis etc. as they stand today is largely an artifact of what certain frameworks happen to make “easy”. Better foundations and/or added effort can probably discover some radical sweet spots and your speculations here are potentially part of a process to systematically feel out where those new sweet spots might be. But the money and work-flow questions matter.


  • Response to Thomas Lord:

    Maybe I shouldn’t have said this system would be fun to use. Fun is in the mind of the beholder. Let’s start with “rewarding.”

    I think the following features would make contributors feel rewarded:

    Their postings would get wider dissemination and be found by more people.

    The postings would be more likely to be edited, improved, and updated (rather than simply forgotten as people ask the same questions over and over, and as the software evolves).

    Even a modest recognition/reputation system would give them accomplishments to point to.

    It’s hard to say it will be fun, but apparently a lot of people already feel that answering questions online is fun. I’m not just guessing; I saw it while researching “Why Do People Write Free Documentation? Results of a Survey.” I can’t claim my proposed system would add more fun, but it would remove a lot of annoying barriers, letting people focus on the parts that are fun: especially letting people access the system from any supported tool/environment.

    I decided that money was such a difficult issue (and I’ve dealt with it in other postings) that I left it out of this proposal; the proposal is plenty big already.

    Thanks a lot for calling for clarifications.

  • barry.b

    one of the biggest problems with blogs, wikis, etc (and this even applies to Wikipedia) is legitimacy.

    is the “knowledge of the herd” authoritative? can we trust it? remember, once upon a time people thought the world was flat. Even Charles Darwin’s theories on evolution are still viewed with skepticism.

    if someone contributes to the “community knowledge” how do we know if they have the experience to do so?

    and just as importantly, how do we know if they are trying to influence the discussion (bias, commercial interest, etc – things that should require full disclosure)

    In wikipedia, I’ve got into some heavy disagreements occasionally on certain history points of clarification. To be perfectly honest, both sides were made up of lay people with no expert qualifications – I wouldn’t trust either argument: theirs – or – mine!

    I tell you what …. why don’t we do the same in political selection and _vote_ for the information to include/adopt/save?

    @Andy, no disrespect at all to you sir, and I say this only to make a point … from what basis of legitimacy can you put forth these arguments on educating computer users?

    is it just because you have an opinion and the medium to get it out there?

  • Andy,

    Thanks, as always, and some replies:

    You’re citing an “attitudinal survey” and not one that is particularly sophisticated from a sociological perspective. In other words, that’s a marketing survey that attempts to discover the reasons people would give — or in this case would simply agree with — for their behavior. Usually such surveys are used in marketing to parrot those reasons back in promotional efforts. The idea of parroting back is that, at the time when some people would give or agree with those reasons, perhaps there are others who can be drawn in by using those reasons as the basis of advertising.

    Ok, well, there are (in one way of looking at it) at least two major directions that public criticism of such surveys can take. Really these are both intertwingled but for expository purposes we can begin by keeping them separate.

    One angle of critique here is about the business case. Attitudinal surveys, unless they are also longitudinal, don’t tell us anything about the permanence of the attitudes measured. One year Bowie is a rocker. A few years later he’s a nostalgia act. A few years later he’s on the progressive edge. A few years later he’s a rocker again. Or a nostalgia rocker. Or a rocking progressive. Or… The point of this kind of criticism is to point out, from the business perspective, that surveying the attitudes of contributors about why they contribute tells us nothing about why they might want to contribute tomorrow. You haven’t identified, with a survey like that, any sociological or economic force that makes the phenomenon of contribution repeatable. What you do identify with such surveys is language and concepts that can be used in the contemporary environment of the survey to better market what contributors are already doing. One could use the results of such studies to make ads that would stand a plausible chance of (for the moment) drawing in a greater number of contributors. But you haven’t identified any solid reason to invest in developing next-generation infrastructure for contributors, because you haven’t shown they’ll exist. (The sociological studies I’ve seen suggest that they will not exist, except around a very small number of projects — but I’m not trying to play “my favorite study” vs. “your favorite study,” so I mention this only in passing.)

    That’s the business case which, for purposes of exposition, we’re separating from what I guess I would call the “truth case” — the case from the point of view of ordinary human meaning. For example, you casually report that on the basis of your studies you think a “reputation system” would help reward contributors. There’s an interesting sleight of hand in your vocabulary there. 30 years ago, if someone asked where to gain deep insight into “reputation” the answer might be “hmm… perhaps in literature”. The reputation-based dynamics of a cocktail party of random acquaintances might be the purview of sociology, sure. But for reputation as a part of life — say on the scale of a career — hard and fast rules are hard to find. And, in the place of hard and fast rules, literature represents some of the best thinking. To pick an example that has some currency, perhaps Austen could teach us a thing or two or three about what “reputation” really means on the human scale. Meanwhile, “reputation system” is a technical term – a term of art – describing a particular kind of database column. Avoiding conflation of the two concepts of “reputation” is a simple matter between you, Andy, and me, Tom. Neither of us is confused when pushed to make the distinction. Yet in accepting your usage here, you seem to embrace the conflation and invite the confusion. The critique I’m describing would continue by asking about the consequences of the conflation and of promoting the confusion — hold these up against awareness — and begin to ponder questions of power and ethics and morality.

    For a neat full circle effect one could then wonder how insight into the — let’s say: philosophical — confusion over something like reputation …… one could then wonder how that insight might be used in marketing, as a counter to the echo-chamber, parroting effects of loose attitudinal surveys.

    One might also just cut through all of that and say that it’s obnoxious to ask people to do work for free for the commercial enrichment of others. On its face. Volunteerism is for the commons, not the corporate asset. Volunteerism is for freedom, not for cost reduction. Volunteerism is an individual act to “do good” not a “suck-up” to have the better resume in the standards of the day.

    Missing in your technological daydreaming is any thought at all about how to build the technology in such a way as to meter contributions and pay for them. Why is that, do you suppose?


  • Re ‘reputation’, ‘Rewards and Recognition’. IMHO recognition is valid, though a very long term goal.

    Re ‘reorganizing’ existing content. That seldom pleases anyone other than the person doing the re-org. Write some documentation a hundred ways and someone will ask for way 101. Alternate access is valid though.

    As to an implementation. DocBook would provide a good master format; an entry system could be developed using HTTP PUT to some server, keeping order (and removing spam) via a CVS system. Input could be a Q&A, a section, etc. Any reasonable wrapper level. Easy to validate and process.

    As with wikipedia, you’d need the overlords to do the editing and restoration post site-abuse.

    Finally, as for developers being documenters? Rare as rocking horse sh… in my view. Good developers seem to have a fundamental objection to documenting their work.

    The system seems quite workable to me, though it will have to overcome the usual people issues.

  • So, Dave, you want “reputation” mediated by database columns combined with “overlords”?

    That position is a good resume suck-up in some circles!

    Workable? You betcha. Acceptable? Matter of opinion. Capable of defeat? Let’s find out.


  • jillcatrina

    Anecdotal evidence suggests that instructor performance in the online discussion portion affects outcomes. If this is the case, faculty development is an important component of success. The resulting data show that there is considerable variation in faculty teaching styles, interaction, and the amount of content-related feedback.