Documentation as Testing

Can explanation contribute to technology creation?

“If you’re explaining, you’re losing.”

That gem of political wisdom has always been hard for me to accept, since, after all, I make my living explaining technology. I don’t feel like I’m losing. And yet…

It rings true. It’s not that programs and devices shouldn’t need documentation, but rather that documentation is an opportunity to find out just how complex a tool is. The problem is less that documentation writers are losing when they’re explaining, and more that creators of software and devices are losing when they have to settle for “fix in documentation.”

I was delighted last week to hear from Doug Schepers of webplatform.org that they want to “tighten the feedback loop between specification and documentation to make the specifications better.” Documentation means that someone has read and attempted to explain the specification to a broader audience, and the broader audience can then try things out and add their own comments. Writing documentation with that as an explicit goal is a much happier approach than the usual perils of documentation writers, trapped explaining unfixable tools whose creators apparently never gave much thought to explaining them.

It’s not just WebPlatform.org. I’ve praised the Elixir community for similar willingness to listen when people writing documentation (internal or external) report difficulties. When something is hard to explain, there’s usually some elegance missing. Developers writing their own documentation sometimes find it, but it can be easier to see the seams when you aren’t the one creating them.

Is there a way to formalize this? Writing documentation is not – despite javadoc and similar systems – easily automated. Developers aren’t surprised when their tests take time to run, but usually expect them to run in computer time, not human time. Even the tighter loop that WebPlatform.org is creating runs over weeks or months, though it’s effectively a test of specifications, not code.
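The closest thing we have to documentation that runs in computer time is documentation that doubles as a test suite. Python’s doctest module, for example, executes the examples embedded in a docstring and fails when the documented behavior and the actual behavior drift apart. A minimal sketch (the `slugify` function here is my own illustration, not anything from WebPlatform.org):

```python
import doctest

def slugify(title):
    """Convert a title into a URL-friendly slug.

    The example below is both documentation and a test:

    >>> slugify("Documentation as Testing")
    'documentation-as-testing'
    """
    return "-".join(title.lower().split())

# Run every example embedded in this module's docstrings;
# results.failed is 0 only when documentation and code agree.
results = doctest.testmod()
print(results.failed)
```

This only tests the narrow claim that the examples still work, not whether the tool is explainable at all, but it does collapse one part of the documentation feedback loop from human time to computer time.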

Donald Knuth’s Literate Programming seemed like the ultimate way to tighten that loop:

Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.

The practitioner of literate programming can be regarded as an essayist, whose main concern is with exposition and excellence of style.

Much as it pains me to doubt Knuth, two problems remain. The first is the level of explanation: this is still explaining what the program should do, not how humans should interact with the program. The second is the question of who does the documentation: the testing aspect tends to produce fewer false moments of happiness when someone other than the creator of a project is the one doing the documentation. Knuth notes that “WEB seems to be specifically for the peculiar breed of people who are called computer scientists,” and the boundaries between experts and more casual users are often what documentation testing can show most vividly.

I love, though, that Knuth considers it a moral question: “I am imposing a moral commitment on everyone who hears the term; surely nobody wants to admit writing an illiterate program.”

Can we get there? Can we make “documented technology” a more powerful phrase than “literate program”? I don’t have an organization available for my own experiments, but I’d love to hear if you have experience with this. Did it improve code or products? Or did it just make developers chafe?

