Three interesting “Web Development 2.0” responses

Aside from everyone giving me hell for using “2.0” in the title, the most frequent response to my Web Development 2.0 post has been that it’s reckless to ignore QA as a discipline. The most thoughtful of these responses came from Jonathan Alexander in his post, Does QA Matter? My last job, like Jonathan’s current one, was at a security software company, so I’m very sympathetic to his point of view. (In case it wasn’t clear, my post was a catalog of observations, not necessarily recommendations, though I am, as I said, impressed with the results I’ve seen from these practices.)

Antonio Rodriguez has an interesting and, I think, compelling argument for another way of considering this question. He writes, in his post, Transparent Commodity Infrastructure and Web 2.0:

[T]he biggest contributor to this brave new way of doing things in my mind is transparency of commodity infrastructure.

Let me use an example: back in 1998, if you were building a web-based startup, you were probably running on Solaris/SPARC and using an Oracle database. You were also likely to be running on some sort of a Java servlet engine (though there were exceptions, this was again the leading edge). This huge apparatus usually required at least one of the following: a DBA, a sys-admin, a release manager, and a build manager, never mind all of the consultants and vendor people it took to solve the problems that arose from trying to get everything working together.

Fast forward to 2005. Anyone still using Solaris/SPARC for web apps is either a moron or a depressed Sun shareholder. MySQL and Postgres are now considered “enterprise-grade,” and if you should be so masochistic as to still want to do Java development on the app tier, you’ve got Tomcat, Jetty, and even JBoss available on your platform of choice.

Now here is the key: every piece of infrastructure is free, has a thriving online community that can help you with issues better than the vendors ever did, and perhaps most importantly, can scale down to run on almost any type of laptop.

This last piece is what I found was missing from Marc’s post: the fact that in the Brave New World, every developer can get to have the entire stack on his own machine. The value of this should not be underestimated.

[…]

Anyone who doesn’t take advantage of it now is doomed to slow release cycles, the need for a full QA team on hand, and a staff-imposed level of overhead that is tough to build a business out from under.

What a great Brave New World.

I’d bet this is how these development teams would respond to Jonathan: not that QA has disappeared, but that it has become part of the developer’s role. (I’ve seen the same shift with system administration in some cases; I left that bullet off the original post since I haven’t seen it as often and wasn’t sure it merited inclusion.) Perrin Harkins argues in the comments to the original post that automated testing is the most important change in recent quality methodology, and he’s certainly right. If developers are taking on QA roles, it’s very often in this form: writing automated systems to find bugs rather than relying on QA testers to “clean up after them.” The QA process has become part of the stack on Antonio’s laptop, in the form of automated tests.
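
To make that concrete, here’s a minimal sketch of what developer-owned QA can look like. The UrlShortener class is hypothetical, invented for illustration rather than taken from any of these posts; the point is that the whole “QA process” is a JUnit file that runs on any developer’s laptop, on every build:

    import junit.framework.TestCase;

    // Hypothetical class under test; it stands in for any small piece of a web app.
    class UrlShortener {
        public String shorten(String url) {
            if (url == null || !url.startsWith("http")) {
                throw new IllegalArgumentException("not a URL: " + url);
            }
            return "/u/" + Integer.toHexString(url.hashCode());
        }
    }

    // The automated stand-in for a QA pass: runs locally, no staging
    // environment, no separate testing team required.
    public class UrlShortenerTest extends TestCase {
        public void testShorteningIsStable() {
            UrlShortener s = new UrlShortener();
            assertEquals(s.shorten("http://example.com"),
                         s.shorten("http://example.com"));
        }

        public void testGarbageIsRejected() {
            try {
                new UrlShortener().shorten("not a url");
                fail("expected IllegalArgumentException");
            } catch (IllegalArgumentException expected) {
                // exactly the kind of bug a manual tester would otherwise chase
            }
        }
    }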

Part of Jonathan’s point still stands, though: “if you have security problems in your software, you’re better off catching them before your users.” Of course he’s right, especially once you weight bugs by severity. What I hear from the developers in this set, though, and what I like, is that you can very easily get caught up fixing a pile of bugs (or adding a pile of features) that no one will ever hit (or use). Where do you draw the line? The same misplaced energy that produces unused features can just as easily creep into the automated tests themselves: a ton of test code that effectively just ensures that a string return value in a typesafe language really is a string when it returns. What a waste.
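
Here’s the kind of test I mean, again with a hypothetical class invented for illustration. The method’s signature already guarantees a String, so the assertion can never fail and tells you nothing:

    import junit.framework.TestCase;

    class Greeter {
        // The compiler already guarantees this returns a String.
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    public class GreeterTest extends TestCase {
        public void testGreetReturnsString() {
            Object result = new Greeter().greet("world");
            // Wasted effort: the static type system enforces this at
            // compile time, so this assertion can never fail.
            assertTrue(result instanceof String);
        }
    }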

I’m with Jonathan that leaving bug discovery to users is not enough on its own, and that the “extreme-squared” view I was hearing (“If a customer doesn’t see a problem, who am I to say the problem needs to be fixed?”) goes too far. But I’m also in favor of figuring out, by measurement when possible, where and why to draw the line between a bug that matters and a real bug that isn’t a priority. Developers who answer support email, who concretely measure user activity as a primary job function, and who are committed to sampling and experimenting may well be best suited to answer those questions.
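
If I were to sketch what that measurement might look like (and this is my sketch, not anything from Jonathan’s or Antonio’s posts), it could be as simple as tallying how often each failure path actually fires in production, then ranking the bug list by those counts:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    // Tally how often users actually hit each failure path, so the line
    // between "bug that matters" and "bug that can wait" is drawn from
    // data rather than intuition. All names here are illustrative.
    public class ErrorTally {
        private static final ConcurrentHashMap<String, AtomicLong> COUNTS =
                new ConcurrentHashMap<String, AtomicLong>();

        // Call from each catch block or error handler.
        public static void record(String errorSite) {
            AtomicLong count = COUNTS.get(errorSite);
            if (count == null) {
                COUNTS.putIfAbsent(errorSite, new AtomicLong());
                count = COUNTS.get(errorSite);
            }
            count.incrementAndGet();
        }

        // Dump the tallies so the team can rank fixes by real user impact.
        public static void report() {
            for (Map.Entry<String, AtomicLong> e : COUNTS.entrySet()) {
                System.out.println(e.getValue() + "\t" + e.getKey());
            }
        }
    }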