Feb 12

Marc Hedlund

Three interesting "Web Development 2.0" responses

Aside from everyone giving me hell for using "2.0" in the title, the most frequent response to my Web Development 2.0 post has been how reckless it is to ignore QA as a discipline. The most thoughtful of these responses came from Jonathan Alexander, in his post, Does QA Matter? My last job, like Jonathan's current job, was in a security software company, so I'm very sympathetic to his point of view. (In case it wasn't clear, my post was a catalog of observations, not necessarily recommendations, though I am, as I said, impressed with the results I've seen from these practices.)

Antonio Rodriguez has an interesting and, I think, compelling argument for another way of considering this question. He writes, in his post, Transparent Commodity Infrastructure and Web 2.0:

[T]he biggest contributor to this brave new way of doing things in my mind is transparency of commodity infrastructure.
Let me use an example: back in 1998 if you were building a web-based startup, you were probably running on Solaris/SPARC and using an Oracle database. You were also likely to be running on some sort of a Java servlet engine (though there were exceptions, this was again the leading edge). This huge apparatus usually required at least 1 of the following: DBA, sys-admin, release manager, and build manager-- nevermind all of the consultants and vendor people that it took to solve problems that arose from trying to get everything working together.
Fast forward to 2005. Anyone still using Solaris/SPARC for web apps is either a moron or a depressed Sun shareholder. MySQL and Postgres are now considered "enterprise-grade," and if you should be so masochistic as to still want to do Java development on the app-tier, you've got Tomcat, Jetty, and even JBOSS available to you on your platform of choice.
Now here is the key: every piece of infrastructure is free, has a thriving online community that can help you with issues better than the vendors ever did, and perhaps most importantly, can scale down to run on almost any type of laptop.
This last piece is what I found was missing from Marc's post: the fact that in the Brave New World, every developer can get to have the entire stack on his own machine. The value of this should not be underestimated.
Anyone who doesn't take advantage of it now is doomed to slow release cycles, the need for a full QA team on hand, and a staff-imposed level of overhead that is tough to build a business out from under.
What a great Brave New World.

I'd bet this is how these development teams would respond to Jonathan: not that QA has disappeared, but that it has become part of the developer's role. (I've seen the same shift with system administration in some cases; I left that bullet off the original post since I haven't seen it as often and wasn't sure it merited inclusion.) Perrin Harkins argues in the comments to the original post that automated testing is the most important change in recent quality methodology, and he's certainly right. If developers are taking on QA roles, it's very often in this form: writing automated systems to find bugs rather than relying on QA testers to "clean up after them." The QA process has become part of the stack on Antonio's laptop, in the form of automated tests.
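A minimal sketch of what "QA as part of the developer's role" looks like in practice. The signup() function and its validation rules are invented here purely for illustration; any feature a QA tester would once have exercised by hand could stand in its place:

```python
# Hypothetical feature code -- signup() and its rules are invented
# for illustration only.
def signup(username, password):
    """Register a user; return True on success, False on bad input."""
    return bool(username) and len(password) >= 8

# The developer's "QA suite": automated checks that run on every
# build, on the same laptop that runs the whole stack.
def test_valid_signup_succeeds():
    assert signup("marc", "long-enough-pw")

def test_short_password_rejected():
    assert not signup("marc", "short")

def test_empty_username_rejected():
    assert not signup("", "long-enough-pw")

if __name__ == "__main__":
    for check in (test_valid_signup_succeeds,
                  test_short_password_rejected,
                  test_empty_username_rejected):
        check()
    print("all checks passed")
```

The point is not the specific checks but where they live: in the codebase, run by the developer, rather than in a separate QA organization.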

Part of Jonathan's point still stands, though: "if you have security problems in your software, you're better off catching them before your users." Of course he's right, especially under some measure of severity. What I hear, and like, from developers in this set, though, is that you can very easily get caught up fixing a bunch of bugs (or adding a bunch of features) that no one will ever hit (or use). Where to draw the line? The same misplaced energy that produces unused features can just as easily go into the automated tests themselves -- a pile of test code that does nothing but confirm that a string return value, in a type-safe language, really is a string; what a waste.
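To make the wasted-test-energy point concrete, here is a hedged sketch -- Python stands in for the type-safe language in the text, and full_name() is an invented example. The first test restates a guarantee the code already makes by construction; the second pins down behavior at an edge a user could actually hit:

```python
def full_name(first, last):
    """Join name parts; always returns a str (invented example)."""
    return f"{first} {last}".strip()

# Wasteful: asserting that a value documented as a string really is
# a string tells us nothing a user would ever notice.
def test_full_name_returns_string():
    assert isinstance(full_name("Ada", "Lovelace"), str)

# Worth having: it pins down real behavior at an edge case -- what
# happens when the last name is missing.
def test_full_name_with_missing_last_name():
    assert full_name("Ada", "") == "Ada"
```

The first test can never fail unless the second would too, which is one rough way to measure where the line should be drawn.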

I'm with Jonathan that user bug discovery is not enough on its own, and that the "extreme-squared" view I was hearing ("If a customer doesn't see a problem, who am I to say the problem needs to be fixed?") goes too far. But I'm also in favor of figuring out, by measurement when possible, where and why the line should be drawn between a bug that matters and a real bug that isn't a priority. Developers -- who answer support email, who concretely measure user activity as a primary job function, who are committed to sampling and experimenting -- may well be the best suited to answer those questions.


Comments: 4

  Jon [02.15.06 12:42 PM]

Sure QA matters, but it's boring as hell. QA is like doing the laundry: you really don't want to do it, but it's better than sharing an unpleasant odor with the rest of the world.

  Adam [03.01.06 05:05 AM]

I think the nature and role of QA in the Web 2.0 world is still evolving.

As someone who does QA for a living, I am firmly in the camp of those who believe developers should not be the only ones in an organization doing testing. That said, some testing activities, namely unit tests, are and should be pushed onto the developers. The quality of code that comes into QA in an organization with a culture of unit testing is significantly higher than in one without.

As for the face of QA in Web 2.0, I envision it as follows:
- Smaller, agile (for lack of a better word) teams of highly technical people who can rapidly automate large portions of the functional testing (via Python or Ruby)
- Staffed by people who understand not only testing but also the process of software development, and who can help streamline/adjust how the software is produced. The better the process behind a product/service, the better the resulting product/service will be (garbage in, garbage out). This does not mean that every Web 2.0 QA team should be gunning for CMMI certification, but having at least one person who is knowledgeable about such things will greatly help.
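A minimal sketch of the kind of functional-test automation described above, assuming nothing about any particular product: the test drives the application over HTTP the way a user would, with a tiny in-process stub server standing in for the real web app.

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubApp(BaseHTTPRequestHandler):
    """Stand-in for the application under test (invented for the sketch)."""
    def do_GET(self):
        if self.path == "/health":
            body, status = b"ok", 200
        else:
            body, status = b"not found", 404
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the test output quiet
        pass

def fetch(url):
    """One functional check: status code and body, end to end over HTTP."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status, resp.read().decode()
    except urllib.error.HTTPError as err:
        return err.code, err.read().decode()

# Port 0 asks the OS for any free port; the server runs in a thread
# so the checks below can hit it from the same process.
server = HTTPServer(("127.0.0.1", 0), StubApp)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

status_ok, body_ok = fetch(f"{base}/health")
status_missing, _ = fetch(f"{base}/nope")
server.shutdown()
```

Against a real service, fetch() would point at a staging URL instead of the stub, but the shape of the test -- request, observe status and body, assert -- stays the same.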


  David [05.03.07 08:00 AM]

To draw an analogy, I'll use home construction as a representation of a software program. We can extend this to cover "software as a service" by assuming we rent out the home rather than selling it.

When we build our house, we want to ensure that a certain level of quality is met, for safety and security if nothing else. However, no house is without defects, so we have to balance "the ceiling fan does not work in bedroom 3" against "there is a leak in the roof that is causing water damage and will eventually collapse the ceiling in bedroom 3!"

If we live in the house, we eventually discover many, if not all, of the defects. Similarly, a developer and/or rigorous QA team may identify most, but not all, of the software defects.

Linking back to the points in the article, we therefore need to think about the best way to find the things that affect the "safety and security" of the program/service/application -- the leaky roof in the analogy -- and ignore, filter, or de-prioritize the less significant defects, which either may never be discovered or may never bother anyone enough to be worth finding or fixing, i.e., the inoperable ceiling fan.

Unit testing may provide a first level of assurance, but one has to take other approaches to ensure the overall level of quality is established and maintained. Perhaps lightweight system-wide QA is the answer, or perhaps it's a focus on the touch points between software units and modules.

