I gave a rushed and somewhat incoherent talk at the Y Combinator Winter Founder’s Program last night (and let me say again — holy cow, they have such great taste in people — the companies they’re funding are filled with people I’d love to work with). In it, I reprised a part of the talk I gave in Israel, cataloging some of the software development practices I keep running into at Web 2.0 startups. The interesting thing to me is that so many startups and companies seem to be developing a new set of software development practices independently, and only after the fact sharing them with each other. Software isn’t written for Web 2.0 companies the way it was during the bubble, nor is it written the way traditional, shipped software was. New ideas about Web applications seem to necessitate new ways of making those applications. Below is my current catalog of observations (less rushed than last night, and hopefully more coherent); I’d love to have more such practices suggested in the comments. How do you make a Web 2.0 app, and what kind of person is great at it?
- The shadow app: I often hear developers say that their job is not to develop one application, but instead to develop two apps — the public-facing application, and the private application, the “shadow app,” which helps the company understand how the first application is working. Of course, statistics packages and traffic monitors are as old as the web, but these companies are explicitly rejecting any standard, pre-packaged code for this purpose, and are instead asking the questions they need for their specific businesses. One example: Flickr had a report of users with no contacts in the Flickr social network, which they called the “Loneliest Users” report. What a great report — a way to see who is uploading photos but not sharing them with anyone! With that, they could go add themselves as contacts for these “loneliest” users, and teach them how to use the contact feature. HitBox isn’t going to give you that report. The direct connection to your users provided by a server-hosted web app only gives you more data if you know what questions to ask, and building those questions is often just as important as building the public app itself.
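A report like “Loneliest Users” is usually just one well-chosen query against the app’s own database. The sketch below shows the shape of the idea, using a hypothetical schema — `users`, `photos`, and `contacts` table names are my invention, not Flickr’s:

```python
import sqlite3

# Hypothetical schema for illustration; Flickr's actual tables are not public.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE photos   (user_id INTEGER);
    CREATE TABLE contacts (user_id INTEGER, contact_id INTEGER);
    INSERT INTO users    VALUES (1, 'alice'), (2, 'bob'), (3, 'carol');
    INSERT INTO photos   VALUES (1), (1), (2);
    INSERT INTO contacts VALUES (1, 2);
""")

# "Loneliest users": anyone uploading photos who has no contacts at all.
loneliest = conn.execute("""
    SELECT DISTINCT u.id, u.name
    FROM users u
    JOIN photos p ON p.user_id = u.id
    LEFT JOIN contacts c ON c.user_id = u.id
    WHERE c.user_id IS NULL
""").fetchall()

print(loneliest)  # bob uploads photos but shares them with no one
```

The point isn’t the SQL — it’s that an off-the-shelf stats package can’t know this question exists, because the question only makes sense for this particular business.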
- Sampling and testing: With tens of thousands of site visitors a day, or many more than that, the entire structure of engineering discussions has shifted heavily into the realm of statistics and controlled experimentation. Is this feature a good idea? Let’s show it to 0.1% of our visitors today, and see how they react. Is option A better than option B? Let’s try them both with 10,000 users each, and see which one works better. Why argue when you can find out the right answer from the people who matter, your customers? Feature selection feedback loops used to take months or even longer — and usually, the arguments about the right decisions were made nearly in the dark. Even the best software organizations made decisions based only on what customers said they wanted, which is often much different from how those same customers really act when presented with a new feature. With live sampling and testing, developers can see how many clicks the new feature really got — the impulses of the animal mind — not just how many people surveyed responded to the idea of a possible feature — the conversational bias. Some developers have complained about lost work from this approach — implementing the same feature four or five different times before it makes it live to the site. But most of them will admit that the feature is better in the end. Usability testing with a video camera over the shoulder and a one-way mirror in the room is giving way to usability testing through data analysis from a real, but small, deployment. None of the companies in this set have done over-the-shoulder testing at all, except for informal hallway tests.
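The “show it to 0.1% of visitors” trick depends on assigning users to experiments deterministically, so the same visitor sees the same variant on every page load. One common way to do that — a sketch, not any particular company’s implementation — is to hash the user ID together with an experiment name:

```python
import hashlib

def in_experiment(user_id: str, experiment: str, fraction: float) -> bool:
    """Deterministically assign a user to an experiment bucket.

    Hashing (experiment, user_id) yields a stable, roughly uniform value
    in [0, 1); the same user gets the same answer every time, and
    different experiments bucket users independently.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return bucket < fraction

# Show a hypothetical new feature to roughly 0.1% of 100,000 visitors:
sample = [u for u in range(100_000) if in_experiment(str(u), "new-nav", 0.001)]
print(len(sample))  # roughly 100 of 100,000
```

Because assignment is a pure function of the IDs, there’s no state to store: any server can answer “does this user see the feature?” identically, and turning the experiment off is just deleting the check.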
- Build on your own API: Of course many web app startups provide APIs, so external developers can build apps on top of their functionality and data. I was surprised to hear, though, how many of these companies build their own public-facing web sites second, by building on top of a web services API they develop first. The act of developing a public API, then, is not one of designing and testing various API calls — instead, all they have to decide is which of their existing method calls they want to expose to the public. They already know the methods work, because if they didn’t, the public web app wouldn’t work, either. In addition to “pre-testing” the API release, this also allows a very clean separation of responsibilities. One developer or set of developers works on the application’s “kernel,” exposed through the API; another works on the “view” the company exposes through its web site. In the operating system world, this is exactly the same as the separation between the Windows team and the Office team at Microsoft. It’s interesting, and very encouraging, though, to see the same model appearing at startup after startup.
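The kernel/view split these companies describe can be sketched in a few lines. Everything below is illustrative — `PhotoAPI` and its methods are names I made up to show the shape of the pattern, in which both the public site and the web-services API are thin consumers of the same internal calls:

```python
import json

# The "kernel": the internal API that every consumer shares.
class PhotoAPI:
    def __init__(self):
        self._photos = {}

    def upload(self, user: str, title: str) -> int:
        photo_id = len(self._photos) + 1
        self._photos[photo_id] = {"user": user, "title": title}
        return photo_id

    def recent(self, limit: int = 10) -> list:
        return list(self._photos.values())[-limit:]

# The public web site is just one "view" built on that kernel...
def render_homepage(api: PhotoAPI) -> str:
    rows = [f"<li>{p['title']} by {p['user']}</li>" for p in api.recent()]
    return "<ul>" + "".join(rows) + "</ul>"

# ...and the public web-services API exposes the very same calls, e.g. as JSON.
# "Releasing an API" is then deciding which kernel methods to expose, not
# designing new ones.
def api_recent(api: PhotoAPI) -> str:
    return json.dumps(api.recent())

api = PhotoAPI()
api.upload("alice", "Sunset")
print(render_homepage(api))
print(api_recent(api))
```

The payoff is exactly what the article describes: the exposed methods are pre-tested by the site itself, because if `recent()` were broken, the homepage would be broken too.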
- Ship timestamps, not versions: Gone are the days of 1.0, 1.1, and 1.3.17b6. They have been replaced by the ‘20060210-1808:32 push’. For nearly all of these companies, a version number above 1.0 just isn’t meaningful any more. If you are making revisions to your site and pushing them live, then doing it again a half hour later, what does a version number really mean? At several companies I’ve met, the developers were unsure how they would recreate the state of the application as it was a week ago — and they were unsure why that even matters. A version number of an application is very convenient if you and your customer need to agree on the bits you’re each examining while searching for a bug or suggesting a feature. But would you really roll del.icio.us back to the way it was when a bug report came in, just to verify the report? Traditional QA would tell you yes (but see the next bullet); these developers just can’t see the point. If the bug can’t be reproduced against the live site, then what the hell does it matter? Other developers have a somewhat more formalized process — one company I visited added a label to source control (in the format I used above) every time the application was pushed to the live site. How many labels are on the source tree? I asked. “About 3200.” For them, too, the version number is dead. (And yes, I certainly see the irony of the term ‘Web 2.0’ in this light…)
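Generating a push label like the ones above is trivial, which is part of the appeal — no release committee needed to pick a number. A minimal sketch (the exact label format is my approximation of the style quoted above, not any company’s actual convention):

```python
from datetime import datetime, timezone

def push_label(now: datetime = None) -> str:
    """Build a source-control label for a push, e.g. '20060210-1808 push'."""
    now = now or datetime.now(timezone.utc)
    return now.strftime("%Y%m%d-%H%M") + " push"

print(push_label(datetime(2006, 2, 10, 18, 8)))  # '20060210-1808 push'
```

With ~3200 of these labels on a source tree, “what version is live?” stops being a question anyone asks — the answer is always “whatever we pushed last.”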
- Developers — and users — do the quality assurance: More and more startups seem to be explicitly opting out of formalized quality assurance (QA) practices and departments. Rather than developers getting a bundle of features to a completed and integrated point, and handing them off to another group professionally adept at breaking those features, each developer is assigned to maintain their own features and respond to bug reports from users or other developers or employees. More than half of the companies I’m thinking of were perfectly fine with nearly all of the bug reports coming from customers. “If a customer doesn’t see a problem, who am I to say the problem needs to be fixed?” one developer asked me. I responded, what if you see a problem that will lead to data corruption down the line? “Sure,” he said, “but that doesn’t happen. Either we get the report from a customer that data was lost, and we go get it off of a backup, or we don’t worry about it.” Some of these companies very likely are avoiding QA as a budget restraint measure — they may turn to formal QA as they get larger. Others, though, are assertively and philosophically opposed. If the developer has a safety net of QA, one manager said, they’ll be less cautious. Tell them that net is gone, he said, and you’ll focus their energies on doing the right thing from the start. Others have opted away from QA and towards very aggressive and automated unit testing — a sort of extreme-squared programming. But for all of them, the reports from customers matter more than anything an employee would ever find.
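For the teams taking the “extreme-squared” route, the safety net is a fast automated suite the developer runs before every push rather than a QA department. A minimal sketch of what that looks like with Python’s standard `unittest` module — `add_contact` is a hypothetical piece of app code, invented here for illustration:

```python
import unittest

def add_contact(contacts: dict, user: str, new_contact: str) -> dict:
    """Hypothetical app code: record a contact, rejecting self-links."""
    if user == new_contact:
        raise ValueError("users cannot add themselves as a contact")
    contacts.setdefault(user, set()).add(new_contact)
    return contacts

class AddContactTest(unittest.TestCase):
    def test_adds_contact(self):
        self.assertEqual(add_contact({}, "alice", "bob"), {"alice": {"bob"}})

    def test_rejects_self(self):
        with self.assertRaises(ValueError):
            add_contact({}, "alice", "alice")

# Run the suite directly, as a developer would before pushing live.
suite = unittest.TestLoader().loadTestsFromTestCase(AddContactTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The cultural bet is that a suite like this, run on every push, catches the mechanical regressions — and the customers catch everything else.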
- Developers — and executives — do the support: This one came as no surprise to me — I’ve often found that the best way to motivate developers is to let them see just one flamemail deriding the bugs in their work. What was encouraging, though, was to see how deeply that has made it into the development culture in these companies. Of course, this is a savings measure, too — but many people now seem to have decided that the best way to focus the efforts of developers is to make them respond to complaints directly. One company I saw made developers write the first five responses on a problem, after which the responses were edited by a support staffer and added to the canned response set. More often, though, the developers and the CEO respond to the majority of the support email. One CEO told me he responds to about 80% of all the mail they receive. How better to know what people are saying about your product? he asked. That seems like an unusual case, but still more common than a company with a support staff, which I didn’t see at all.
- The eternal beta: This one is the most obvious, and the best-discussed. Following Google’s lead, many companies stick “beta” on their logos and leave it there for months or years. Gone are the betas that get released to a limited set of known but external testers, with formal product management follow-up interviews. The concept of “beta” as a time period or stage of development has fallen away, and been replaced with beta as a way of setting expectations, or excusing faults, about the current state of the application. (Not everyone agrees; Jason Fried and the crew at 37signals are noted contrarians.) I’d argue, though, that this is just the externally-visible artifact of all of the practices I’ve listed above. If you’re going to rely on customer reports for QA; if you’re going to change the operation of the app, however subtly, multiple times a day; if you’re going to introduce features to a small set of users, and then take them away at the end of the day — the experience your users will have is fundamentally different. “Beta” is one way of alerting them to the new regime. My question, though, would be, when would you ever remove a “beta” label if that’s really what it means? Does the “beta” come off when development stops altogether? Or at some point, do you call it “5.0” and put it into maintenance mode?
There’s a lot to say about these practices. A different kind of developer is needed to meet all of these needs; “beta” really seems to mean “2.0,” so maybe some other term would be better than beta; in the rejection of the old ways, what are we gaining and what are we losing? But for now, I’ll just start the catalog and say — I’m impressed with these teams and the way they work. “Continuous integration,” a phrase from the XP world, seems insufficient to describe this kind of software development. The app needs more than to build and pass tests at any time — it needs to be able to go live at a moment’s notice. For some that might be scary, but I think the result is a much more direct and honest connection with the desires of the user. In some ways, this is a realization of the goals many developers had when first coming to the web ten years ago — as last night’s host, Paul Graham, has said, “the web as it was meant to be.”
UPDATE: I added a follow-up covering some of the interesting responses I’ve gotten to this article, particularly around the “developers do QA” section.