Steve Souders: Making Web Sites Faster in the Web 2.0 Age

How huge JavaScript libraries, rich content, and lame ad servers are slowing the web down

O'Reilly Velocity Online Conference 2009

As much as anything else, a user’s impression of a web site has to do with how fast the site loads. But modern Web 2.0 websites aren’t your father’s Oldsmobile. Chock-full of rich Flash content and massive JavaScript libraries, they present a new set of challenges to engineers trying to maximize the performance of their sites. You need to design your sites to be Fast by Default. That’s the theme of the upcoming Velocity Online Conference, co-chaired by Google performance guru Steve Souders. Souders is the author of High Performance Web Sites and Even Faster Web Sites, and spent some time discussing the new world of web site performance with me.

James Turner: There’s been an evolution of the whole area of web performance, from the old days when it was all about having a bunch of servers and then doing round robin or just spreading the load out. How is web performance today different than it was, say, ten years ago?

Steve Souders: Well, what’s happened in the last five years or so is that Web 2.0 and DHTML and AJAX have really taken off. And that’s really been in the last two years. Five years ago, we started seeing a lot of Flash and bigger images. So basically what’s happened is our web pages have gotten much richer, and that also means heavier. There’s more content being downloaded, more bytes. And then in the last two years, with the explosion of Web 2.0, we’re seeing not only a lot more bytes being downloaded into web pages, but we’re seeing that that involves a lot of complex interaction with JavaScript and CSS. And so whereas maybe five or ten years ago, the main focus was to build back-end architectures that would support these websites, we’re seeing today that we need a lot more focus on the front-end architecture of our websites and web applications.

So that’s where Velocity comes in, and my work comes in. Whereas ten or twenty years ago, you had people looking at collecting and evangelizing best practices for back-end architectures like databases and web servers, Velocity and my work is about identifying and evangelizing best practices for building really fast high-performance front-end architectures.

James Turner: I know, as someone who’s been doing AJAX development, AJAX is a very different kind of paradigm for how you’re interacting with the server. It’s a lot more chatty. Are the current generations of web servers really designed for that kind of interaction?

Steve Souders: I think that the chattiness of AJAX applications isn’t really the issue with regard to performance. I mean, anything can become an issue, but looking across the top 1,000 websites in the world, that’s not the issue. The issue is that these web applications that do AJAX require a lot of JavaScript to set up that framework on the client, inside the browser. And to set that up, to download all of that JavaScript and parse it and execute it, just takes a lot of time. A user is loading a complex photo, mail, or calendaring application, and before they’ve even made any chatty AJAX requests, they’re just waiting for the application to actually load and start up. That’s the frustrating period: you just want to get in and start working on your slides or reading your mail, and you’re waiting for this start-up to happen. Typically, once these AJAX frameworks have loaded, the AJAX work that we’re doing in the background is not that big of a problem, either from a back-end perspective or from the client-side perspective.
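
One common mitigation for that start-up cost is to download the heavy JavaScript without blocking rendering. Here is a minimal sketch of that idea, assuming a hypothetical /js/app.js bundle and a hypothetical App.init() entry point:

    <script>
      // Create a script element and append it so the download happens in
      // parallel with rendering instead of blocking it.
      var script = document.createElement('script');
      script.src = '/js/app.js';        // hypothetical application bundle
      script.async = true;              // hint to the browser not to block parsing
      script.onload = function () {
        // Start the application only after the bundle has arrived.
        if (window.App && typeof window.App.init === 'function') {
          window.App.init();            // hypothetical entry point
        }
      };
      document.getElementsByTagName('head')[0].appendChild(script);
    </script>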

James Turner: One of the things we see a lot these days is people using libraries like YUI or Google’s libraries or jQuery. They have compressed versions, but they’re still pretty large. To what extent do you think there’s a need to really go in and pick and choose out of those libraries?

Steve Souders: Well, myself personally, I do that frequently, because I usually only need one small feature, like a carousel or a drop-down menu or something like that. And I’ll go to the work of pulling out just the code that I need. But I’m working on small website projects. If you’re building a whole web application, you’re probably using many parts of these JavaScript frameworks. There still might be some benefit in pulling out just the pieces that you need. But that’s extra work. And when you need to upgrade, there’s the likelihood of introducing bugs or other problems. So certainly, I wouldn’t say to avoid doing that. It should be evaluated: pulling out just the JavaScript that you need from the frameworks, so long as the licensing even supports that.

But something else that helps address that problem is the Google AJAX Libraries API. This is where Google is actually hosting versions of jQuery and Dojo and YUI and Google’s Closure JavaScript framework and script.aculo.us and Ext and others. What happens is you can have multiple websites that don’t have any interaction with each other, Sears.com and WholeFoods.com, but they both might be using jQuery 1.3.2. And if they’re both downloading that library from the Google AJAX Libraries API, then the URL is exactly the same. So a user that visits multiple of these websites only has to download that JavaScript once during their entire cache session. That further mitigates the need or motivation or benefit of pulling out just the parts that you need.
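
In practice, that shared URL is just an ordinary script tag. A minimal sketch, using the jQuery 1.3.2 version mentioned above:

    <!-- Two unrelated sites that both reference this exact Google-hosted URL let a
         returning user's browser serve the library straight from its cache. -->
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script>
    <script>
      // jQuery is available as usual once the shared copy has loaded.
      jQuery(function ($) {
        $('body').addClass('js-ready');
      });
    </script>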

At first, I didn’t think there would be that much of a critical mass of people adopting the Google AJAX Libraries CDN and the specific versions of these JavaScript frameworks, but it’s actually taken off really well; a lot of websites are using it. Users are actually getting this critical-mass benefit where, when they go to some website, Sears.com, that’s using jQuery, they already have that in their cache from a visit they made a previous day to a different website. So I think in general, I would recommend to developers that they not change the JavaScript frameworks they’re using. And if they’re using a framework that’s hosted on Google, download it from there. It’s hosted on Google’s infrastructure, so it’s going to be fast and reliable, and users will actually get the benefit of having a greater probability of the framework being in their cache, because multiple websites are taking advantage of loading it from there.

James Turner: I have to put my security hat on for a second and ask, when you get into a situation like that, the flag that comes up for me is: if someone managed, by some kind of injection or spoofing, to deliver a version of a library that had vulnerabilities so that it appeared to be coming from Google, you could get into a situation where someone would be using a poisoned library. Do you think that’s at all a realistic concern?

Steve Souders: Well, I think it depends on who’s doing that. I work at Google so I don’t want to come off as sounding like a fan boy who’s only going to say great things about what Google is doing. I’m as cautious as the next person with what passwords I use and what information I give to any web company. But when it comes to something like this, I’ve built stuff that’s running on Google App Engine or Amazon AWS. It’s always possible that these big companies, these big web hosting providers are going to go down. But there’s probably a greater chance that my website is going to go down than Google or Amazon. And the same thing with security. There’s probably a greater chance that my website is going to be hacked than Google or Amazon. So I think it is a possibility. But I think the odds are pretty small of that happening. And that would not be a concern that would stop me from taking advantage of these services because of the performance benefits I get from them.

James Turner: Staying a little bit on security, there’s certainly been more of an emphasis over the last, pick your unit of years, on making sure that data stays secure. You see things like OpenID now, and everything pretty much is SSL’d. How much of a balance do you have to keep between performance and security, or can they work in concert?

Steve Souders: I don’t think there is that much tension between performance and security. The things that are making websites slow are not being done in a slow way because people are striving to have greater security. In most of the places where improvement could be made, the improved way, the higher performance way of doing things, doesn’t change your security exposure to any degree.

There are some situations where protecting the user’s data could make a website slower. For example, suppose I’m building a mail application and I don’t want to cache the user’s address book. You want to protect the user’s data. That’s a higher priority than performance. And so you make that address-book response non-cacheable. Now that will make the website, the web mail application, slower the next time the user goes there, but that one request for the user’s inbox and the user’s address book, that small number of requests, is insignificant compared to all of the other JavaScript and CSS and images that are being downloaded. So whether you cache that or not, whether you load it over SSL or not, that is not what’s making websites slower. The things that are making websites slower are large amounts of JavaScript, much of which is not used in the initial loading of the page, images that are not optimized, and violations of the other performance best practices that are being evangelized by tools like YSlow and Page Speed.
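
To make that trade-off concrete, here is a minimal Node.js sketch (the /addressbook and /static/app.js paths are hypothetical) contrasting a non-cacheable, user-specific response with an aggressively cached static script:

    var http = require('http');

    http.createServer(function (req, res) {
      if (req.url === '/addressbook') {
        // Sensitive, per-user data: tell browsers and proxies not to store it.
        res.writeHead(200, {
          'Content-Type': 'application/json',
          'Cache-Control': 'no-store'
        });
        res.end(JSON.stringify({ contacts: [] }));
      } else if (req.url === '/static/app.js') {
        // Static JavaScript: cache aggressively; this is where most of the bytes are.
        res.writeHead(200, {
          'Content-Type': 'application/javascript',
          'Cache-Control': 'public, max-age=31536000'
        });
        res.end('/* application code */');
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);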

James Turner: You’ve been focusing a lot on the front-end. I have to say as someone who works largely on the back-end, one of the trends I’ve seen is more and more tiers being layered on, of more and more complexity. Maybe not now, but certainly a few years ago, everything was going to be SOAP over web services with Enterprise Service Buses and all of this very complex multi-tier architecture. Wasn’t that just latency waiting to happen there?

Steve Souders: Well, if you look at most websites, you’ll see that for almost all websites, less than 10 to 20 percent of the page load time is spent getting the HTML document. And that’s largely where this back-end potential for latency lies that you’re talking about. It lies behind generating that HTML document. That HTML document might generate a page that has a few more hits on the back-end for some AJAX responses or some dynamically generated JavaScript, but by and large, that’s where you’re going to have the biggest latency impact from the back-end, from these tiers that you’ve described: generating the HTML document. And yet we see across almost all websites that the HTML document, not just the generation of it, but the request going up to the web server and the response coming back, all of that is less than 10 to 20 percent of the page load time. So it’s certainly not the long pole in the tent. And, in fact, if you could drop the HTML document time to zero, most users wouldn’t notice. It’s that front-end part that is the long pole in the tent.

That’s not true of all websites. It’s probably true of 95 percent of the websites out there. There are certainly a number of websites where the back-end is taking a long time, 30 to 50 percent of the overall page load time. The first thing I recommend website owners do is get a handle on their overall page load time. And then the second thing I say is to break that into two parts: the back-end and the front-end. And if you find that, like most websites, your back-end time is less than 10 or 20 percent, then you’re correct in focusing on these front-end best practices. If your back-end time is 30 to 50 percent or more, then you should really start looking at your back-end architecture.
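
One way to get that back-end/front-end split is with the browser’s Navigation Timing API, which post-dates this interview; a minimal sketch:

    // Rough split of page load time into back-end vs. front-end.
    window.addEventListener('load', function () {
      // Wait a tick so loadEventEnd has been recorded.
      setTimeout(function () {
        var nav = performance.getEntriesByType('navigation')[0];
        if (!nav) { return; }
        // Back end: from the start of navigation until the last byte of HTML arrives.
        var backEnd = nav.responseEnd - nav.startTime;
        // Front end: everything after that, up to the end of the load event.
        var frontEnd = nav.loadEventEnd - nav.responseEnd;
        var total = nav.loadEventEnd - nav.startTime;
        console.log('back-end:  ' + Math.round(100 * backEnd / total) + '%');
        console.log('front-end: ' + Math.round(100 * frontEnd / total) + '%');
      }, 0);
    });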

Perhaps there are some latency delays injected there because of multiple tiers or other web services that you’re using. But there are even some best practices in that situation. For example, flushing the document early is a technique where, if you do have a lot of web service requests that are required to generate your HTML document, you can still send back some of your HTML page to the browser early, before you start those slow web service calls, and give the user some content and some feedback and get the browser doing some work while you continue with your other back-end requests and processing.
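
A minimal Node.js sketch of that early-flush idea (the slow web service call is simulated with a timer, and the static asset paths are hypothetical):

    var http = require('http');

    // Stand-in for a slow back-end web service call.
    function fetchSlowData(callback) {
      setTimeout(function () {
        callback('<p>Content that depended on the slow back-end call.</p>');
      }, 500);
    }

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/html' });
      // Flush the top of the page right away so the browser can start
      // downloading CSS and JavaScript while we wait on the back end.
      res.write('<html><head>' +
                '<link rel="stylesheet" href="/static/site.css">' +
                '<script src="/static/app.js" defer></script>' +
                '</head><body>');
      fetchSlowData(function (html) {
        res.write(html);
        res.end('</body></html>');
      });
    }).listen(8080);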

James Turner: One place that I see a lot of slowdown in pages has nothing to do with the site I’m actually visiting; a lot of times, it appears to be the ads being served on the page. What’s the state of that right now? It seems like the people who are doing the least to optimize their performance are the ad servers.

Steve Souders: Yeah. Ads being served, third party ads being served inside of web pages, is becoming more of a problem. It’s not that it’s getting worse; it’s that everything else is getting better. When I started this work about five years ago, the percentage of problems that could be associated with ads was pretty small. Maybe 20 percent of the improvements you could make to a page had to do with ads. That’s because most websites weren’t focusing on these other best practices of spriting and ETags and caching. Well, in the last five years, we’ve seen a lot of the major websites adopting these front-end performance best practices. So it’s not that ads have gotten worse; it’s that the actual website content has gotten so much better. Now, of the bad performance practices that you see inside of popular web pages, a much higher percentage, maybe 40 or 50 percent of the problems are introduced because of third party ads.

And it’s not too surprising that that’s happening for two reasons. One is that it’s amazing that this system of serving third party ads works. Here you have two companies that have never spoken to each other. You have two sets of web developers who have never met and never exchanged any plans for how to integrate content. And yet, this third party ad service can serve up content that almost 100 percent of the time loads correctly in some publisher’s page. That’s just an amazing infrastructure that has worked for as long as it has and it continues to work, especially when you consider all of the cross browser compatibility issues that can arise.

So how is it that we were able to achieve that? Well, it’s because we adopted a framework of inserting ads, of creating ads, that’s pretty simple. And because it’s pretty simple, it’s not highly tuned. That’s one reason why we shouldn’t be too surprised that we see performance issues in third party ads. The other reason is that ad services are not focused on technology. Certainly companies like Yahoo and Google and Microsoft, we’re technology companies. We focus on technology. So it’s not surprising that our web developers are on the leading edge of adopting these performance best practices. And it’s also not surprising that ad services might lag two, three or four years behind where these web technology companies are.

I think that’s where we are. We’ve seen a lot of adoption of these performance best practices in these web companies over the last three years and now it’s time for the ad services to catch up. I don’t know what the answer is there. It’s going to require changing this huge infrastructure for serving third party ads, and that’s not something that’s easy to do. Certainly we know things that we can do to make ads serve better, to make them faster, to reduce the impact they have. But getting that to propagate across this huge ecosystem of ads is going to take a long time, a lot of evangelism, a lot of monitoring and a lot of organizational changes. It’s not something that’s going to happen overnight. But hopefully in the next few years, we’ll see progress there.

I know in the last two years, the IAB has established a performance working group and has published some guidelines for how to make ads higher performance. I think we’ll continue to see progress there. And just as we’re seeing with web companies, where companies that have faster web pages are more successful, have greater revenue, reduced operating expenses, the ad services are going to realize that too, that when it comes to ads, faster ads are going to have a competitive advantage. I think, in part, the ecosystem will take care of itself. These ad services will realize that to remain competitive, they need to be fast.

James Turner: Okay. I’m going to wrap up with one last, kind of geeky, question. What is the state-of-the-art in small good-looking images these days?

Steve Souders: Well, when it comes to images, the two experts in the industry are Stoyan Stefanov and Nicole Sullivan. I worked with both of them over in the Yahoo Exceptional Performance group. And they, in fact, contributed a chapter to my most recent book on optimizing images. So they’re really the experts, but I think I can summarize what they would say. If you have small images with a very small number of colors, GIF is probably going to be the optimal format to use. All of these formats are supported across all browsers. In most cases, though, PNG is going to be the format to choose: you can choose PNG-8 for images with 256 or fewer colors, or PNG-24 for images with more than 256 colors. And if you have a photographic, continuous-tone image, JPEG is still the optimal format. So your question was about small high-quality images. PNG is going to be the format of choice, most likely, for those types of images.
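
As an illustration only, that rule of thumb could be encoded like this (the "very few colors" cutoff for GIF is an assumption; real image tools inspect the actual pixels):

    // Restates the guidance above; not a substitute for real image analysis.
    function suggestImageFormat(colorCount, isPhotograph, isSmallIcon) {
      if (isPhotograph) {
        return 'JPEG';                        // continuous-tone images
      }
      if (isSmallIcon && colorCount <= 16) {  // assumed threshold for "very few colors"
        return 'GIF';
      }
      return colorCount <= 256 ? 'PNG-8' : 'PNG-24';
    }

    console.log(suggestImageFormat(12, false, true));    // "GIF"
    console.log(suggestImageFormat(300, false, false));  // "PNG-24"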

Planning to attend the O’Reilly Velocity Online Conference on December 8, 2009? Register using code velfall09d25 and receive 25% off the registration price!
