Going beyond Onload: Measuring performance that matters

Velocity 2013 Speaker Series: Focus on Web Apps, Not Web Pages

We’re not making web pages anymore; we’re building web applications. Gone are the days of a few script tags in the <head>. Apps today are a complex web of asynchronously loaded content and functionality. In the past decade, we’ve progressed from statically loaded HTML to AJAX-ifying all the things. However, the way we measure real user performance of our apps hasn’t changed to reflect this new state of the art.

Defining “Done”

At what point during page load do users consider an app to be “ready enough” to start using? If we use standard performance metrics, we have to choose one of the following:

1) When the HTML document has been completely loaded and parsed, but before stylesheets, images, and subframes have finished loading (DOMContentLoaded)

2) When all synchronous scripts, stylesheets, images, and subframes have finished loading (onload)
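Both milestones are plain DOM events, so capturing them is straightforward. Here’s a minimal, illustrative sketch (not from the article): the document and window objects are passed in as parameters so the recorder is easy to exercise outside a browser.

```javascript
// Record when each standard milestone fires. In a real page you'd
// call this as recordMilestones(document, window, console.log);
// the parameters exist only to keep the sketch testable.
function recordMilestones(doc, win, log) {
  doc.addEventListener('DOMContentLoaded', function () {
    log('DOMContentLoaded', Date.now());
  });
  win.addEventListener('load', function () {
    log('onload', Date.now());
  });
}
```

Note that this only tells you *when* each event fired, not whether the app was actually usable at that moment, which is exactly the problem.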

If we pick DOMContentLoaded, it quickly becomes clear that there’s no inherent correlation between the app state at that point and what a user would consider “ready.”

The two apps below, captured at DOMContentLoaded, don’t look quite ready yet, do they?

Facebook at DOMContentLoaded

eBay at DOMContentLoaded

If we pick onload instead, we’re at the mercy of our slowest assets, which are often nonessential elements such as extra images, iframe ads, and social media widgets. In addition, onload doesn’t include asynchronously loaded content or functionality. Phooey.

Clearly, DOMContentLoaded and onload don’t cut it. We need a more appropriate measurement methodology.

User-ready Is Contextual

The state at which a user thinks an app is ready to use, or “user-ready”, depends on the context of each app. For example, an online store’s product detail page is user-ready when the main photo, price, description, and “buy” button are visible and functional. A social news feed is user-ready when the first few posts are visible and the user can submit a new post.

The conditions of user-ready are different for every app and even for different parts of the same app. The way we measure end-user performance should be just as nuanced and contextual. In other words, let’s measure how long it takes to reach user-ready.
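One simple way to express a set of user-ready conditions in code is a small checklist that fires a callback once every condition has been marked complete. This is a hypothetical helper (the names are ours, not the article’s), sketched to show the idea:

```javascript
// Fire onReady exactly once, after every named user-ready
// condition has been marked done.
function createReadyTracker(conditions, onReady) {
  var pending = conditions.slice(); // copy so the caller's array is untouched
  return function markDone(name) {
    var i = pending.indexOf(name);
    if (i !== -1) {             // each condition counts only once
      pending.splice(i, 1);
      if (pending.length === 0) onReady();
    }
  };
}

// Product detail page: user-ready means the photo, price,
// description, and buy button are all in place and functional.
var markDone = createReadyTracker(
  ['photo', 'price', 'description', 'buy-button'],
  function () { /* log the elapsed time here */ }
);
```

Each part of the page then calls `markDone('photo')`, `markDone('price')`, and so on as it finishes, and the logging fires at the moment the last condition is satisfied.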

Diving in

First, capture the timestamp of the initial request using the Navigation Timing API or a rough approximation. In addition, log the server round trip time and pass it to the client:

var server = /* server-side render time inserted here */;
var start  = window.performance && window.performance.timing
    ? window.performance.timing.navigationStart
    : +new Date();
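The two halves combine with simple arithmetic: the total perceived time is the server’s render time plus the client-side time elapsed since navigation started. Pulled out as a standalone helper (our naming, for illustration):

```javascript
// Total perceived latency, in milliseconds:
// server render time + (client "now" - navigation start).
function totalElapsed(serverMs, startMs, nowMs) {
  return serverMs + (nowMs - startMs);
}

// e.g. 120ms on the server, user-ready 500ms after navigationStart:
// totalElapsed(120, 1000, 1500) gives 620ms of perceived wait.
```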

Next, implement the user-ready conditions for each context. From the previous example: the online store’s product detail page is user-ready when the main photo, price, description, and “buy” button have been loaded and are functional:

$('.product-detail').html( MVC.getView('product-detail') );
$('.buy').on('click', function() { ... });
$('.main-photo').load(logUserReady);

function logUserReady() {
    var now = +new Date(),
        elapsed = server + (now - start);

    $.ajax({
        url: '/log/user-ready',
        data: elapsed
    });
}

Measuring performance accurately is the first step towards improving it. By logging the user-ready metrics for each context, we have a much more realistic picture of how users actually perceive an app.

But how can we leverage these user-ready metrics to improve end-user performance? At the Going Beyond onload: How Fast Does It Feel? panel at Velocity NYC next week, we’ll show how user-ready can be leveraged as more than just a measurement tool.

This is one of a series of posts related to the upcoming Velocity conference in New York City (Oct 14-16). We hope to see you there.

