This is part of the Velocity Profiles series, which highlights the work and knowledge of web ops and performance experts.
How did you get into web operations and performance?
Out of school, I ended up at one of the first load balancing vendors, which is where I learned everything to do with the networking and protocol side of the web. From there, I kind of moved up the stack: I helped found a high-performance caching company; then joined a next-generation ADC vendor focusing on web acceleration; and then hooked up with Strangeloop, where we focus on advanced front-end optimization (FEO) for web performance. It’s been a pretty cool ride, and I’m still learning every day.
What is your most memorable project?
Two come to mind. About 10 years ago, we were trying to solve network proximity problems with geo load balancing, and the usual DNS-based solutions weren’t good enough. We came up with a pretty clever and more accurate way of measuring network proximity. It’s a solution I’m still pretty proud of.

More recently, and in a completely different direction, I’ve been involved with projects where we’re leveraging the power of Google Analytics in creative ways to keep track of user behavior when it comes to web performance. It’s kind of like what Artur Bergman talked about last year at Velocity, but we’ve gone further and included more things that give us different types of insight. It’s a great example of positively exploiting available tools in new and cheeky ways.
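To make the Google Analytics idea concrete: GA can accept custom timing measurements, not just pageviews, which is what makes it usable as a free performance-tracking backend. The interview doesn’t say how their projects do it, so the sketch below is only an illustration using Google’s Measurement Protocol, which builds a "user timing" hit payload; the category and variable names here are invented for the example.

```python
from urllib.parse import urlencode

def build_timing_hit(tracking_id, client_id, category, variable, millis):
    """Build a Google Analytics Measurement Protocol 'timing' hit payload.

    The parameter names (v, tid, cid, t, utc, utv, utt) come from the
    Measurement Protocol; the category/variable values are whatever you
    want to track (here, hypothetical examples).
    """
    return urlencode({
        "v": 1,               # protocol version
        "tid": tracking_id,   # GA property ID, e.g. "UA-XXXXX-Y"
        "cid": client_id,     # anonymous client identifier
        "t": "timing",        # hit type: user timing
        "utc": category,      # timing category
        "utv": variable,      # timing variable
        "utt": int(millis),   # timing value in milliseconds
    })

# In a real setup this payload would be POSTed to
# https://www.google-analytics.com/collect by the page or a beacon.
payload = build_timing_hit("UA-00000-1", "555", "performance", "page-load", 1234)
```

Because the payload is just a query string, the same mechanism can carry any metric you can measure in the browser, which is what makes this kind of creative reuse possible.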
What’s the toughest problem you’ve had to solve?
In the world of web performance, measurement remains a huge challenge. There are way too many tools, metrics, and vendors out there, all doing measurement differently, and ironically, all legit! So, the challenge isn’t always finding the right thing to measure; it’s understanding which subset of metrics to consider in a given situation. Add to that the confusion propagated by everyone who thinks their way is the only right way, coupled with the possibility that we may not actually have the right measurements yet, and this becomes an incredibly complex issue. I can’t say that we’ve solved it, but I do keep finding myself learning new things and educating people about these complexities. So, the fact that people are listening and want to learn is a positive step toward solving the problem.
What tools and techniques do you rely on most?
I’m a techy geek, so my favorite tools are those that help me be a good sleuth. At the lowest level, the one tool I can’t live without is Wireshark for capturing and studying network packets. We’ve dubbed it “The Truth Serum” because it keeps proving itself to be the ultimate authority when it comes to figuring out what the hell is going on.
In the browser, I use HTTPWatch all the time to study how my browser processes web pages. It’s an excellent tool for getting timings, object breakdowns, and HTTP details.
And my favorite performance tool is WebPagetest. I use the public version, we have a private instance, and I even have one on my laptop. It’s an awesome tool for getting an as-close-to-accurate-as-possible reflection of how pages perform in real browsers, with real-world characteristics. It has provided, and continues to provide, a great service for the performance industry.
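WebPagetest (whether the public site or a private instance) can also be driven programmatically through its HTTP API, which is part of why it fits so many workflows. A minimal sketch of kicking off a test, assuming you have an API key; the endpoint and parameters are WebPagetest’s standard `runtest.php` interface, but the key and URL here are placeholders:

```python
from urllib.parse import urlencode

def build_wpt_request(page_url, api_key,
                      server="https://www.webpagetest.org", runs=3):
    """Build the URL that starts a WebPagetest run via runtest.php.

    f=json asks the server for a JSON response containing the test ID
    and result URLs to poll; `runs` is how many test runs to perform.
    """
    params = urlencode({
        "url": page_url,  # page to test
        "k": api_key,     # API key (required on the public instance)
        "runs": runs,
        "f": "json",
    })
    return f"{server}/runtest.php?{params}"

# Fetching this URL (e.g. with urllib.request) would queue the test.
request_url = build_wpt_request("https://www.example.com/", "YOUR_API_KEY")
```

Pointing `server` at a private instance is the only change needed to run the same tests in-house.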