"Velocity" entries

A “have-coffee” culture

Tending the DevOps victory garden.

Download a free copy of Building an Optimized Business, a curated collection of chapters from the O’Reilly Web Operations and Performance library. This post is an excerpt by J. Paul Reed from DevOps in Practice, one of the selections included in the curated collection.

Any discussion surrounding DevOps and its methodologies quickly comes to the delicate issue of organizational dynamics and culture, at least if it’s an accurate treatment of the topic. There is often a tendency to downplay or gloss over these issues precisely because culture is thought of as a “squishy” thing: difficult to shape and change and, in some cases, even to address directly. But it doesn’t need to be this way.

Sam Hogenson, Vice President of Technology at Nordstrom, works hard to make sure it’s exactly the opposite: “At Nordstrom, we value these different experiences and we value the core of how you work, how you build relationships much more than whether or not you have subject matter expertise. It’s a successful formula.” Another part of that formula, Hogenson notes, is the ethos of the organization: “It’s a very empowered workforce, a very decentralized organization; I always remember the Nordstroms telling us ‘Treat this as if it were your name over the door: how would you run your business and take care of your customers?'” [Nordstrom infrastructure engineer Doug] Ireton described it as a “have-coffee culture: if you need to talk to someone, you go have coffee with them.”

Read more…

How we got to the HTTP/2 and HPACK RFCs

A brief history of SPDY and HTTP/2.

Download a free copy of HTTP/2. This post is an excerpt by Ilya Grigorik from High Performance Browser Networking, the essential guide to networking and web performance for web developers.

SPDY was an experimental protocol, developed at Google and announced in mid-2009, whose primary goal was to try to reduce the load latency of web pages by addressing some of the well-known performance limitations of HTTP/1.1. Specifically, the outlined project goals were set as follows:

  • Target a 50% reduction in page load time (PLT).
  • Avoid the need for any changes to content by website authors.
  • Minimize deployment complexity, avoid changes in network infrastructure.
  • Develop this new protocol in partnership with the open-source community.
  • Gather real performance data to (in)validate the experimental protocol.

To achieve the 50% PLT improvement, SPDY aimed to make more efficient use of the underlying TCP connection by introducing a new binary framing layer to enable request and response multiplexing, prioritization, and header compression.
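
To make the framing concrete, here is a minimal Python sketch (ours, not from the excerpt) that parses the fixed 9-byte frame header HTTP/2 later standardized in RFC 7540; SPDY’s framing differed in detail, but the idea is the same: length-prefixed, typed binary frames carrying many streams over one connection.

    FRAME_HEADER_LEN = 9  # every HTTP/2 frame starts with a fixed 9-byte header

    def parse_frame_header(header: bytes):
        """Split a 9-byte HTTP/2 frame header into its fields (RFC 7540, section 4.1)."""
        if len(header) != FRAME_HEADER_LEN:
            raise ValueError("expected exactly 9 bytes")
        length = int.from_bytes(header[0:3], "big")   # 24-bit payload length
        frame_type = header[3]                        # e.g., 0x0 DATA, 0x1 HEADERS
        flags = header[4]                             # type-specific flags
        stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF  # drop reserved bit
        return length, frame_type, flags, stream_id

    # A SETTINGS frame (type 0x4) on stream 0 with an empty payload:
    print(parse_frame_header(b"\x00\x00\x00\x04\x00\x00\x00\x00\x00"))  # (0, 4, 0, 0)

Because every frame is tagged with a stream identifier, requests and responses can interleave freely on a single TCP connection, which is the multiplexing described above.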


Not long after the initial announcement, Mike Belshe and Roberto Peon, both software engineers at Google, shared their first results, documentation, and source code for the experimental implementation of the new SPDY protocol:

So far we have only tested SPDY in lab conditions. The initial results are very encouraging: when we download the top 25 websites over simulated home network connections, we see a significant improvement in performance—pages loaded up to 55% faster.

— “A 2x Faster Web,” Chromium Blog

Fast-forward to 2012 and the new experimental protocol was supported in Chrome, Firefox, and Opera, and a rapidly growing number of sites, both large (e.g. Google, Twitter, Facebook) and small, were deploying SPDY within their infrastructure. In effect, SPDY was on track to become a de facto standard through growing industry adoption.

Read more…

Ghosts in the machines

The secret to successful infrastructure automation is people.

“The trouble with automation is that it often gives us what we don’t need at the cost of what we do.” —Nicholas Carr, The Glass Cage: Automation and Us

Virtualization and cloud hosting platforms have pervasively decoupled infrastructure from its underlying hardware over the past decade. This has led to a massive shift towards what many are calling dynamic infrastructure, wherein infrastructure and the tools and services used to manage it are treated as code, allowing operations teams to adopt software approaches that have dramatically changed how they operate. But with automation comes a great deal of fear, uncertainty and doubt.
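
To make “treated as code” concrete, here is a hedged Python sketch (our illustration, not from the post) of the idempotent convergence pattern configuration-management tools implement: declare the desired state, act only when reality differs. It assumes a Debian-style host where dpkg and apt-get are available.

    import subprocess

    def ensure_package(name: str) -> None:
        """Converge on a desired state: install `name` only if it is missing."""
        installed = subprocess.run(["dpkg", "-s", name],
                                   capture_output=True).returncode == 0
        if not installed:
            subprocess.run(["apt-get", "install", "-y", name], check=True)
        # Already satisfied: do nothing, so running this twice is safe.

    ensure_package("nginx")  # idempotent: a second call is a no-op

The point is not the few lines of code but the contract they encode: the automation describes an end state and converges on it, rather than blindly replaying steps.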

Common (mis)perceptions of automation tend to pop up at the extreme ends: It will either liberate your people to never have to worry about mundane tasks and details, running intelligently in the background, or it will make SysAdmins irrelevant and eventually replace all IT jobs (and beyond). Of course, the truth is very much somewhere in between, and relies on a fundamental rethinking of the relationship between humans and automation.
Read more…

The two-sided coin of Web performance

Hacking performance across your organization.

I’ve given Web performance talks where I get to show one of my favorite slides, which illustrates the impact of third-party dependencies on load time. It’s the perfect use case for “those marketing people,” who overload pages with the tracking pixels and tags that make page load time go south. This, of course, would fuel the late-night pub discussion with fellow engineers about how much faster the Web would be if those marketing people would attend a basic Web performance 101 course.

I’ve also found myself discussing exactly this topic in a meeting. This time, however, I was the guy arguing to keep the tracking code, although I was well aware of the performance impact. So what happened?
Read more…

How to leverage the browser cache with a CDN

An introduction to multi-level caching.

Since a content delivery network (CDN) is essentially a cache, you might be tempted not to make use of the cache in the browser, to avoid complexity. However, each cache has advantages that the other does not provide. In this post I will explain the advantages of each, and how to combine the two for optimal performance of your website.

Why use both?

While CDNs do a good job of delivering assets very quickly, they can’t do much about users who are out in the boonies and barely have a single bar of reception on their phone. As a matter of fact, in the US, the 95th percentile for the round trip time (RTT) to all CDNs is well in excess of 200 milliseconds, according to Cedexis reports. That means at least 5% of your users, if not more, are likely to have a slow experience with your website or application. For reference, the 50th percentile, or median, RTT is around 45 milliseconds.

So why bother using a CDN at all? Why not just rely on the browser cache?
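
One way to combine the two, sketched below under assumed conventions (ours, not the post’s), is to give each cache its own lifetime with standard Cache-Control directives: max-age governs the browser’s private cache, while s-maxage overrides it for shared caches such as a CDN edge (RFC 7234).

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class AssetHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/css")
            # Browser revalidates after 5 minutes; a CDN may serve it for a day.
            self.send_header("Cache-Control", "public, max-age=300, s-maxage=86400")
            self.end_headers()
            self.wfile.write(b"body { margin: 0; }")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), AssetHandler).serve_forever()

The short browser TTL keeps repeat views fast without giving up control over freshness, while the long CDN TTL shields the origin; the numbers here are placeholders, not recommendations.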

Read more…

From Industrialism to Post-Industrialism

Leveraging the power of emergence to balance flexibility with coherency.

Download a free copy of Building an Optimized Business, a curated collection of chapters from the O’Reilly Web Operations and Performance library. This post is an excerpt by Jeff Sussna from Designing Delivery, one of the selections included in the curated collection.

In 1973, Daniel Bell published a book called “The Coming of Post-Industrial Society.” In it, he posited a seismic shift away from industrialism towards a new socioeconomic structure, which he named “post-industrialism.” Bell identified four key transformations that he believed would characterize the emergence of post-industrial society:

  • Service would replace products as the primary driver of economic activity
  • Work would rely on knowledge and creativity rather than bureaucracy or manual labor
  • Corporations, which had previously strived for stability and continuity, would discover change and innovation as their underlying purpose
  • These three transformations would all depend on the pervasive infusion of computerization into business and daily life

If Bell’s description of the transition from industrialism to post-industrialism sounds eerily familiar, it should. We are just now living through its fruition. Every day we hear proclamations touting the arrival of the service economy. Service sector employment has outstripped product sector employment throughout the developed world.

Companies are recognizing the importance of the customer experience. Drinking coffee has become as much about the bar and the barista as about the coffee itself. Owning a car has become as much about having it serviced as about driving it. New disciplines such as service design are emerging that use design techniques to improve customer satisfaction throughout the service experience.

Read more…

Four short links: 7 April 2015

JavaScript Numeric Methods, Misunderstood Statistics, Web Speed, and Sentiment Analysis

  1. NumericJS — numerical methods in JavaScript.
  2. P Values are not Error Probabilities (PDF) — In particular, we illustrate how this mixing of statistical testing methodologies has resulted in widespread confusion over the interpretation of p values (evidential measures) and α levels (measures of error). We demonstrate that this confusion was a problem between the Fisherian and Neyman–Pearson camps, is not uncommon among statisticians, is prevalent in statistics textbooks, and is well nigh universal in the pages of leading (marketing) journals. This mass confusion, in turn, has rendered applications of classical statistical testing all but meaningless among applied researchers.
  3. Breaking the 1000ms Time to Glass Mobile Barrier (YouTube) — see also the slides. Stay under 250 ms to feel “fast.” Stay under 1000 ms to keep users’ attention.
  4. Modern Methods for Sentiment Analysis — Recently, Google developed a method called Word2Vec that captures the context of words, while at the same time reducing the size of the data. Gentle introduction, with code; a small sketch of the idea follows this list.
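
For item 4, a minimal, hedged sketch using the gensim library (assumed installed via pip install gensim; this is our toy example, not code from the linked post): train Word2Vec on a tiny corpus and inspect the dense context vectors it learns.

    from gensim.models import Word2Vec

    # Toy corpus: each "sentence" is a list of tokens.
    sentences = [
        ["fast", "pages", "keep", "users", "happy"],
        ["slow", "pages", "lose", "users"],
        ["users", "notice", "slow", "sites"],
    ]

    # vector_size is the compression: each word becomes a small dense vector.
    model = Word2Vec(sentences, vector_size=32, window=2, min_count=1, seed=7)

    print(model.wv["users"].shape)        # (32,), a dense context vector
    print(model.wv.most_similar("slow"))  # neighbors ranked by learned context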

How to create a Swarm cluster with Docker

Using Docker Machine to create a Swarm cluster across cloud providers.

Editor’s note: this is an Early Release excerpt from Chapter 7 of Docker Cookbook by Sébastien Goasguen. The recipes in this book will help developers go from zero knowledge to distributed applications packaged and deployed within a couple of chapters. One of the key value propositions of Docker is app portability. The following will show you how to use Docker Machine to create a Swarm cluster across cloud providers.

Problem

You understand how to create a Swarm cluster manually (see Recipe 7.3), but you would like to create one with nodes in multiple public cloud providers while keeping the user experience of the local Docker CLI.

Solution

Use Docker Machine to start Docker hosts in several cloud providers and bootstrap them automatically to create a Swarm cluster.
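
As a rough sketch of that workflow (assumptions: the token-based Swarm discovery and docker-machine flags current when this recipe was written, with provider credentials such as DIGITALOCEAN_ACCESS_TOKEN already exported; this is our illustration, not the book’s code), driven from Python:

    import subprocess

    def sh(*args: str) -> str:
        """Run a command and return its stdout, raising on failure."""
        return subprocess.run(args, check=True, capture_output=True,
                              text=True).stdout.strip()

    # 1. Generate a cluster discovery token with the Swarm image.
    token = sh("docker", "run", "--rm", "swarm", "create")

    # 2. Start the Swarm master on one cloud provider (DigitalOcean here)...
    sh("docker-machine", "create", "-d", "digitalocean",
       "--swarm", "--swarm-master",
       "--swarm-discovery", "token://" + token,
       "swarm-master")

    # 3. ...and join a node from a different provider to the same token.
    sh("docker-machine", "create", "-d", "virtualbox",
       "--swarm", "--swarm-discovery", "token://" + token,
       "swarm-node-1")

Once the machines are up, pointing the local client at the master with docker-machine env --swarm swarm-master keeps the familiar Docker CLI experience while work is scheduled across both providers.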

Read more…

Four short links: 24 March 2015

Tricorder Prototype, Web Performance, 3D Licensing, and Network Simulation

  1. Tricorder Prototype — collar+earpiece, base station, diagnostic stick (lab tests for diabetes, pneumonia, TB, etc.), and scanning wand (examine lesions, otoscope for ears, even spirometer). (via Slashdot)
  2. Souders Joins SpeedCurve — During these engagements, I’ve seen that many of these companies don’t have the necessary tools to help them identify how performance is impacting (hurting) the user experience on their websites. There is even less information about ways to improve performance. The standard performance metric is page load time, but there’s often no correlation between page load time and the user’s experience. We need to shift from network-based metrics to user experience metrics that focus on rendering and when content becomes available. That’s exactly what Mark is doing at SpeedCurve, and why I’m excited to join him.
  3. 3 Steps for Licensing Your 3D-Printed Stuff (PDF) — this paper is not actually about choosing the right license for your 3D printable stuff (sorry about that). Instead, this paper aims to flesh out a copyright analysis for both physical objects and for the digital files that represent them, allowing you to really understand what parts of your 3D object you are—and are not—licensing. Understanding what you are licensing is key to choosing the right license. Simply put, this is because you cannot license what you do not legally control in the first place. There is no point in considering licenses that ultimately do not have the power to address whatever behavior you’re aiming to control. However, once you understand what it is you want to license, choosing the license itself is fairly straightforward. (via BoingBoing)
  4. Augmented Traffic Control — Facebook’s tool for simulating degraded network conditions.
Four short links: 20 February 2015

Robotic Garden, Kids Toys, MSFT ML, and Twitter Scale

  1. The Distributed Robotic Garden (MIT) — We consider plants, pots, and robots to be systems with different levels of mobility, sensing, actuation, and autonomy. (via Robohub)
  2. CogniToys Leverages Watson’s Brain to Befriend, Teach Your Kids (IEEE) — Through the dino, Watson’s algorithms can get to know each child that it interacts with, tailoring those interactions to the child’s age and interests.
  3. How Machine Learning Ate Microsoft (Infoworld) — Azure ML didn’t merely take the machine learning algorithms MSR had already handed over to product teams and stick them into a drag-and-drop visual designer. Microsoft has made the functionality available to developers who know the R statistical programming language and Python, which together are widely used in academic machine learning. Microsoft plans to integrate Azure ML closely with Revolution Analytics, the R startup it recently acquired.
  4. Handling Five Billion Sessions a Day in Real Time (Twitter) — infrastructure porn.