Applied DevOps and the potential of Docker

Docker's cultural impact within a software engineering organization can be dramatic.

Editor’s note: this post is from Karl Matthias and Sean P. Kane, authors of “Docker Up & Running,” a guide to quickly learn how to use Docker to create packaged images for easy management, testing, and deployment of software.

At the Python Developers Conference in Santa Clara, California, on March 15th, 2013, with no pre-announcement and little fanfare, Solomon Hykes, the founder and CEO of dotCloud, gave a 5-minute lightning talk in which he first introduced the world to a brand new tool for Linux called Docker. It was a response to the hardships of shipping software at scale in a fast-paced world, and it takes an approach that makes it easy to map organizational processes to the principles of DevOps.

The capabilities of the typical software engineering company have often not kept pace with the quickly evolving expectations of the average technology user. Users today expect fast, reliable systems with continuous improvements, ease of use, and broad integrations. Many in the industry see the principles of DevOps as a giant leap toward building organizations that meet the challenges of delivering high quality software in today’s market. Docker is aimed at these challenges.

Read more…

A “have-coffee” culture

Tending the DevOps victory garden.

Download a free copy of Building an Optimized Business, a curated collection of chapters from the O’Reilly Web Operations and Performance library. This post is an excerpt by J. Paul Reed from DevOps in Practice, one of the selections included in the curated collection.

Any discussion surrounding DevOps and its methodologies quickly comes to the often delicate issue of organizational dynamics and culture, at least if it’s an accurate treatment of the topic. There is often a tendency to downplay or gloss over these issues precisely because culture is thought of as a “squishy” thing, difficult to shape and change, and in some cases, to even address directly. But it doesn’t need to be this way.

Sam Hogenson, Vice President of Technology at Nordstrom, works hard to make sure it’s exactly the opposite: “At Nordstrom, we value these different experiences and we value the core of how you work, how you build relationships much more than whether or not you have subject matter expertise. It’s a successful formula.” Another part of that formula, Hogenson notes, is the ethos of the organization: “It’s a very empowered workforce, a very decentralized organization; I always remember the Nordstroms telling us ‘Treat this as if it were your name over the door: how would you run your business and take care of your customers?'” [Nordstrom infrastructure engineer Doug] Ireton described it as a “have-coffee culture: if you need to talk to someone, you go have coffee with them.”

Read more…

How we got to the HTTP/2 and HPACK RFCs

A brief history of SPDY and HTTP/2.

Download a free copy of HTTP/2. This post is an excerpt by Ilya Grigorik from High Performance Browser Networking, the essential guide to networking and web performance for web developers.

SPDY was an experimental protocol, developed at Google and announced in mid-2009, whose primary goal was to try to reduce the load latency of web pages by addressing some of the well-known performance limitations of HTTP/1.1. Specifically, the outlined project goals were set as follows:

  • Target a 50% reduction in page load time (PLT).
  • Avoid the need for any changes to content by website authors.
  • Minimize deployment complexity, avoid changes in network infrastructure.
  • Develop this new protocol in partnership with the open-source community.
  • Gather real performance data to (in)validate the experimental protocol.

To achieve the 50% PLT improvement, SPDY aimed to make more efficient use of the underlying TCP connection by introducing a new binary framing layer to enable request and response multiplexing, prioritization, and header compression.
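
To make that framing layer concrete, here is a minimal Python sketch of how an HTTP/2-style 9-byte frame header (24-bit payload length, 8-bit type, 8-bit flags, 31-bit stream identifier, as later standardized in RFC 7540) could be packed; the pack_frame_header helper and the placeholder header-block payloads are illustrative assumptions, not code from any real client or server:

    import struct

    def pack_frame_header(length, frame_type, flags, stream_id):
        """Pack an HTTP/2-style 9-byte frame header:
        24-bit payload length, 8-bit type, 8-bit flags, reserved bit + 31-bit stream ID."""
        if length >= 2 ** 24:
            raise ValueError("payload too large for a single frame")
        return struct.pack(
            ">BHBBI",
            (length >> 16) & 0xFF,   # high byte of the 24-bit length
            length & 0xFFFF,         # low 16 bits of the length
            frame_type,
            flags,
            stream_id & 0x7FFFFFFF,  # reserved bit cleared
        )

    # Two requests multiplexed on one connection: HEADERS frames on streams 1 and 3,
    # each followed (conceptually) by its compressed header block.
    HEADERS, END_HEADERS = 0x1, 0x4
    frame_a = pack_frame_header(len(b"compressed-header-block-a"), HEADERS, END_HEADERS, 1)
    frame_b = pack_frame_header(len(b"compressed-header-block-b"), HEADERS, END_HEADERS, 3)

Because every frame carries its own stream identifier, frames belonging to different requests can be interleaved on a single TCP connection and reassembled at the other end, which is what makes multiplexing and prioritization possible.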


Not long after the initial announcement, Mike Belshe and Roberto Peon, both software engineers at Google, shared their first results, documentation, and source code for the experimental implementation of the new SPDY protocol:

So far we have only tested SPDY in lab conditions. The initial results are very encouraging: when we download the top 25 websites over simulated home network connections, we see a significant improvement in performance—pages loaded up to 55% faster.

— A 2x Faster Web, Chromium Blog

Fast-forward to 2012 and the new experimental protocol was supported in Chrome, Firefox, and Opera, and a rapidly growing number of sites, both large (e.g. Google, Twitter, Facebook) and small, were deploying SPDY within their infrastructure. In effect, SPDY was on track to become a de facto standard through growing industry adoption.

Read more…

Deploy Continuous Improvement

Balancing the work it takes to improve capability against delivery work that provides value to customers.

Download a free copy of Building an Optimized Business, a curated collection of chapters from the O’Reilly Web Operations and Performance library. This post is an excerpt by Jez Humble, Joanne Molesky, and Barry O’Reilly from Lean Enterprise, one of the selections included in the curated collection.

In most enterprises, there is a distinction between the people who build and run software systems (often referred to as “IT”) and those who decide what the software should do and make the investment decisions (often called “the business”). These names are relics of a bygone age in which IT was considered a cost necessary to improve efficiencies of the business, not a creator of value for external customers by building products and services. These names and the functional separation have stuck in many organizations (as has the relationship between them, and the mindset that often goes with the relationship). Ultimately, we aim to remove this distinction. In high-performance organizations today, people who design, build, and run software-based products are an integral part of business; they are given — and accept — responsibility for customer outcomes. But getting to this state is hard, and it’s all too easy to slip back into the old ways of doing things.

Read more…

Ghosts in the machines

The secret to successful infrastructure automation is people.


“The trouble with automation is that it often gives us what we don’t need at the cost of what we do.” —Nicholas Carr, The Glass Cage: Automation and Us

Virtualization and cloud hosting platforms have pervasively decoupled infrastructure from its underlying hardware over the past decade. This has led to a massive shift towards what many are calling dynamic infrastructure, wherein infrastructure and the tools and services used to manage it are treated as code, allowing operations teams to adopt software approaches that have dramatically changed how they operate. But with automation comes a great deal of fear, uncertainty and doubt.
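
As a rough illustration of what treating infrastructure as code means in practice, the toy Python sketch below declares the desired state of two hypothetical hosts as plain data and computes the actions needed to converge on it; the host names, resource model, and plan function are invented for illustration and are not how any particular configuration management tool works:

    # Desired state is declared as data; a reconciliation step computes the
    # actions needed to converge the observed state toward it.
    desired_state = {
        "web-01": {"packages": {"nginx"}, "services": {"nginx"}},
        "web-02": {"packages": {"nginx"}, "services": {"nginx"}},
    }

    observed_state = {
        "web-01": {"packages": {"nginx"}, "services": set()},
        "web-02": {"packages": set(), "services": set()},
    }

    def plan(desired, observed):
        """Return the per-host actions required to converge observed state to desired state."""
        actions = []
        for host, want in desired.items():
            have = observed.get(host, {"packages": set(), "services": set()})
            for pkg in want["packages"] - have["packages"]:
                actions.append((host, "install", pkg))
            for svc in want["services"] - have["services"]:
                actions.append((host, "start", svc))
        return actions

    for host, verb, target in plan(desired_state, observed_state):
        print(f"{host}: {verb} {target}")  # a real tool would apply the change here

The point is that the description of the infrastructure lives in version control and can be reviewed, tested, and repeated like any other code, rather than existing only in a SysAdmin's head or a runbook.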

Common (mis)perceptions of automation tend to pop up at the extreme ends: either it will free your people from ever having to worry about mundane tasks and details, running intelligently in the background, or it will make SysAdmins irrelevant and eventually replace all IT jobs (and beyond). Of course, the truth is very much somewhere in between, and it relies on a fundamental rethinking of the relationship between humans and automation.

Read more…

The two-sided coin of Web performance

Hacking performance across your organization.

I’ve given Web performance talks where I get to show one of my favorite slides, illustrating the impact of third-party dependencies on load time. It’s the perfect use case for “those marketing people,” who overload pages with the tracking pixels and tags that make page load time go south. This, of course, would fuel the late-night pub discussion with fellow engineers about how much faster the Web would be if those marketing people would attend a basic Web performance 101 course.
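
As a back-of-the-envelope illustration of that slide's point, the small Python sketch below takes hypothetical per-request timings (the kind you might export from a HAR file) and estimates how much of the blocking time comes from third-party tags and tracking pixels; the URLs and numbers are made up for the example:

    # Hypothetical per-request timings for a single page load.
    requests = [
        {"url": "https://example.com/index.html",         "blocking_ms": 180, "third_party": False},
        {"url": "https://example.com/app.js",             "blocking_ms": 240, "third_party": False},
        {"url": "https://tracker.example-ads.net/tag.js", "blocking_ms": 310, "third_party": True},
        {"url": "https://pixel.example-cdn.io/1x1.gif",   "blocking_ms": 90,  "third_party": True},
    ]

    third_party_ms = sum(r["blocking_ms"] for r in requests if r["third_party"])
    total_ms = sum(r["blocking_ms"] for r in requests)
    print(f"third-party share of blocking time: {third_party_ms / total_ms:.0%}")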

I’ve also found myself discussing exactly this topic in a meeting. This time, however, I was the guy arguing to keep the tracking code, although I was well aware of the performance impact. So what happened?

Read more…