Managed DNS considered harmful

Outsourcing your DNS is not a magic bullet.

There is a tendency to let one’s guard down when it comes to threats to your IT systems. Absent an immediate “hair-on-fire” situation, we relax and assume all is well. Yet malicious activity such as hacking, phishing, malware, and DDoS attacks keeps accelerating in both frequency and intensity.

So it’s important to have a “Plan B” DNS solution in place and ready before a crisis hits. That way, even if you’re caught off guard, you still have a fallback and can respond appropriately.
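What “Plan B” usually means in practice is a second DNS provider that already carries your zone (slaved from the primary, or kept in sync by your provisioning scripts) and can be promoted on short notice. As a minimal sketch, assuming a hypothetical backup provider and the placeholder zone example.com, two dig queries are enough to confirm the standby is actually answering:

    # Is the backup provider already in the delegation? (names are placeholders)
    $ dig +short NS example.com
    ns1.primary-provider.example.
    ns1.backup-provider.example.

    # Does the backup answer for the zone on its own?
    $ dig +short A www.example.com @ns1.backup-provider.example
    192.0.2.10

If the second query times out or comes back empty, the backup isn’t ready, and finding that out now is far cheaper than finding it out mid-outage.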

DNS is one of those things nobody really thinks about, until it stops working. The first time easyDNS went off the air on April 15, 2003, it induced a type of existential crisis in me. That summer, after meditating intensely on the situation, I came away with the conclusion that the centralized managed DNS model, as we understood it then, was doomed.

My response at the time was a proposal to pivot to a DNS appliance with decentralized deployments but centralized monitoring and management. That concept was promptly shot down by my co-founders, and we’ve kept on with the centralized, hosted DNS model to this day.

The core problem is this: there are many reasons to outsource your DNS to a managed DNS provider. Those reasons include:

Read more…

How to create a Swarm cluster with Docker

Using Docker Machine to create a Swarm cluster across cloud providers.

Editor’s note: this is an Early Release excerpt from Chapter 7 of Docker Cookbook by Sébastien Goasguen. The recipes in this book will help developers go from zero knowledge to distributed applications packaged and deployed within a couple of chapters. One of the key value propositions of Docker is app portability. The following will show you how to use Docker Machine to create a Swarm cluster across cloud providers.

Problem

You understand how to create a Swarm cluster manually (see Recipe 7.3), but you would like to create one with nodes in multiple public cloud providers while keeping the user experience of the local Docker CLI.

Solution

Use Docker Machine to start Docker hosts in several cloud providers and bootstrap them automatically into a Swarm cluster.
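As a rough sketch of the workflow (using classic Swarm’s hosted token discovery; provider credentials and exact driver flags are omitted and vary by Docker Machine version, so treat this as illustrative rather than as the book’s recipe):

    # Generate a discovery token for the cluster (prints a token to reuse below)
    $ docker run --rm swarm create

    # Start the Swarm master on one cloud provider
    $ docker-machine create -d digitalocean \
        --swarm --swarm-master \
        --swarm-discovery token://<TOKEN> \
        swarm-master

    # Start a worker node on a different provider, joined via the same token
    $ docker-machine create -d azure \
        --swarm \
        --swarm-discovery token://<TOKEN> \
        swarm-node-1

    # Point the local Docker CLI at the whole cluster
    $ eval "$(docker-machine env --swarm swarm-master)"
    $ docker info

Once the environment is set, docker info should list nodes from both providers, and ordinary docker commands are scheduled across the cluster without any change to the local workflow.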

Read more…

How the DevOps revolution informs software architecture

The O'Reilly Radar Podcast: Neal Ford on the changing role of software architects and the rise of microservices.

In this episode of the Radar Podcast, O’Reilly’s Mac Slocum sits down with Neal Ford, a software architect and meme wrangler at ThoughtWorks, to talk about the changing role of software architects. They met up at our recent Software Architecture Conference in Boston — if you missed the event, you can sign up to be notified when the Complete Video Compilation of all sessions and talks is available.

Slocum started the conversation with the basics: what, exactly, does a software architect do? Ford noted that there’s no straightforward answer, but that the role really is a “pastiche” of development, soft skills and negotiation, and solving business domain problems. He acknowledged that the role historically has been negatively perceived as a non-coding, post-useful, ivory-tower deep thinker, but noted that this has been changing over the past five to 10 years as the role has evolved into real-world problem solving, as opposed to operating in abstractions:

“One of the problems in software, I think, is that you build everything on towers of abstractions, and so it’s very easy to get to the point where all you’re doing is playing with abstractions, and you don’t reify that back to the real world, and I think that’s the danger of this kind of ivory-tower architect. When you start looking at things like continuous delivery and continuous deployment, you have to take those operational concerns into account, and I think that is making the role of architect a lot more relevant now, because they are becoming much more involved in the entire software development ecosystem, not just the front edge of it.”

Read more…

Worship maintainers

The future is maintenance: build for the inevitable.

Technology has had a cult of newness for centuries. We hail innovators, cheer change, and fend off critics who might think new and change are coming too fast. Unfortunately, while that drives the cycle of creation, it also creates biases that damage what we create, reducing the benefits and increasing the costs.

Once-new things rapidly become ordinary “plumbing,” while maintenance becomes a cost center, something to complain about. “Green fields” and startups look ever more attractive because they offer opportunities to start fresh, with minimal connections to past technology decisions.

The problem, though, is that most of these new things — the ones that succeed enough to stay around — have a long maintenance cycle ahead of them. As Axel Rauschmayer put it:

“People who maintain stuff are the unsung heroes of software development.”

In a different context, Steve Hendricks of Historic Doors pointed out that:

“Low maintenance is the holy grail of our culture. We’ve gone so far that we’re willing to throw things away rather than fix them.”

That gets especially expensive. Heaping praise on the creators of new things while trying to minimize the costs of the maintainers is a recipe for disaster over the long term.

Read more…

What should replace the project paradigm?

Creating alignment at scale within enterprises.

The problems caused by using the project paradigm to deliver software systems are severe. The effects of projects on downstream teams such as release and operations were, for my money, most succinctly articulated in Evan Bottcher’s article “PROJECTS ARE EVIL AND MUST BE DESTROYED”. The end result (complex, heterogeneous production environments that are hard to change or even keep running) is due to the forces Charles Betz identifies in Architecture and Patterns for IT Service Management, Resource Planning, and Governance: Making Shoes for the Cobbler’s Children:

Because it is the best-understood area of IT activity, the project phase is often optimized at the expense of the other process areas, and therefore at the expense of the entire value chain. The challenge of IT project management is that broader value-chain objectives are often deemed “not in scope” for a particular project, and projects are not held accountable for their contributions to overall system entropy.

Furthermore, bundling work up into projects combines low-value features with high-value ones in order to deliver the “maximum viable product” that is the inevitable result of the large-batch death spiral. This spiral occurs when product owners try to stuff as many features as possible into the next release so they don’t have to wait for the one after it to get them delivered. As a result, the median cycle time for delivering features is often poorly correlated with their priority, which is a highly undesirable outcome.
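To make that concrete with made-up numbers: suppose ten features of two weeks each are bundled into one release, and only one of them is high value. Shipped as a batch, the high-value feature waits for everything else; shipped on its own, it could have been live months earlier:

    $ awk 'BEGIN {
        printf "big batch  : every feature waits %d weeks\n", 10 * 2
        printf "small batch: the high-value feature ships after %d weeks\n", 1 * 2
      }'
    big batch  : every feature waits 20 weeks
    small batch: the high-value feature ships after 2 weeks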

Why do we stick with it? Because our budgeting processes are designed to operate on projects, and project managers and the PMO know how to deliver them.

Since these are clearly poor reasons, what should we do instead?

Read more…

Public vs. private cloud: Price isn’t enough

The risk relative to the savings isn’t enough to justify a shift to public cloud.

This post was originally published on Limn This. The lightly edited version that follows is republished with permission.

Last October, Simon Wardley and I stood on a rainy sidewalk at 28th St. in New York City arguing politely (he’s British) about the future of cloud adoption. He argued, rightly, that the cost advantages from scale would be overwhelming compared to home-brew private clouds. He went on to argue, less certainly in my view, that this would lead inevitably to their wholesale and deep adoption across the enterprise market.

I think Simon bases his argument on something like the rational economic man theory of the enterprise. Or, more specifically, the rational economic chief financial officer (CFO). If the costs of a service provider are destined to be lower than the costs of internally operated alternatives, and your CFO is rational (most tend to be), then the conclusion is foregone.

And, of course, costs are going down just as predicted. Look at this post by Avi Deitcher: Does Amazon’s Web Services Pricing Follow Moore’s Law? I think the question posed in the title has a fairly obvious answer: no. Services aren’t just silicon; they include all manner of linear terms, such as labor, so the price decreases will almost certainly be slower than Moore’s Law. Still, his analysis of the costs of a modestly sized AWS solution versus the in-house competition is really useful.

Not only is AWS’ price dropping fast (56% in three years), but it’s significantly cheaper than building and operating a platform in house. Avi does the math for 600 instances over three years and finds that the cost for AWS would be $1.1 million (I don’t think this number considers out-year price decreases) versus $2.3 million for DIY. Your mileage might vary, but these numbers are a nice starting point for further discussion.
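For a sense of scale, and assuming those headline figures are all-in three-year totals, the per-instance arithmetic works out roughly like this:

    $ awk 'BEGIN {
        instance_months = 600 * 36        # 600 instances running for three years
        printf "AWS: $%.0f per instance-month\n", 1100000 / instance_months
        printf "DIY: $%.0f per instance-month\n", 2300000 / instance_months
      }'
    AWS: $51 per instance-month
    DIY: $106 per instance-month

Roughly a factor of two per instance, which is exactly the gap that frames the “is the savings worth the risk?” question the rest of this post takes up.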

These results raise an interesting question: if the numbers are so compelling, why did Walmart just reveal that they are building a ginormous private cloud? Why would anyone?

Read more…