FEATURED STORY

How to create a Swarm cluster with Docker

Using Docker Machine to create a Swarm cluster across cloud providers.

Editor’s note: this is an Early Release excerpt from Chapter 7 of Docker Cookbook by Sébastien Goasguen. The recipes in this book will help developers go from zero knowledge to distributed applications packaged and deployed within a couple of chapters. One of the key value propositions of Docker is app portability. The following will show you how to use Docker Machine to create a Swarm cluster across cloud providers.

Problem

You understand how to create a Swarm cluster manually (see Recipe 7.3), but you would like to create one with nodes in multiple public cloud providers while keeping the user experience of the local Docker CLI.

Solution

Use Docker Machine to start Docker hosts with several cloud providers and bootstrap them automatically into a Swarm cluster.

Warning

This is an experimental feature in Docker Machine and is subject to change.

The first thing to do is obtain a swarm discovery token. This token will be used during the bootstrapping process when starting the nodes of the cluster. As explained in Recipe 7.3, swarm features multiple discovery processes; in this recipe, we use the token-based discovery service hosted by Docker, Inc. A discovery token is obtained by running a container based on the swarm image and passing it the create command. Assuming we do not already have access to a Docker host, we use docker-machine to create one solely for this purpose.
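A minimal sketch of this bootstrapping step follows; the machine name local is illustrative, and the token printed by swarm create will be unique to your cluster:

  $ docker-machine create -d virtualbox local
  $ eval "$(docker-machine env local)"     # point the local Docker client at the new host
  $ docker run swarm create                # pulls the swarm image and prints a discovery token

Keep the token handy; it is passed to every node that joins the cluster.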

With the token in hand, we can use docker-machine and multiple public cloud drivers to start the nodes of the cluster: a swarm head node on VirtualBox, one worker on DigitalOcean, and another worker on Azure.
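The commands below sketch that layout. The machine names (swarm-head, worker-00, worker-01) are illustrative, <TOKEN> stands for the discovery token obtained above, and the credential flags shown for DigitalOcean and Azure are those of the Docker Machine drivers at the time of this early release, so they may differ in later versions:

  $ docker-machine create -d virtualbox --swarm --swarm-master \
      --swarm-discovery token://<TOKEN> swarm-head
  $ docker-machine create -d digitalocean \
      --digitalocean-access-token <DO_API_TOKEN> \
      --swarm --swarm-discovery token://<TOKEN> worker-00
  $ docker-machine create -d azure \
      --azure-subscription-id <SUBSCRIPTION_ID> \
      --azure-subscription-cert mycert.pem \
      --swarm --swarm-discovery token://<TOKEN> worker-01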

Tip

Do not start a swarm head in a public cloud and a worker on your localhost with VirtualBox. Chances are the head will not be able to route network traffic to your local worker node. It can be done, but you would have to open ports on your local router.

Your Swarm cluster is now ready: the head node is running locally in a VirtualBox VM, one worker node is running in DigitalOcean, and another in Azure. You can point your local Docker client, via docker-machine, at the head node running in VirtualBox and start using the swarm subcommands.
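For example, assuming the head node was created with the illustrative name swarm-head used above:

  $ eval "$(docker-machine env --swarm swarm-head)"   # target the swarm endpoint, not the single host
  $ docker info                                       # lists the nodes of the cluster

With the environment set, regular docker commands (ps, run, and so on) are sent to the Swarm scheduler instead of a single Docker host.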

Discussion

If you start a container, swarm will schedule it in round-robin fashion on the cluster. For example, you can start three nginx containers in a for loop.
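Something along these lines, assuming the stock nginx image and publishing port 80 on each container:

  $ for i in 1 2 3; do docker run -d -p 80:80 nginx; done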

That will lead to three nginx containers spread across the three nodes in your cluster. Remember that you will need to open port 80 on the instances running in the cloud to reach those containers.

Tip

Do not forget to remove the machines you started in the cloud.
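With the illustrative machine names used above, the cleanup would look something like this:

  $ docker-machine rm worker-00
  $ docker-machine rm worker-01

The local VirtualBox machines do not incur cloud charges, but they can be removed the same way.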

See Also

  • Using Docker Machine with Docker Swarm


Editor’s note: if you’re interested in learning more about networking at scale, you’ll want to check out Jay Edwards’ Distributed Systems training session at Velocity in Santa Clara May 27-29, 2015.


Worship maintainers

The future is maintenance: build for the inevitable.


Technology has had a cult of newness for centuries. We hail innovators, cheer change, and fend off critics who might think new and change are coming too fast. Unfortunately, while that drives the cycle of creation, it also creates biases that damage what we create, reducing the benefits and increasing the costs.

Once-new things rapidly become ordinary “plumbing,” while maintenance becomes a cost center, something to complain about. “Green fields” and startups look ever more attractive because they offer opportunities to start fresh, with minimal connections to past technology decisions.

The problem, though, is that most of these new things — the ones that succeed enough to stay around — have a long maintenance cycle ahead of them. As Axel Rauschmayer put it:

“People who maintain stuff are the unsung heroes of software development.”

In a different context, Steve Hendricks of Historic Doors pointed out:

“Low maintenance is the holy grail of our culture. We’ve gone so far that we’re willing to throw things away rather than fix them.”

That gets especially expensive. Heaping praise on the creators of new things while trying to minimize the costs of the maintainers is a recipe for disaster over the long term.


What should replace the project paradigm?

Creating alignment at scale within enterprises.


The problems caused by using the project paradigm to deliver software systems are severe. The effects of projects on downstream teams such as release and operations were, for my money, most succinctly articulated in Evan Bottcher’s article “PROJECTS ARE EVIL AND MUST BE DESTROYED”. The end result — complex, heterogeneous production environments that are hard to change or even keep running — is due to the forces Charles Betz identifies in Architecture and Patterns for IT Service Management, Resource Planning, and Governance: Making Shoes for the Cobbler’s Children:

Because it is the best-understood area of IT activity, the project phase is often optimized at the expense of the other process areas, and therefore at the expense of the entire value chain. The challenge of IT project management is that broader value-chain objectives are often deemed “not in scope” for a particular project, and projects are not held accountable for their contributions to overall system entropy.

Furthermore, bundling work up into projects combines low-value features with high-value features in order to deliver the “maximum viable product” that is the inevitable result of the large-batch death spiral. This occurs when product owners try to stuff as many features as possible into the next release so they don’t have to wait for the one after to get them delivered. As a result, the median cycle time for delivering features is often poorly correlated with their priority — a highly undesirable outcome.

Why do we stick with it? Because our budgeting processes are designed to operate on projects, and project managers and the PMO know how to deliver them.

Since these are clearly poor reasons, what should we do instead?



Public vs. private cloud: Price isn’t enough

The risk relative to the savings isn’t enough to justify a shift to public cloud.


This post was originally published on Limn This. The lightly edited version that follows is republished with permission.

Last October, Simon Wardley and I stood on a rainy sidewalk at 28th St. in New York City arguing politely (he’s British) about the future of cloud adoption. He argued, rightly, that the cost advantages from scale would be overwhelming compared to home-brew private clouds. He went on to argue, less certainly in my view, that this would lead inevitably to their wholesale and deep adoption across the enterprise market.

I think Simon bases his argument on something like the rational economic man theory of the enterprise. Or, more specifically, the rational economic chief financial officer (CFO). If the costs of a service provider are destined to be lower than the costs of internally operated alternatives, and your CFO is rational (most tend to be), then the conclusion is foregone.

And, of course, costs are going down just as they are predicted to. Look at this post by Avi Deitcher: Does Amazon’s Web Services Pricing Follow Moore’s Law? I think the question posed in the title has a fairly obvious answer. No. Services aren’t just silicon; they include all manner of linear terms, like labor, so the price decreases will almost certainly be slower than Moore’s Law, but his analysis of the costs of a modestly-sized AWS solution and in-house competition is really useful.

Not only is AWS’ price dropping fast (56% in three years), but it’s significantly cheaper than building and operating a platform in house. Avi does the math for 600 instances over three years and finds that the cost for AWS would be $1.1 million (I don’t think this number considers out-year price decreases) versus $2.3 million for DIY. Your mileage might vary, but these numbers are a nice starting point for further discussion.

These results raise an interesting question: if the numbers are so compelling, why did Walmart just reveal that they are building a ginormous private cloud? Why would anyone?


Empathy is a state of mind, not a specific technique

A design process paved with empathic observations will lead you, slowly and iteratively, to a better product.

Editor’s note: this post was originally published on the author’s blog, Exploring the world beyond mobile; this lightly edited version is republished here with permission.


If I’m ever asked what’s most important in UX design, I always reply “empathy.” It’s the core meta attribute, the driver that motivates everything else. Empathy encourages you to understand who uses your product, forces you to ask deeper questions, and motivates the many redesigns you go through to get a product right.

But empathy is a vague concept that isn’t strongly appreciated by others. There have been times, talking to product managers, when my empathy-driven fix-it list got a response like, “We appreciate that Scott, but we have so much to get done on the product, we don’t have time to tweak things like that right now.” You never feel quite so put in your place as when someone says that your job is “tweaking.”

The paradox of empathy is that while it drives us at a very deep level, and ultimately leads us to big, important insights, it usually starts small. The empathic process typically notices simple things like ineffective error messages, observed user workarounds, or overly complicated dialog boxes. Empathy starts with very modest steps. However, these small observations are the wedge that splits the log; it’s these initial insights, if you follow them far enough, that open up your mind and lead you to great products.



7 user research myths and mistakes

Finding the holes in qualitative and quantitative testing.

I can’t tell you how often I hear things from engineers like, “Oh, we don’t have to do user testing. We’ve got metrics.” Of course, you can almost forgive them when the designers are busy saying things like, “Why would we A/B test this new design? We know it’s better!”

In the debate over whether to use qualitative or quantitative research methods, there is plenty of wrong to go around. So, let’s look at some of the myths surrounding qualitative and quantitative research, and the most common mistakes people make when trying to use them.
