How to create a Swarm cluster with Docker

Using Docker Machine to create a Swarm cluster across cloud providers.

Editor’s note: this is an Early Release excerpt from Chapter 7 of Docker Cookbook by Sébastien Goasguen. The recipes in this book will help developers go from zero knowledge to distributed applications packaged and deployed within a couple of chapters. One of the key value propositions of Docker is app portability. The following will show you how to use Docker Machine to create a Swarm cluster across cloud providers.


Problem

You understand how to create a Swarm cluster manually (see Recipe 7.3), but you would like to create one with nodes in multiple public cloud providers while keeping the user experience of the local Docker CLI.


Solution

Use Docker Machine to start Docker hosts with several cloud providers and bootstrap them automatically into a Swarm cluster.


Discussion

This is an experimental feature in Docker Machine and is subject to change.

The first thing to do is to obtain a Swarm discovery token. This will be used during the bootstrapping process when starting the nodes of the cluster. As explained in Recipe 7.3, Swarm features multiple discovery processes. In this recipe, we use the hosted discovery service run by Docker, Inc. A discovery token is obtained by running a container based on the swarm image with the create command. Assuming we do not have access to a Docker host already, we use docker-machine to create one solely for this purpose.

$ ./docker-machine create -d virtualbox local
INFO[0000] Creating SSH key...
INFO[0042] To point your Docker client at it, run this in your shell: $(docker-machine env local)
$ $(docker-machine env local)
$ docker run swarm create
31e61710169a7d3568502b0e9fb09d66
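The create command prints the discovery token on stdout. As a small convenience (not part of the original recipe), you can capture it in a shell variable instead of copying and pasting it into each subsequent command:

```shell
# Capture the Swarm discovery token for reuse in later docker-machine commands
TOKEN=$(docker run swarm create)
echo "Discovery token: $TOKEN"
```

You could then pass `--swarm-discovery token://$TOKEN` to the docker-machine invocations below.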

With the token in hand, we can use docker-machine with multiple public cloud drivers to start the nodes of the cluster: a Swarm head node on VirtualBox, one worker on DigitalOcean, and another worker on Azure.


Do not start a Swarm head in a public cloud and a worker on your localhost with VirtualBox. Chances are the head will not be able to route network traffic to your local worker node. It is possible, but you would have to open ports on your local router.

$ docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery token://31e61710169a7d3568502b0e9fb09d66 head
INFO[0000] Creating SSH key...
INFO[0069] To point your Docker client at it, run this in your shell: $(docker-machine env head)
$ docker-machine create -d digitalocean --swarm --swarm-discovery token://31e61710169a7d3568502b0e9fb09d66 worker-00
$ docker-machine create -d azure --swarm --swarm-discovery token://31e61710169a7d3568502b0e9fb09d66 swarm-worker-01
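Before pointing your client at the cluster, you can verify that all three machines are up and that the workers have registered with the hosted discovery service. A quick check (the exact output depends on your environment):

```shell
# List all machines managed by docker-machine (head, worker-00, swarm-worker-01, local)
docker-machine ls

# Ask the hosted discovery service which node addresses joined this token
docker run swarm list token://31e61710169a7d3568502b0e9fb09d66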

Your Swarm cluster is now ready. Your Swarm head node is running locally in a VirtualBox VM, one worker node is running in DigitalOcean, and another one in Azure. You can point your local Docker client at the head node and start using the regular Docker commands against the cluster:

$ $(docker-machine env --swarm head)
$ docker info
Containers: 4
Nodes: 3
 head
  └ Containers: 2
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 999.9 MiB
 worker-00
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 490 MiB
 swarm-worker-01
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.639 GiB
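Swarm's scheduler also honors constraint filters passed as environment variables, so you can pin a container to a specific node instead of letting the scheduler choose. A sketch, assuming the node names used above:

```shell
# Schedule an nginx container on worker-00 only, using a Swarm constraint filter
docker run -d -e constraint:node==worker-00 nginx
```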


If you start a container, Swarm will schedule it in round-robin fashion on the cluster. For example, start three nginx containers in a for loop:

$ for i in `seq 1 3`;do docker run -d -p 80:80 nginx;done

This will lead to three nginx containers, one on each of the three nodes in your cluster. Remember that you will need to open port 80 on the instances running in the cloud to access the containers.

$ docker ps
CONTAINER ID    IMAGE       COMMAND                ... PORTS                                NAMES
9bff07d8ee18    nginx:1.7   "nginx -g 'daemon of   ... 443/tcp,>80/tcp   swarm-worker-01/loving_torvalds
457ed59c9bb3    nginx:1.7   "nginx -g 'daemon of   ... 443/tcp,>80/tcp    worker-00/drunk_swartz
6013be18cdbc    nginx:1.7   "nginx -g 'daemon of   ... 443/tcp,>80/tcp   head/condescending_galileo
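To confirm that a container is actually reachable, you can combine docker-machine ip with curl (this assumes port 80 has been opened on the cloud instance, as noted above):

```shell
# Fetch the nginx welcome page from the DigitalOcean worker
curl -s http://$(docker-machine ip worker-00) | head -n 4
```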


Do not forget to remove the machines you started in the cloud.
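For example, assuming the machine names used in this recipe:

```shell
# Tear down the cloud instances first, then the local VirtualBox VMs
docker-machine rm worker-00 swarm-worker-01
docker-machine rm head local
```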

See Also

  • Using Docker Machine with Docker Swarm

