"containers" entries

Swarm v. Fleet v. Kubernetes v. Mesos

Comparing different orchestration tools.

Most software systems evolve over time. New features are added and old ones pruned. Fluctuating user demand means an efficient system must be able to quickly scale resources up and down. Demands for near zero-downtime require automatic fail-over to pre-provisioned back-up systems, normally in a separate data centre or region.

On top of this, organizations often have multiple such systems to run, or need to run occasional tasks such as data-mining that are separate from the main system, but require significant resources or talk to the existing system.

When using multiple resources, it is important to make sure they are efficiently used — not sitting idle — but can still cope with spikes in demand. Balancing cost-effectiveness against the ability to quickly scale is a difficult task that can be approached in a variety of ways.

All of this means that the running of a non-trivial system is full of administrative tasks and challenges, the complexity of which should not be underestimated. It quickly becomes impossible to look after machines on an individual level; rather than patching and updating machines one-by-one they must be treated identically. When a machine develops a problem it should be destroyed and replaced, rather than nursed back to health.

Various software tools and solutions exist to help with these challenges. Let’s focus on orchestration tools, which help make all the pieces work together: they work with the cluster to start containers on appropriate hosts and connect them to one another. Along the way, we’ll consider scaling and automatic failover, which are important features.
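
As a rough illustration of what such tools automate (a minimal sketch, not taken from the book): a classic Docker Swarm manager exposes the ordinary Docker API, so asking it to run a container is enough for the cluster to schedule that container onto an appropriate host. The endpoint address, image, and names below are assumptions.

    # Minimal sketch using the Docker SDK for Python. Pointing the client at a
    # Swarm manager (hypothetical address) rather than a single daemon means the
    # cluster, not you, picks the host that runs the container.
    import docker

    # The endpoint is an assumption; a real setup would also configure TLS.
    client = docker.DockerClient(base_url="tcp://swarm-manager.example:2375")

    container = client.containers.run(
        "nginx:1.25",            # illustrative image
        detach=True,
        name="web",
        ports={"80/tcp": 8080},  # publish container port 80 on host port 8080
    )
    print(container.name, container.status)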

Boost your career with new levels of automation

Elevate automation through orchestration.

As sysadmins we have been responsible for running applications for decades. We have done everything to meet demanding SLAs, including “automating all the things” and even trading sleep cycles to rescue applications from production fires. While we have earned many battle scars and can step back and admire fully automated deployment pipelines, it feels like there has always been something missing. Our infrastructure still feels like an accident waiting to happen, and somehow, no matter how much we manage to automate, the expense of infrastructure continues to increase.

The root of this feeling comes from the fact that many of our tools don’t provide the proper insight into what’s really going on, and they require us to reverse engineer applications in order to effectively monitor them and recover from failures. Today many people bolt on monitoring solutions that attempt to probe applications from the outside and report “health” status to a centralized monitoring system, which ends up riddled with false alarms, or with alarms that are not worth looking into because there is no clear path to resolution.

What makes this worse is how we typically handle common failure scenarios such as node failures. Today many of us are forced to statically assign applications to machines and manage resource allocations on a spreadsheet. It’s very common to assign a single application to a VM to avoid dependency conflicts and ensure proper resource allocations. Many of the tools in our tool belt have been optimized for this pattern, and the results are less than optimal. Sure, this is better than doing it manually, but current methods result in low resource utilization, which means our EC2 bills continue to increase — because the more you automate, the more things people want to do.

How do we reverse course on this situation?

Coding in a cloud-based enterprise

Mapping the future of development by designing for distributed architectures.

With the advent of DevOps and various Platform-as-a-Service (PaaS) environments, many complex business requirements need to be met within a much shorter timeframe. The Internet of Things (IoT) is also changing how established applications and infrastructures are constructed. As a result of these converging trends, the enterprise IT landscape is becoming increasingly distributed, and the industry is starting to map how all the various components — from networking and middleware platforms, to ERP systems and microservices — will come together to create a new development paradigm that exists solely in the cloud.

The cloud-native future

Moving beyond ad-hoc automation to take advantage of patterns that deliver predictable capabilities.

Can you release new features to your customers every week? Every day? Every hour? Do new developers deploy code on their first day, or even during job interviews? Can you sleep soundly after a new hire’s deployment knowing your applications are all running perfectly fine? A rapid release cadence with the processes, tools, and culture that support the safe and reliable operation of cloud-native applications has become the key strategic factor for software-driven organizations who are shipping software faster with reduced risk. When you are able to release software more rapidly, you get a tighter feedback loop that allows you to respond more effectively to the needs of customers.

Continuous delivery is why software is becoming cloud-native: shipping software faster to reduce the time of your feedback loop. DevOps is how we approach the cultural and technical changes required to fully implement a cloud-native strategy. Microservices is the software architecture pattern used most successfully to expand your development and delivery operations and avoid slow, risky, monolithic deployment strategies. It’s difficult to succeed, for example, with a microservices strategy when you haven’t established a “fail fast” and “automate first” DevOps culture.

Continuous delivery, DevOps, and microservices describe the why, how, and what of being cloud-native. These competitive advantages are quickly becoming the ante to play the software game. In the most advanced expression of these concepts they are intertwined to the point of being inseparable. This is what it means to be cloud-native.

Set up Kubernetes with a Docker compose one-liner

Start exploring Kubernetes with minimal effort.

I had not looked at Kubernetes in over a month. It is a fast-paced project, so it is hard to keep up. If you have not looked at Kubernetes, it is roughly a cluster manager for containers: it takes a set of Docker hosts under management and schedules groups of containers on them. Kubernetes was open sourced by Google around June last year to bring all of Google’s knowledge of working with containers to us, a.k.a. the people. :) There are a lot of container schedulers (or orchestrators, if you wish) out there: Citadel, Docker Swarm, Mesos with the Marathon framework, Cloud Foundry Lattice, and so on. The Docker ecosystem is booming, and our heads are spinning.

What I find very interesting about Kubernetes is the concept of replication controllers. Not only can you schedule groups of colocated containers together in a cluster, but you can also define replica sets. Say you have a container you want to scale up or down: you can define a replication controller and use it to resize the number of containers running. That is great for scaling when the load dictates it, but it is also great when you want to replace a container with a new image. Kubernetes also exposes a concept of services: basically, a way to expose a container application to all the hosts in your cluster as if it were running locally. Think of the ambassador pattern from the early Docker days, but on steroids.
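
To make the replication controller idea concrete, here is a minimal sketch using the official Kubernetes Python client. It is illustrative only (it is not the Compose one-liner the post describes), and the resource name, labels, image, and replica counts are assumptions.

    # Sketch: create a replication controller that keeps 3 nginx pods running,
    # then resize it. Names, labels, image, and counts are illustrative.
    from kubernetes import client, config

    config.load_kube_config()          # read the local kubeconfig
    core = client.CoreV1Api()

    rc = client.V1ReplicationController(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1ReplicationControllerSpec(
            replicas=3,                # desired number of pods
            selector={"app": "web"},
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.25")]
                ),
            ),
        ),
    )
    core.create_namespaced_replication_controller(namespace="default", body=rc)

    # Scaling up or down is just a change to the desired replica count.
    core.patch_namespaced_replication_controller(
        name="web", namespace="default", body={"spec": {"replicas": 5}}
    )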

Four short links: 23 June 2015

Irregular Periodicity, Facebook Beacons, Industry 4.0, and Universal Container

  1. Fast Lomb-Scargle Periodograms in Python — a classic method for finding periodicity in irregularly-sampled data (a short sketch of the basic method follows this list).
  2. Facebook Bluetooth Beacons — free for you to use and help people see more information about your business whenever they use Facebook during their visit.
  3. Industry 4.0 — stop gagging at the term. Interesting examples of connectivity and data improving manufacturing. Human-machine interfaces: Logistics company Knapp AG developed a picking technology using augmented reality. Pickers wear a headset that presents vital information on a see-through display, helping them locate items more quickly and precisely. And with both hands free, they can build stronger and more efficient pallets, with fragile items safeguarded. An integrated camera captures serial and lot ID numbers for real-time stock tracking. Error rates are down by 40%, among many other benefits. Digital-to-physical transfer: Local Motors builds cars almost entirely through 3-D printing, with a design crowdsourced from an online community. It can build a new model from scratch in a year, far less than the industry average of six. Vauxhall and GM, among others, still bend a lot of metal, but also use 3-D printing and rapid prototyping to minimize their time to market. (via Quartz)
  4. runC — a lightweight universal runtime container, by the Open Container Project. (OCP = multi-vendor initiative in the hands of the Linux Foundation)
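
For the first link above, here is a minimal sketch of the underlying method using SciPy. The linked post covers faster implementations; the sample times, signal frequency, and noise level below are synthetic assumptions.

    # Lomb-Scargle sketch: recover the dominant frequency of an irregularly
    # sampled, noisy sine wave. All data here is synthetic and illustrative.
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 100, 300))    # irregular sample times
    true_freq = 0.7                          # cycles per unit time (assumed)
    y = np.sin(2 * np.pi * true_freq * t) + 0.3 * rng.normal(size=t.size)

    freqs = np.linspace(0.01, 2.0, 5000)     # trial frequencies to scan
    power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs)  # expects angular freqs

    print("strongest peak near %.2f cycles per unit time" % freqs[power.argmax()])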

Four short links: 10 June 2015

Product Sins, Container Satire, Dong Detection, and Evolving Code Designs

  1. The 11 Deadly Sins of Product Development (O’Reilly Radar) — they’re traps that are easy to fall into.
  2. It’s the Future — satire, but like all good satire it’s built on a rich vein of truth. Genuine guffaw funny, but Caution: Contains Rude Words.
  3. Difficulty of Dong Detection — accessible piece about how automated “inappropriate” detection remains elusive. (via Mind Hacks)
  4. Evolution of Code Design at Facebook — you may not have Facebook-scale scale problems, but if you’re having scale problems then Facebook’s evolution (not just their solutions) will interest you.

Applied DevOps and the potential of Docker

The cultural impact within a software engineering organization can be dramatic.

Editor’s note: this post is from Karl Matthias and Sean P. Kane, authors of “Docker Up & Running,” a guide to quickly learn how to use Docker to create packaged images for easy management, testing, and deployment of software.

At the Python Developers Conference in Santa Clara, California, on March 15th, 2013, with no pre-announcement and little fanfare, Solomon Hykes, the founder and CEO of dotCloud, gave a 5-minute lightning talk where he first introduced the world to a brand new tool for Linux called Docker. It was a response to the hardships of shipping software at scale in a fast-paced world, and takes an approach that makes it easy to map organizational processes to the principles of DevOps.

The capabilities of the typical software engineering company have often not kept pace with the quickly evolving expectations of the average technology user. Users today expect fast, reliable systems with continuous improvements, ease of use, and broad integrations. Many in the industry see the principles of DevOps as a giant leap toward building organizations that meet the challenges of delivering high quality software in today’s market. Docker is aimed at these challenges.

Four short links: 19 May 2015

Wrist Interactions, Kubernetes Open Source Success, Product Quality, and Value of Privacy

  1. Android Wear vs Apple Watch (Luke Wroblewski) — comparison of interactions and experiences.
  2. Eric Brewer on Kubernetes — interesting not only for insights into Google’s efforts around Kubernetes but for: There’s so much excitement we can hardly handle all the pull requests. I think we’re committing, based on the GitHub log, something like 40 per day right now, and the demand is higher than that. Each of those takes reviews and, of course, there’s a wide variety of quality on those. Some are easy to review and some are quite hard to review. It’s a success problem, and we’re happy to have it. We did scale up the team to try and improve its velocity, but also just improve our ability to interact with all of the open source world that legitimately wants to contribute and has a lot to contribute. I’m very excited that the velocity is here, but it’s moving so fast it’s hard to even know all the things that change day to day. Makes a welcome change from the code dumps that are some of Google’s other high-profile projects.
  3. We Don’t Sell Saddles Here — Stewart Butterfield, to his team, on product development and quality. Every word of this is true for every other product, too.
  4. What is Privacy Worth? (PDF) — When endowed with the $10 untrackable card, 60.0% of subjects claimed they would keep it; however, when endowed with the $12 trackable card only 33.3% of subjects claimed they would switch to the untrackable card. […] This research raises doubts about individuals’ abilities to rationally navigate issues of privacy. From choosing whether or not to join a grocery loyalty program, to posting embarrassing personal information on a public website, individuals constantly make privacy-relevant decisions which impact their well-being. The finding that non-normative factors powerfully influence individual privacy valuations may signal the appropriateness of policy interventions.

Four short links: 7 May 2015

Predicting Hits, Pricing Strategies, Quis Calculiet Shifty Custodes, Docker Security

  1. Predicting a Billboard Music Hit (YouTube) — Shazam VP of Music and Platforms at Strata London. With relative accuracy, we can predict 33 days out what song will go to No. 1 on the Billboard charts in the U.S.
  2. Psychological Pricing Strategies — a handy wrap-up of evil^wuseful pricing strategies to know.
  3. What Two Programmers Have Revealed So Far About Seattle Police Officers Who Are Still in Uniform — through their shrewd use of Washington’s Public Records Act, the two Seattle residents are now the closest thing the city has to a civilian police-oversight board. In the last year and a half, they have acquired hundreds of reports, videos, and 911 calls related to the Seattle Police Department’s internal investigations of officer misconduct between 2010 and 2013. And though they have only combed through a small portion of the data, they say they have found several instances of officers appearing to lie, use racist language, and use excessive force—with no consequences. In fact, they believe that the Office of Professional Accountability (OPA) has systematically “run interference” for cops. In the aforementioned cases of alleged officer misconduct, all of the involved officers were exonerated and still remain on the force.
  4. Understanding Docker Security and Best Practices — explanation of container security and a benchmark for security practices, though email addresses will need to be surrendered in exchange for the good info.