Boost your career with new levels of automation

Elevate automation through orchestration.


As sysadmins, we have been responsible for running applications for decades. We have done everything to meet demanding SLAs, including “automating all the things” and even trading sleep cycles to rescue applications from production fires. While we have earned many battle scars and can step back and admire fully automated deployment pipelines, it feels like something has always been missing. Our infrastructure still feels like an accident waiting to happen, and somehow, no matter how much we manage to automate, the expense of infrastructure continues to increase.

This feeling stems from the fact that many of our tools don’t provide real insight into what’s going on; they require us to reverse engineer applications in order to monitor them effectively and recover from failures. Today many people bolt on monitoring solutions that probe applications from the outside and report “health” status to a centralized monitoring system, which ends up riddled with false alarms, or with alarms that aren’t worth investigating because there is no clear path to resolution.
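
To make that pattern concrete, here is a minimal sketch of the kind of outside-in probe described above. The service URL and /health endpoint are hypothetical, and a real deployment would use a monitoring system rather than a hand-rolled loop:

```python
# A minimal sketch of the "bolt-on" outside-in probe described above.
# The service URL and /health endpoint are hypothetical.
import time
import urllib.request

SERVICE_URL = "http://app.example.com/health"  # hypothetical endpoint

def probe(url: str, timeout: float = 2.0) -> bool:
    """Return True if the service answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

while True:
    if not probe(SERVICE_URL):
        # All the probe can say is "something looks wrong" -- it has no
        # insight into why, which is exactly the gap described above.
        print("ALERT: health check failed for", SERVICE_URL)
    time.sleep(30)
```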

What makes this worse is how we typically handle common failure scenarios such as node failures. Today many of us are forced to statically assign applications to machines and manage resource allocations on a spreadsheet. It’s very common to assign a single application to a VM to avoid dependency conflicts and ensure proper resource allocations. Many of the tools in our tool belt have been optimized for this pattern, and the results are less than optimal. Sure, this is better than doing it manually, but current methods result in low resource utilization, which means our EC2 bills continue to increase, because the more you automate, the more things people want to do.
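
A toy back-of-the-envelope calculation, with made-up numbers, shows how much capacity the one-app-per-VM pattern leaves idle compared with packing the same workloads onto shared machines:

```python
# Toy utilization comparison with made-up numbers: 32 small apps, each
# needing 0.5 CPU cores, placed one-per-VM on 4-core machines versus
# bin-packed onto as few 4-core machines as they fit.
import math

apps = 32                 # hypothetical fleet size
cores_per_app = 0.5       # hypothetical per-app requirement
cores_per_vm = 4

one_per_vm = apps                                        # one VM per app
packed = math.ceil(apps * cores_per_app / cores_per_vm)  # shared VMs

for label, vms in (("one app per VM", one_per_vm), ("bin-packed", packed)):
    utilization = (apps * cores_per_app) / (vms * cores_per_vm)
    print(f"{label}: {vms} VMs, {utilization:.0%} CPU utilization")

# one app per VM: 32 VMs, 12% CPU utilization
# bin-packed: 4 VMs, 100% CPU utilization
```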

How do we reverse course on this situation? Read more…

Four short links: 7 October 2015

Time for Change, Face Recognition, Correct Monitoring, and Surveillance Infrastructure

  1. The Uncertain Future of Emotion Analytics — A year before the launch of the first mass-produced personal computer, British academic David Collingridge wrote in his book “The Social Control of Technology” that “when change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult, and time consuming.”
  2. Automatic Face Recognition (Bruce Schneier) — Without meaningful regulation, we’re moving into a world where governments and corporations will be able to identify people both in real time and backwards in time, remotely and in secret, without consent or recourse.
  3. Really Monitoring Your Systems — If you are not measuring and showing the maximum value, then you are hiding something. The number one indicator you should never get rid of is the maximum value. That’s not noise — it’s the signal; the rest is noise. (A short sketch of this idea follows the list.)
  4. Haunted by Data (Maciej Ceglowski) — You can’t just set up an elaborate surveillance infrastructure and then decide to ignore it. These data pipelines take on an institutional life of their own, and it doesn’t help that people speak of the “data-driven organization” with the same religious fervor as a “Christ-centered life.”
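
As a toy illustration of the maximum-value point in item 3, here is a minimal sketch with made-up latency samples, showing how an average smooths away exactly the outlier a user actually experiences:

```python
# Made-up latency samples (ms): mostly fast, with one slow outlier of the
# kind a real user actually experiences.
samples = [12, 14, 11, 13, 12, 15, 11, 950]

mean_ms = sum(samples) / len(samples)
max_ms = max(samples)

print(f"mean: {mean_ms:.0f} ms")  # 130 ms -- the outlier is smoothed away
print(f"max:  {max_ms} ms")       # 950 ms -- the signal worth alerting on
```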

How virtual reality can make the real world a better place

Can VR become “the ultimate empathy machine”?

Virtual reality (VR) can make the impossible possible — the rules of physical reality need no longer apply. In VR, you strap on a special headset and leave the real world behind to enter a virtual one. You can fly like a bird high above Manhattan, or experience the feeling of weightlessness as an astronaut on a spaceship.

VR relies on the illusion of being deeply engrossed in another space and time, far away from your current reality. In a split second, you can travel to exotic locales or be on stage at a concert with your favorite musician. Gaming and entertainment are natural fits for VR experiences. A startup called The Void plans to open a set of immersive virtual reality theme parks called Virtual Entertainment Centers, with the first one opening in Pleasant Grove, Utah, by June 2016.

This is an exciting time for developers and designers to be defining VR as a new experience medium. However, as the technology improves and consumer hardware and content become available in VR, we must ask: how can this new technology be applied to benefit humanity?

As it turns out, this question is being explored on a few early fronts. For example, SnowWorld, developed at the University of Washington Human Interface Technology (HIT) Lab in 1996 by Hunter Hoffman and David Patterson, was the first immersive VR world designed to reduce pain in adults and children. SnowWorld was specifically developed to help burn patients during wound care. Read more…


Buddy Michini on commercial drones

The O’Reilly Solid Podcast: Drone safety, trust, and real-time data analysis.


Subscribe to the O’Reilly Solid Podcast for insight and analysis about the Internet of Things and the worlds of hardware, software, and manufacturing.

In our new episode of the Solid Podcast, we talk with Buddy Michini, CTO of Airware, which makes a platform for commercial drones. We cover some potentially game-changing research in localization and mapping, and onboard computational abilities that might eventually make it possible for drones to improve their flight intelligence by analyzing their imagery in real time.

Among the general public, the best-understood use case for drones is package delivery, which obscures many other promising applications (and perhaps threatens to become the Internet-connected refrigerator of autonomous aircraft). There’s also widespread (and understandable) fear of drones. “We need to make drones do things to improve our lives and our world,” Buddy says. “That will get people to accept drones into their lives a little bit more.” Read more…


Transforming the experience of sound and music

The O’Reilly Radar Podcast: Poppy Crum on sensory perception, algorithm design, and fundamental changes in music.

Subscribe to the O’Reilly Radar Podcast to track the technologies and people that will shape our world in the years to come.


In this week’s Radar Podcast, author and entrepreneur Alistair Croll, who also co-chairs our Strata + Hadoop World conference, talks music science with Poppy Crum, senior principal scientist at Dolby Laboratories and a consulting professor at Stanford.

Their wide-ranging discussion covers fundamental changes in the music industry, sensory perception and algorithm design, and what the future of music might look like.

Here are a few snippets from their conversation:

As we see transformations to the next stage of how we consume content, things that are becoming very prevalent are more and more metadata. More and more information about the sounds, information about personalization. You aren’t given the answer; you’re given information and opportunities to have a closer tie to the artist’s intent because more information about the artist’s intent can be captured so that when you actually experience the sound or the music, that information is there to dictate how it deals with your personal environment.

Today, Dolby Atmos and other technologies have transformed [how we experience sound in the cinema] quite substantially, where if I’m a mixer, I can take a sound and can mix now, say, instead of seven channels, I can mix 128 sounds, and each one of those sounds has a data stream associated with it. That data stream carries information. It’s not going to a particular set of speakers; it has x, y, z coordinates, it has information about the diffusivity of that sound. Read more…
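
As a rough illustration of the object-based approach Crum describes, here is a minimal sketch of per-sound metadata. The field names are invented for illustration; the actual Atmos data format differs:

```python
# A rough sketch of object-based audio metadata, as described above: each
# sound carries its own data stream with spatial coordinates and a
# diffusivity value, rather than being baked into a fixed speaker channel.
# Field names are invented; the real Atmos format differs.
from dataclasses import dataclass

@dataclass
class AudioObject:
    name: str
    x: float            # position in the room, not a speaker assignment
    y: float
    z: float
    diffusivity: float  # 0.0 = point source, 1.0 = fully diffuse

# A mix can carry many such objects (the interview mentions up to 128
# sounds); the playback system decides which speakers render each one.
mix = [
    AudioObject("rain", x=0.0, y=0.0, z=1.0, diffusivity=0.9),
    AudioObject("dialogue", x=0.0, y=1.0, z=0.0, diffusivity=0.1),
]
for obj in mix:
    print(obj)
```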


Contrasting architecture patterns with design patterns

How both kinds of patterns can add clarity and understanding to your project.

Developers are accustomed to design patterns, as popularized in the book Design Patterns by Gamma et al. Each pattern describes a common problem in object-oriented software development along with a solution, visualized via class diagrams. In the Software Architecture Fundamentals workshop, Mark Richards and I discuss a variety of architecture patterns, such as Layered, Micro-Kernel, and SOA. However, architecture patterns differ from design patterns in several important ways.
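
For readers who haven't encountered the book, a minimal sketch of one classic design pattern, Strategy, shows the class-level scope being contrasted here (Python is used for brevity; the book's examples are in C++ and Smalltalk):

```python
# A minimal Strategy pattern sketch: behavior is encapsulated in
# interchangeable classes, and the context delegates to whichever
# strategy it is handed. The scope is a handful of classes.
from abc import ABC, abstractmethod

class SortStrategy(ABC):
    @abstractmethod
    def sort(self, items: list) -> list: ...

class Ascending(SortStrategy):
    def sort(self, items: list) -> list:
        return sorted(items)

class Descending(SortStrategy):
    def sort(self, items: list) -> list:
        return sorted(items, reverse=True)

class Collection:
    """Context class: delegates sorting to the strategy it is given."""
    def __init__(self, items: list, strategy: SortStrategy):
        self.items, self.strategy = items, strategy

    def sorted_items(self) -> list:
        return self.strategy.sort(self.items)

print(Collection([3, 1, 2], Ascending()).sorted_items())   # [1, 2, 3]
print(Collection([3, 1, 2], Descending()).sorted_items())  # [3, 2, 1]
```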

Components rather than classes

Architectural elements tend to be collections of classes or modules, generally represented as boxes. Architecture diagrams depict the system from the highest level looking down, whereas class diagrams sit at the most atomic level. The purpose of architecture patterns is to show how the major parts of the system fit together, how messages and data flow through the system, and other structural concerns.

Architecture diagrams tend to be less rigidly defined than class diagrams. Often the purpose of the diagram is to show just one aspect of the system, and simple iconography works best. For example, one aspect of the Layered architecture pattern is whether each layer is closed (accessible only from the layer directly above) or open (allowed to be bypassed when it adds no value), as shown in Figure 1.

Figure 1: Layered architecture with mixed closed and open layers

This feature isn’t the most important part of the architecture, but it is worth calling out because it affects the efficacy of the pattern. If developers violate the principle (e.g., by querying the data layer directly from the presentation layer), they compromise the separation of concerns and layer isolation that are the pattern’s prime benefits. Often an architecture pattern consists of several diagrams, each showing an important dimension.
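
To make the closed-layer rule concrete, here is a minimal sketch with invented module names: each layer is handed only the layer directly beneath it, so presentation code has no reference through which to query the data layer directly.

```python
# A minimal sketch of closed layering with invented names. Each layer
# receives only the layer directly beneath it, so the presentation layer
# cannot "reach around" the business layer to the data layer.
class DataLayer:
    def query_orders(self, customer_id: int) -> list:
        return [{"customer": customer_id, "total": 42.0}]  # stub data

class BusinessLayer:
    def __init__(self, data: DataLayer):
        self._data = data

    def order_summary(self, customer_id: int) -> str:
        orders = self._data.query_orders(customer_id)
        total = sum(order["total"] for order in orders)
        return f"{len(orders)} order(s), total {total}"

class PresentationLayer:
    def __init__(self, business: BusinessLayer):
        # No DataLayer reference here: the business layer is closed.
        self._business = business

    def render(self, customer_id: int) -> None:
        print(self._business.order_summary(customer_id))

PresentationLayer(BusinessLayer(DataLayer())).render(customer_id=7)
```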

Read more…