"risk" entries

Tim O’Reilly Urges Developers and Entrepreneurs to Make Moonshots

OSCON Mainstage Talks: Create more value than you capture

At OSCON 2013, Tim (@timoreilly) asked us to aim higher and work on difficult problems while highlighting the most important trends that should be guiding open source developers and entrepreneurs. To illustrate his points, he offered up great examples from companies as diverse as Google, Square, Wikipedia, and O’Reilly Media.
Read more…

What’s the Risk of Account Takeover?

Early detection and prevention

Online payments and eCommerce have been targets for fraud ever since their inception. The availability of real monetary value, coupled with the ability to scale an attack online, attracted many to fraud as a way to make a quick buck. At first, fraudsters used stolen credit card details to make purchases online. As services became more widely used, a newer, sometimes easier alternative emerged: account takeover.

Account takeover (ATO) occurs when one user guesses, or is given, the credentials to another user's value-storing account. This can be your online wallet, but it can also be your social networking profile or gaming account. The perpetrator is often someone you don't know, but it can just as easily be your kid using an account you didn't log out of. All of these fall under various flavors of ATO, and they are much easier than stealing someone's identity: all that's needed is to guess or phish a user's credentials, and you're rewarded with all the value they've created through their activity.

Read more…

Passionate Programmers, Bitcoin APIs, WebOps Risk Management, and @WarrenBuffet

Weekly highlights and insights: May 27-31

A Commencement Speech for 2013 CS Majors: Being passionate about software is critical to success.

Bitcoin as a money platform: Andreas Antonopoulos outlines Bitcoin network’s three distinct APIs.

What Is the Risk That Amazon Will Go Down (Again)?: Johan Bergström explores WebOps risk management.

Where’s @WarrenBuffet?: Social media pull does not equate to a sound investment strategy.

What Is the Risk That Amazon Will Go Down (Again)?

Velocity 2013 Speaker Series

Why should we bother at all with notions such as risk and safety in web operations? Do web operations face risk? Do web operations manage risk? Do web operations produce risk? Last Christmas Eve, Amazon had an AWS outage affecting a variety of actors, including Netflix, a service included in many of the gifts shared on that very day. The event introduced the notion of risk into the discourse of web operations, so it might be good timing for some reflective thoughts on the very nature of risk in this domain.

What is risk? The question is a classic one, and the answer is tightly coupled to how one views the nature of the incident occurring as a result of the risk.

One approach to assessing the risk of Amazon going down is probabilistic: start by laying out the entire space of potential scenarios leading to Amazon going down, calculate the probability of each, and multiply the probability of each scenario by its estimated severity (likely in terms of the costs connected to that specific scenario, which depend on the time of the event). Each scenario can then be plotted in a risk matrix to show its weighted ranking (to prioritize future risk mitigation measures), or the risks of all scenarios can be summed (to judge whether the risk of Amazon going down is below a certain acceptance criterion).
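To make that arithmetic concrete, here is a minimal sketch of the probability-times-severity calculation in Python. The scenario names, probabilities, cost figures, and acceptance threshold are all hypothetical, invented purely for illustration; the post itself gives no numbers.

```python
# Probabilistic risk assessment sketch: risk = probability x severity per scenario.
# All scenarios, probabilities, and costs below are hypothetical illustrations.

scenarios = [
    # (name, annual probability, estimated cost in dollars if it happens)
    ("single data center outage", 0.10, 2_000_000),
    ("region-wide outage on a peak day", 0.01, 50_000_000),
    ("global control-plane failure", 0.001, 500_000_000),
]

# Weighted risk per scenario (expected annual cost), used to rank mitigations.
ranked = sorted(
    ((name, p * cost) for name, p, cost in scenarios),
    key=lambda item: item[1],
    reverse=True,
)
for name, expected_cost in ranked:
    print(f"{name}: expected annual cost ${expected_cost:,.0f}")

# Collective sum across all scenarios, compared against an acceptance criterion.
total_risk = sum(p * cost for _, p, cost in scenarios)
acceptance_criterion = 1_500_000  # hypothetical threshold, dollars per year
print(f"total expected annual cost: ${total_risk:,.0f}")
print("acceptable" if total_risk <= acceptance_criterion else "above acceptance criterion")
```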

This first way of answering the question of what the risk is of Amazon going down is intimately linked with a perception of risk as energy to be kept contained (Haddon, 1980). This view originates from the era of rapidly developing process industries, in which clearly graspable energies (fuel rods at nuclear plants, the fossil fuels at refineries, the kinetic energy of an aircraft) are to be kept contained and safely separated from a vulnerable target such as a human being. The next important question then becomes how to avoid an uncontrolled release of the contained energy. The strategies for mitigating the risk of an uncontrolled release of energy are basically two: barriers and redundancy (and the two combined: redundancy of barriers). Physically graspable energies can be contained through the use of multiple barriers (called “defenses in depth”) and potentially several barriers of the same kind (redundancy), for instance several emergency-cooling systems at a nuclear plant.

Using this metaphor, the risk of Amazon going down is mitigated by building a system of redundant barriers (several server centers, backup, active fire extinguishing, etc.). This might seem like a tidy solution, but here we run into two problems with this probabilistic approach to risk: the view of the human operating the system and the increased complexity that comes as a result of introducing more and more barriers.
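As a rough illustration of why redundant barriers look so attractive in this model, here is a small sketch that computes the probability of every barrier failing at once, under the strong (and often unrealistic) assumption that barrier failures are independent. The barrier names and failure probabilities are hypothetical.

```python
# Redundant-barrier sketch: in the energy/barrier model, the system only
# "goes down" if every barrier fails. Assuming independent failures (a strong
# assumption the rest of the post pushes back on), the combined failure
# probability is the product of the individual ones.
# All numbers below are hypothetical illustrations.

from math import prod

barrier_failure_probabilities = {
    "primary data center": 0.05,
    "secondary data center": 0.05,
    "off-site backup": 0.02,
}

p_total_failure = prod(barrier_failure_probabilities.values())
print(f"probability all barriers fail together: {p_total_failure:.6f}")  # 0.000050

# The catch: real barriers share dependencies (power, DNS, deployment tooling,
# the humans operating them), so the independence assumption understates risk.
```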

Controlling risk by analyzing the complete space of possible (and graspable) scenarios basically does not distinguish between safety and reliability. From this view, a system is safe when it is reliable, and the reliability of each barrier can be calculated. However, there is one system component that is more difficult to grasp in terms of reliability than any other: the human. Inevitably, proponents of the energy/barrier model of risk end up explaining incidents (typically accidents) in terms of unreliable human beings failing to guarantee the safety (reliability) of an inherently safe (risk controlled by reliable barriers) system. I think this problem, which has an entire literature of its own, is too big to outline in further detail in this blog post, but let me point you toward a few references: Dekker, 2005; Dekker, 2006; Woods, Dekker, Cook, Johannesen & Sarter, 2009. The only issue is that these (and most other citations in this post) are all academic tomes, so for those who would prefer a shorter summary available online, I can refer you to this report. I can also reassure you that I will get back to this issue in my keynote at the Velocity conference next month. To put the critique briefly: the contemporary literature questions the view of humans as the unreliable component of inherently safe systems, and instead advocates a view of humans as the only ones guaranteeing safety in inherently complex and risky environments.
Read more…