The risk relative to the savings isn’t enough to justify a shift to public cloud.
This post was originally published on Limn This. The lightly edited version that follows is republished with permission.
Last October, Simon Wardley and I stood on a rainy sidewalk at 28th St. in New York City arguing politely (he’s British) about the future of cloud adoption. He argued, rightly, that the cost advantages from scale would be overwhelming compared to home-brew private clouds. He went on to argue, less certainly in my view, that this would lead inevitably to their wholesale and deep adoption across the enterprise market.
I think Simon bases his argument on something like the rational economic man theory of the enterprise. Or, more specifically, the rational economic chief financial officer (CFO). If the costs of a service provider are destined to be lower than the costs of internally operated alternatives, and your CFO is rational (most tend to be), then the conclusion is foregone.
And, of course, costs are going down just as they are predicted to. Look at this post by Avi Deitcher: Does Amazon’s Web Services Pricing Follow Moore’s Law? I think the question posed in the title has a fairly obvious answer: no. Services aren’t just silicon; they include all manner of linear terms, like labor, so price decreases will almost certainly be slower than Moore’s Law. But his analysis of the costs of a modestly sized AWS solution and its in-house competition is really useful.
Not only is AWS’ price dropping fast (56% in three years), but it’s significantly cheaper than building and operating a platform in house. Avi does the math for 600 instances over three years and finds that the cost for AWS would be $1.1 million (I don’t think this number considers out-year price decreases) versus $2.3 million for DIY. Your mileage might vary, but these numbers are a nice starting point for further discussion.
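If it helps to see the arithmetic, here is a quick sketch of those figures. The dollar totals come straight from Avi’s post as quoted above; the Moore’s Law comparison assumes a price halving every two years, which is my assumption for illustration, not his.

```python
# Back-of-the-envelope arithmetic using the figures cited above. The dollar
# totals are Avi Deitcher's numbers as quoted, not independently re-derived.
aws_total = 1.1e6   # 3-year AWS cost for ~600 instances
diy_total = 2.3e6   # 3-year do-it-yourself cost

premium = diy_total - aws_total
print(f"DIY premium: ${premium / 1e6:.1f}M ({premium / diy_total:.0%} more than AWS)")

# A 56% price drop over three years, annualized:
aws_annual_decline = 1 - (1 - 0.56) ** (1 / 3)
print(f"Implied annual AWS price decline: {aws_annual_decline:.1%}")

# Versus a Moore's-Law-style halving every two years:
moore_annual_decline = 1 - 0.5 ** (1 / 2)
print(f"Annual decline if prices halved every two years: {moore_annual_decline:.1%}")
```

Run as written, it shows the quoted 56% three-year drop works out to roughly a 24% annual decline, slower than the ~29% a two-year halving would imply, which is consistent with the point about linear cost terms like labor.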
The collision of software and hardware has broken down the barriers between the digital and physical worlds.
Note: this post is a slightly hydrated version of my Solid keynote. To get it out in 10 minutes, I had to remove a few ideas and streamline it a bit for oral delivery; this is the full version.
In 1995, Nicholas Negroponte told us to forget about the atoms and focus on the bits. I think “being digital” was probably an intentional overstatement, a provocation to shove our thinking off its metastable emphasis on the physical, to open us up to the power of the purely digital. And maybe it worked too well, because a lot of us spent two decades plumbing every possibility of digital-only technologies and digital-only businesses.
By then, technology had bifurcated into two streams of hardware and software that rarely converged outside of the data center, and for most of us, unless we were with a firm the size of Sony, with a huge addressable market, hardware was simply outside the scope of our entrepreneurial ambitions. It was our platform, but rarely our product. The physical world was for other people to worry about. We had become the engineers of the ephemeral, the plastic, and the immaterial. And in the depth of our immersion into the virtual and digital, we became, it seems, citizens of Weblandia (and congregants of the Church of Disruption).
But pendulums always swing back. Read more…
When the mining subsidies end, will the bitcoin network centralize into a bank?
Radar has a backchannel, and sometimes we have interesting conversations on it. Mike Loukides and I recently had a long chat about bitcoin. Both of us were thinking out loud and learning as we went along, and on re-reading the thread I’m astonished by our advanced level of ignorance. I would like to publish it because it hints at just how hard it is to understand the bitcoin network. The founding papers that describe the system leave a lot of implementation to the imagination, and the level of mis(dis?)information around the web is staggering. It’s no small thing to get the basics right. But beyond the basics, the bitcoin network has that property of an inside-out onion, where the harder you look, the more (and bigger slices of) complexity you find.
Anyway, we’re not going to publish it. I don’t mind looking stupid, but I don’t want to look that stupid — also, the comments would be torture.
However, some of the things we were wondering about are worth wondering about publicly. Especially this: what happens when the mining subsidies end? Will transaction fees pick up the slack? I think ultimately the answer is yes, but maybe not in the way a lot of people expect. Read more…
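For context on when the subsidies actually end: the protocol’s schedule is public (a 50 BTC initial reward, halved every 210,000 blocks, with blocks arriving roughly every ten minutes), so the endpoint can be sketched in a few lines. This uses floating point rather than the integer satoshi arithmetic the reference client uses, so treat it as an approximation.

```python
# Sketch of bitcoin's block subsidy schedule. Protocol parameters:
# 50 BTC initial reward, halved every 210,000 blocks, ~10 minutes per block.
HALVING_INTERVAL = 210_000
BLOCKS_PER_YEAR = 6 * 24 * 365  # ~52,560 blocks at one per ten minutes
SATOSHI = 1e-8                  # smallest representable reward

subsidy = 50.0
total_issued = 0.0
eras = 0
while subsidy >= SATOSHI:       # the reward rounds to zero below one satoshi
    total_issued += subsidy * HALVING_INTERVAL
    subsidy /= 2
    eras += 1

print(f"Subsidy eras before the reward hits zero: {eras}")
print(f"Total coins ever issued: ~{total_issued / 1e6:.1f}M")
print(f"Rough span of subsidized mining: ~{eras * HALVING_INTERVAL / BLOCKS_PER_YEAR:.0f} years")
```

That works out to 33 subsidy eras, the familiar cap of roughly 21 million coins, and something like 130 years of subsidized mining counted from the 2009 genesis block. So “when the subsidies end” is really a question about a long taper, not a cliff.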
From the Internet to the Internet of Everything to just plain Everything.
I started writing this post to respond to the question: “What is the difference between machine-to-machine (M2M) and the Internet of Things (IoT)?” It turns out, a post answering that question isn’t really necessary. There is already a pretty good thread on Quora that answers it.
However, with the emphasis on the technologies at play, most of the answers on Quora left me a little flat. I guess it’s because, while they are correct, they tend to focus on the details and miss the big picture. They say things like, “M2M is the plumbing and IoT is the application,” or M2M is about SMS and general packet radio service (GPRS), while IoT is about the IP stack. Or, essentially, that M2M is freighted with telecom transport-layer heritage (baggage?), while the IoT is emerging out of the upper layers of the Internet’s IP stack — which would be great except for the fact that it’s not always true. Plenty of IoT devices operate with other-than-IP protocol stacks via gateways.
I think the distinction between M2M and IoT isn’t all that important with regard to the technology stacks they employ. What’s more interesting to me is that the change in language suggests a transition. It’s a signpost plunked down in the middle of an otherwise smooth continuum, where enough of us have noticed something happening to make a name for it. We used to argue about what Web 2.0 meant; now we argue about what IoT means. Regardless of what the term “Internet of Things” actually means, its growing use represents a conceptual point of departure from what came before. Something new is happening, and we are using different words to signify it. Read more…
The IoT isn't just a new attack surface to get into your enterprise — it's giving the Internet eyes and arms.
Your computer is important. It has access to your Amazon account, probably your bank, your tax returns, and maybe even your medical records. It’s scary when it gets pwned, and it gets pwned regularly because it’s essentially impossible to fully secure a general-purpose computing device. But the good news is that, at least for now, your computer can’t climb up the stairs and bludgeon you to death in your sleep. The things it manipulates are important to you, but they are (mostly) contained in the abstract virtual realm of money and likes.
The Internet of Things is different. We are embarking on an era where the things we own will be as vulnerable as our PCs, but now they interact with the real world via sensors and actuators. They have eyes and arms, and some of them in the not-too-distant future really will be able to climb the stairs and punch you in the face.
This piece from the New York Times has been getting some attention because it highlights how smart things represent an increased attack surface for infiltration. It views smart devices as springboards into an enterprise rather than the object of the attack, and that will certainly be true in many cases. Read more…
Jim Stogdill, Jon Bruner, and Mike Loukides chat about personalizing all the things.
This week in our Radar podcast, Jon and I both had colds. You’ll be pleased to know that I edited out all the sneezes, coughs, and general upper respiratory mayhem, but unfortunately there is no Audacity filter for a voice that sounds like a frog caught in a mouse trap (mine). If that hasn’t dissuaded you from listening, we covered some things that were really interesting, at least to us.
Here are some links to things you’ll hear in this episode:
BlackBerry’s salvation may reside in its QNX embedded systems division.
The Pennsylvania Railroad was an amazing technical organization in its heyday. Railroads were that time’s web, and Pennsylvania was its Google. It created a lot of the practices we still use today for testing and other technical disciplines. Also, I suppose if Atlas were to shrug today (shudder), John Galt would be a data center designer. Read more…
Are we finally seeing connected vehicles doing more with the connection than infotainment?
A few months ago, I rented a Toyota Prius and was driving it up the 101 when, predictably, I ran into a long stretch of mostly-stop-with-some-go traffic. I remember thinking at the time, “It’s too bad this thing didn’t see this traffic jam coming; it could have topped off the battery and I could be motoring through this bumper-to-bumper on much more efficient electric drive.” Instead, I entered the traffic with the battery relatively depleted and ended up running the engine a bunch even though I was only going 5 mph.
Then last week, I was getting my car worked on and saw this sign in the waiting room:
That was cool because it was the first time I had seen an auto manufacturer (in this case, Mini) using externally obtained data to actually improve how the car operated instead of using it for some lame in-dash “experience.” It got me thinking. Read more…
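The logic I was wishing for in that Prius is almost trivially simple to state; the hard part is the data feed. Here’s a toy sketch where `traffic_ahead_km`, the state-of-charge threshold, and the function name are all hypothetical, not anything Toyota or Mini actually exposes.

```python
# Toy decision rule: if a jam is close and the battery is below target,
# run the engine now to top off the charge so the jam can be crawled
# through on electric drive. All names and thresholds are hypothetical.
def should_precharge(traffic_ahead_km: float, battery_soc: float,
                     soc_target: float = 0.9, horizon_km: float = 10.0) -> bool:
    return traffic_ahead_km < horizon_km and battery_soc < soc_target

# Approaching a jam 4 km ahead with the battery at 35% charge:
print(should_precharge(traffic_ahead_km=4.0, battery_soc=0.35))  # True
```

A real controller would weigh the fuel cost of charging against the predicted time in traffic, but even this crude rule only works if the car sees the jam coming, which is exactly what the external data feed provides.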
Building great software on time is at the heart of more and more "hardware" projects.
I think what’s most interesting about this story is that when we look at an airplane we tend to see a physical thing. We see airfoils, materials, and hard sciences animated and airborne, but a growing proportion of the “thingness” of these machines is happening in software — software that makes it fly and software that connects it with all the other things on the battlefield, to share information and fight as one organism.
This airplane will require approximately eight million lines of code on board to run mission systems and flight sciences. I’m guessing the flight sciences code will be the same for the U.S. and its partner/buyers, but I’m not sure. Given that the aircraft is flying but not yet operational, one could hazard a guess that the flight sciences code is coming along faster than the mission systems, with all their complex real-time target fusion. And the mission code is the really interesting part. It’s what makes a single aircraft part of a bigger whole. It’s analogous to what makes the Nest more than just your typical thermostat, but much, much more. Read more…
A Twitter Q&A follow-up to my conversation with Tim O'Reilly.
Last week, Tim O’Reilly and I sat down in San Francisco and had a conversation about the collision of hardware and software. The fact that digital entrepreneurs see hardware as part of their available palette now is really interesting, as is the way many companies with traditional manufacturing roots are seeing digitization and software as key parts of their businesses in the near future. Software plus more malleable hardware is like a whole new medium for building products and services. We really are on the cusp of interesting times.
As our time wound down, questions were still coming in via Twitter. Since we couldn’t get to all of them during the time allotted, I thought I’d try to respond to a few more of them here. Read more…
Doug Hill, James Bessen and Jim Stogdill continue discussing the impact of automation.
Editor’s note: Doug Hill and I recently had a conversation here on Radar about the impact of automation on jobs. In one of our exchanges, Doug mentioned a piece by James Bessen. James reached out to me and was kind enough to provide a response. What follows is their exchange.
JAMES BESSEN: I agree, Doug, that we cannot dismiss the concerns that technology might cause massive unemployment just because technology did not do this in the past. However, in the past, people also predicted that machines would cause mass unemployment. It might be helpful to understand why this didn’t happen in the past and to ask if anything today is fundamentally different.
Many people make a simple argument: 1) they observe that machines can perform job tasks, and 2) they conclude that humans will therefore lose their jobs. I argued that this logic is too simple. During the 19th century, machines came to perform more than 98% of the labor needed to weave a yard of cloth. But the number of weavers actually grew, because the lower price of cloth increased demand. This is why, contrary to Marx, machines did not create mass unemployment. Read more…
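The arithmetic behind that reversal is worth making explicit. The 98% figure is from the exchange above; the demand response below is a made-up illustration of how employment can rise even while labor per unit collapses.

```python
# Illustrative numbers: machines take 98% of the labor per yard (a 50x
# productivity gain), but cheaper cloth expands demand 80x, so total
# weaving labor rises. The demand figures are hypothetical.
labor_per_yard_before = 1.0
labor_per_yard_after = 0.02      # 98% of the labor automated away

yards_demanded_before = 1_000    # hypothetical market size
yards_demanded_after = 80_000    # hypothetical response to cheaper cloth

jobs_before = labor_per_yard_before * yards_demanded_before
jobs_after = labor_per_yard_after * yards_demanded_after
print(f"Weaving labor: {jobs_before:.0f} units before, {jobs_after:.0f} after")
```

Employment in a trade grows whenever demand expands faster than labor per unit shrinks, and contracts once demand saturates, which is the question to ask about any technology automating work today.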