The 13 plays in the U.S. CIO’s Digital Services Playbook apply to everyone.
Whenever I hear someone say that “government should be run like a business,” my first reaction is “do you know how badly most businesses are run?” Seriously. I do not want my government to be run like a business, whether the business in question is a local restaurant that pops up and dies like a wildflower or a megacorporation that sells broken products, financial, automotive, or otherwise.
If you read some elements of the press, it’s easy to think that healthcare.gov is the first time that a website failed. And it’s easy to forget that a large non-government website was failing, in surprisingly similar ways, at roughly the same time. I’m talking about the Common App site, the site high school seniors use to apply to most colleges in the US. There were problems with pasting in essays, problems with accepting payments, problems with the app mysteriously hanging for hours, and more.
Synthetic biology surely can get weirder — but this is a great start.
If you’ve ever tried any of the various vegan cheese substitutes, you know they are (to put it kindly) awful. The missing ingredients in these products are the milk proteins, or caseins. And of course you can’t use real milk proteins in a vegan product.
But proteins are just organic compounds that are produced, in abundance, by any living cell. And synthetic biology is about engineering cell DNA to produce whatever proteins we want. That’s the central idea behind the Real Vegan Cheese project: can we design yeast to produce the caseins we need for cheese, without involving any animals? There’s no reason we can’t. Once we have the milk proteins, we can use traditional processes to make the cheese. No cows (or sheep, or goats) involved, just genetically modified yeast. And you never eat the yeast; they stay behind at the brewery.
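To make that concrete, here’s a toy sketch in Python of one early design step: back-translating a casein-like protein fragment into DNA built from codons that yeast uses frequently. The peptide fragment and the simplified codon table are illustrative inventions, not anything from the Real Vegan Cheese project itself.

```python
# Toy example: back-translate a protein sequence into DNA for expression in
# yeast, picking a commonly used yeast codon for each amino acid.
# The codon table is simplified and the peptide fragment is made up; a real
# design would weigh codon usage, secretion signals, and much more.

YEAST_CODONS = {
    "A": "GCT", "C": "TGT", "D": "GAT", "E": "GAA", "F": "TTT",
    "G": "GGT", "H": "CAT", "I": "ATT", "K": "AAA", "L": "TTG",
    "M": "ATG", "N": "AAT", "P": "CCA", "Q": "CAA", "R": "AGA",
    "S": "TCT", "T": "ACT", "V": "GTT", "W": "TGG", "Y": "TAT",
}

def back_translate(protein):
    """Map each amino acid to a yeast-friendly codon and append a stop codon."""
    return "".join(YEAST_CODONS[aa] for aa in protein) + "TAA"

fragment = "MKVLILACLVALALA"  # hypothetical casein-like fragment
print(back_translate(fragment))
```

Even this cartoon version hints at the real work: actual caseins are phosphorylated proteins, and getting yeast to produce and secrete them correctly is where the project gets hard.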
Joe Schloendorn is creating and distributing plasmids that can freely be reproduced — a huge breakthrough for DIY bio.
At O’Reilly, we’ve long been supporters of the open source movement — perhaps not with the religious fervor of some, but with a deep appreciation for how open source has transformed the computing industry over the last three decades.
We also have a deep appreciation for the dangers that closed source, restrictive licenses, patent trolling, and other technocratic evils pose to areas that are just opening up — biology, in particular. So it is with great interest that I read Open Source Biotech Consumables in the latest issue of BioCoder.
I’m not going to rehash the article; you should read it yourself. The basic argument is that some proteins used in research cost thousands of dollars per milligram. They’re easily reproducible (we’re talking DNA, after all), but frequently tied up with restrictive licenses. In addition, many of the vendors will only sell to research institutions and large corporations, not home labs or small community labs. So, Joe Schloendorn is creating and distributing plasmids that can freely be reproduced. That in itself is a huge breakthrough.
Inside this issue: implanting evolution, open source biotech consumables, power supplies for systems biology, and more.
We’ve made it to our fourth issue of BioCoder! I’m excited about this issue — it’s the best collection of articles we’ve published so far.
Some of the highlights are:
- Implanting Evolution: We spend a lot of time thinking about how to modify other creatures, from microbes on up. What about ourselves? Surgeons already implant pacemakers and insulin pumps into humans. What about other applications? What are the possibilities if you implant NFC and RFID chips?
- Open Source Biotech Consumables: One of the biggest problems for grassroots biotech research is the price of ingredients. Some proteins cost thousands of dollars per milligram, hardly affordable by a community lab or a small startup. We can solve that problem with “open source” DNA. This is an exciting development — and a challenge to what we mean by “open source” (I promise to write about that in another post).
If all companies are software companies, then all companies must learn to manage their online operations.
Two years ago, I wrote What is DevOps. Although that article was good for its time, our understanding of organizational behavior, and its relationship to the operation of complex systems, has grown.
A few themes have become apparent in the two years since that article. They were latent in it, I think, but now we’re in a position to call them out explicitly. It’s always easy to think of DevOps (or of any software industry paradigm) in terms of the tools you use; in particular, it’s very easy to think that if you use Chef or Puppet for automated configuration, Jenkins for continuous integration, and some cloud provider for on-demand server power, you’re doing DevOps. But DevOps isn’t about tools; it’s about culture, and it extends far beyond the cubicles of developers and operators. As Jeff Sussna says in Empathy: The Essence of DevOps:
…it’s not about making developers and sysadmins report to the same VP. It’s not about automating all your configuration procedures. It’s not about tipping up a Jenkins server, or running your applications in the cloud, or releasing your code on Github. It’s not even about letting your developers deploy their code to a PaaS. The true essence of DevOps is empathy.
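The tools still have their place; they’re just the easy part. For a sense of what the “automated configuration” those tools provide actually means, here’s a minimal sketch in Python of the core idea behind Puppet and Chef: declare a desired state and converge on it idempotently. The file path and contents are hypothetical.

```python
# A minimal sketch of the idea behind configuration tools like Puppet or Chef:
# declare the state you want, then converge on it idempotently, so running the
# same "recipe" twice is harmless. The managed file here is hypothetical.
import os

DESIRED_FILES = {
    "/tmp/demo/app.conf": "port = 8080\n",  # hypothetical managed file
}

def converge(desired):
    """Bring each file to its desired contents; do nothing if already there."""
    for path, contents in desired.items():
        os.makedirs(os.path.dirname(path), exist_ok=True)
        current = None
        if os.path.exists(path):
            with open(path) as f:
                current = f.read()
        if current == contents:
            print(path, "already in desired state")
        else:
            with open(path, "w") as f:
                f.write(contents)
            print(path, "converged")

converge(DESIRED_FILES)  # safe to run repeatedly; that's the point
```

Sussna’s point is that none of this mechanical work, however useful, adds up to DevOps on its own.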
Think your IT staff can protect you better than major cloud providers? Think again.
I just ran across Katie Fehrenbacher’s article in GigaOm that made a point I’ve been arguing (perhaps not strongly enough) for years. When you start talking to people about “the cloud,” you frequently run into a knee-jerk reaction: “Of course, the cloud isn’t secure.”
I have no idea what IT professionals who say stuff like this mean. Are they thinking about the stuff they post on Facebook? Or are they thinking about the data they’ve stored on Amazon? For me, the bottom line is: would I rather trust Amazon’s security staff, or would I rather trust some guy with some security cert that I’ve never heard of, but whom the HR department says is “qualified”?
All systems are distributed systems, and we’re starting to see how they fit into Velocity's themes.
From the beginning, the Velocity Conference has focused on web performance and operations — specifically, web operations. This focus has been fairly narrow: browser performance dominated the discussion of “web performance,” and interactions between developers and IT staff dominated operations.
That’s not to say that Velocity hasn’t looked at the rest of the application stack; there’s been an occasional glance in the direction of the database and an even more occasional glance at the middleware. But the database and middleware have, at least historically, played a bit part. And while the focus of Velocity has been front-end tuning, speakers like Baron Schwartz haven’t let us ignore the database entirely.
The tools in the Distributed Developer's Stack make development manageable in a highly distributed environment.
The shape of software development has changed radically in the last two decades. We’ve seen many changes: the Internet, the web, virtualization, and cloud computing. All of these changes point toward a fundamental new reality: all computing has become distributed computing. The age of standalone applications has disappeared, and applications that run on a single computer are almost inconceivable. Distributed is the default: whether an application runs on Amazon Web Services (AWS), on a private cloud, or even on a desktop or a mobile phone, it depends on the behavior of other systems and services that aren’t under the developer’s control.
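Here’s a small Python sketch of what that dependence means in practice: even the simplest call to another service needs a timeout and a retry policy, because the remote end’s behavior is out of your hands. The URL is just a placeholder.

```python
# A small sketch of "distributed by default": even a trivial call to another
# service needs timeouts and retries, because that service's availability is
# not under your control. The URL below is just a placeholder.
import time
import urllib.request

def fetch_with_retries(url, attempts=3, timeout=2.0):
    """Fetch a URL, backing off and retrying when the remote end misbehaves."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError:               # connection errors and timeouts alike
            if attempt == attempts:
                raise                 # out of retries; let the caller decide
            time.sleep(2 ** attempt)  # exponential backoff before retrying

data = fetch_with_retries("https://example.com/")
```

Multiply that defensive boilerplate by every service an application touches, and the need for better tooling becomes obvious.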
In the past few years, a new toolset has grown up to support the development of massively distributed applications. We call this new toolset the Distributed Developer’s Stack (DDS). It is orthogonal to the more traditional world of servers, frameworks, and operating systems; it isn’t a replacement for the aged LAMP stack, but a set of tools to make development manageable in a highly distributed environment.
The DDS is more of a meta-stack than a “stack” in the traditional sense. It’s not prescriptive; we don’t care whether you use AWS or OpenStack, whether you use Git or Mercurial. We do care that you develop for the cloud, and that you use a distributed version control system. The DDS is about the requirements for working effectively in the second decade of the 21st century. The specific tools have evolved, and will continue to evolve, and we expect you to evolve, too.
Self-driving cars will make decisions — and act — faster than humans facing the same dangerous situations.
Frankly, I’m already tired of the discussion. It’s not as if humans don’t already get into situations like this and make (or fail to make) decisions. At least, I have.
All trust is misplaced. And that's probably the way it should be.
In the wake of Heartbleed, there’s been a chorus of “you can’t trust open source! We knew it all along.”
It’s amazing how short memories are. They’ve already forgotten Apple’s GOTO FAIL bug, and their sloppy rollout of patches. They’ve also evidently forgotten weaknesses intentionally inserted into commercial security products at the request of certain government agencies. It may be more excusable that they’ve forgotten hundreds, if not thousands, of Microsoft vulnerabilities over the years, many of which continue to do significant harm.
Yes, we should all be a bit spooked by Heartbleed. I would be the last person to argue that open source software is flawless. As Eric Raymond said, “Given enough eyeballs, all bugs are shallow,” and Heartbleed was certainly shallow enough, once those eyeballs saw it. Shallow, but hardly inconsequential. And even enough eyeballs can have trouble finding bugs in a rat’s nest of poorly maintained code. The Core Infrastructure Initiative, which promises to provide better funding (and better scrutiny) for mission-critical projects such as OpenSSL, is a step forward, but it’s not a magic bullet that will make vulnerabilities go away.