Mike Loukides

Mike Loukides is Vice President of Content Strategy for O'Reilly Media, Inc. He's edited many highly regarded books on technical subjects that don't involve Windows programming. He's particularly interested in programming languages, Unix and what passes for Unix these days, and system and network administration. Mike is the author of "System Performance Tuning" and a coauthor of "Unix Power Tools." Most recently, he's been fooling around with data and data analysis, languages like R, Mathematica, and Octave, and thinking about how to make books social.

Designing real vegan cheese

Synthetic biology surely can get weirder — but this is a great start.

Real Vegan Cheese project screenshot.

I don’t think I will ever get tired of quoting Drew Endy’s “keep synthetic biology weird.” One of my favorite articles in the new issue of BioCoder is on the Real Vegan Cheese project.

If you’ve ever tried any of the various vegan cheese substitutes, you know they’re (to put it kindly) awful. The missing ingredient in these products is the milk proteins, or caseins. And of course, you can’t use real milk proteins in a vegan product.

But proteins are just organic compounds that are produced, in abundance, by any living cell. And synthetic biology is about engineering cell DNA to produce whatever proteins we want. That’s the central idea behind the Real Vegan Cheese project: can we design yeast to produce the caseins we need for cheese, without involving any animals? There’s no reason we can’t. Once we have the milk proteins, we can use traditional processes to make the cheese. No cows (or sheep, or goats) involved, just genetically modified yeast. And you never eat the yeast; they stay behind at the brewery.
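
The DNA-to-protein step the project depends on is easy to sketch in silico. Below is a minimal Biopython example of transcription and translation; the coding sequence is a short illustrative placeholder, not the actual casein gene.

    # Central dogma in miniature: DNA -> mRNA -> protein.
    # The sequence is an illustrative placeholder, not a real casein gene.
    from Bio.Seq import Seq

    coding_dna = Seq("ATGAAACTGCTGATCCTGACCTGCCTGGTG")
    mrna = coding_dna.transcribe()   # swap T for U
    protein = mrna.translate()       # read codons into amino acids
    print(mrna)      # AUGAAACUGCUGAUCCUGACCUGCCUGGUG
    print(protein)   # MKLLILTCLV

Getting yeast to do this at scale, and to fold and secrete the protein correctly, is the hard part the project is actually tackling.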

Read more…

Comments: 5

Open source biology

John Schloendorn is creating and distributing plasmids that can freely be reproduced — a huge breakthrough for DIY bio.

Photo by mira66 on Flickr, used under a Creative Commons license.

At O’Reilly, we’ve long been supporters of the open source movement — perhaps not with the religious fervor of some, but with a deep appreciation for how open source has transformed the computing industry over the last three decades.

We also have a deep appreciation for the dangers that closed source, restrictive licenses, patent trolling, and other technocratic evils pose to areas that are just opening up — biology, in particular. So it is with great interest that I read “Open Source Biotech Consumables” in the latest issue of BioCoder.

I’m not going to rehash the article; you should read it yourself. The basic argument is that some proteins used in research cost thousands of dollars per milligram. They’re easily reproducible (we’re talking DNA, after all), but frequently tied up with restrictive licenses. In addition, many of the vendors will only sell to research institutions and large corporations, not to home labs or small community labs. So, John Schloendorn is creating and distributing plasmids that can freely be reproduced. That in itself is a huge breakthrough.

Read more…

Comments: 4

Announcing BioCoder issue 4

Inside this issue: implanting evolution, open source biotech consumables, power supplies for systems biology, and more.

BioCoder issue 4 cover.

The Summer 2014 edition of BioCoder is now available for free download.

We’ve made it to our fourth issue of BioCoder! I’m excited about this issue — it’s the best collection of articles we’ve published so far.

Some of the highlights are:

Implanting Evolution:
We spend a lot of time thinking about how to modify other creatures, from microbes on up. What about ourselves? Surgeons already implant pacemakers and insulin pumps into humans. What about other applications? What are the possibilities if you implant NFC and RFID chips?
Open Source Biotech Consumables:
One of the biggest problems for grassroots biotech research is the price of ingredients. Some proteins cost thousands of dollars per milligram, hardly affordable by a community lab or a small startup. We can solve that problem with “open source” DNA. This is an exciting development — and a challenge to what we mean by “open source” (I promise to write about that in another post).

Read more…

Comment

Revisiting “What is DevOps”

If all companies are software companies, then all companies must learn to manage their online operations.


Two years ago, I wrote “What is DevOps.” Although that article was good for its time, our understanding of organizational behavior, and its relationship to the operation of complex systems, has grown.

A few themes have become apparent in the two years since that article. They were latent in it, I think, but now we’re in a position to call them out explicitly. It’s always easy to think of DevOps (or of any software industry paradigm) in terms of the tools you use; in particular, it’s very easy to think that if you use Chef or Puppet for automated configuration, Jenkins for continuous integration, and some cloud provider for on-demand server power, then you’re doing DevOps. But DevOps isn’t about tools; it’s about culture, and it extends far beyond the cubicles of developers and operators. As Jeff Sussna says in “Empathy: The Essence of DevOps”:

…it’s not about making developers and sysadmins report to the same VP. It’s not about automating all your configuration procedures. It’s not about tipping up a Jenkins server, or running your applications in the cloud, or releasing your code on Github. It’s not even about letting your developers deploy their code to a PaaS. The true essence of DevOps is empathy.

Read more…

Comments: 4

Cloud security is not an oxymoron

Think your IT staff can protect you better than major cloud providers? Think again.

I just ran across Katie Fehrenbacher’s article in GigaOm that made a point I’ve been arguing (perhaps not strongly enough) for years. When you start talking to people about “the cloud,” you frequently run into a knee-jerk reaction: “Of course, the cloud isn’t secure.”

I have no idea what IT professionals who say stuff like this mean. Are they thinking about the stuff they post on Facebook? Or are they thinking about the data they’ve stored on Amazon? For me, the bottom line is: would I rather trust Amazon’s security staff, or would I rather trust some guy with a security cert I’ve never heard of, but who, the HR department assures me, is “qualified”?

Read more…

Comments: 7

From the network interface to the database

All systems are distributed systems, and we’re starting to see how they fit into Velocity's themes.

Laser lighting. Photo by Ian Barbour.

From the beginning, the Velocity Conference has focused on web performance and operations — specifically, web operations. This focus has been fairly narrow: browser performance dominated the discussion of “web performance,” and interactions between developers and IT staff dominated operations.

These limits weren’t bad. Perceived performance really is dominated by the browser — how fast you can get resources (HTML, images, CSS files, JavaScript libraries) over the network to the browser, and how fast the browser can execute those resources. How long before a user stops waiting for your page to load and clicks away? How do you make a page usable as quickly as possible, even before all the resources have loaded? Those discussions were groundbreaking and surprising: users are incredibly sensitive to page speed.
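
You can get a crude feel for the network half of that equation without a browser at all. The sketch below just times raw downloads of a page and a couple of its resources; the URLs are placeholders, and it deliberately ignores parsing and execution time, which a real browser-side audit would have to include.

    # Rough network-transfer timing only; no parsing, no JavaScript
    # execution, so this is a lower bound on what a real browser does.
    import time
    import requests

    def fetch_seconds(url):
        """Return seconds spent downloading the full response body."""
        start = time.perf_counter()
        requests.get(url, timeout=10).raise_for_status()
        return time.perf_counter() - start

    # Placeholder URLs; substitute a real page and its resources.
    for url in ["https://example.com/",
                "https://example.com/style.css",
                "https://example.com/app.js"]:
        try:
            print("%s: %.3fs" % (url, fetch_seconds(url)))
        except requests.RequestException as err:
            print("%s: failed (%s)" % (url, err))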

That’s not to say that Velocity hasn’t looked at the rest of the application stack; there’s been an occasional glance in the direction of the database and an even more occasional glance at the middleware. But the database and middleware have, at least historically, played a bit part. And while the focus of Velocity has been front-end tuning, speakers like Baron Schwartz haven’t let us ignore the database entirely.

Read more…

Comment: 1

Beyond the stack

The tools in the Distributed Developer's Stack make development manageable in a highly distributed environment.

Cairn at Garvera, Surselva, Graubuenden, Switzerland.

The shape of software development has changed radically in the last two decades. We’ve seen many changes: the Internet, the web, virtualization, and cloud computing. All of these changes point toward a fundamental new reality: all computing has become distributed computing. The age of standalone applications is over, and applications that run on a single computer are almost inconceivable. Distributed is the default; whether an application runs on Amazon Web Services (AWS), on a private cloud, or even on a desktop or a mobile phone, it depends on the behavior of other systems and services that aren’t under the developer’s control.

In the past few years, a new toolset has grown up to support the development of massively distributed applications. We call this new toolset the Distributed Developer’s Stack (DDS). It is orthogonal to the more traditional world of servers, frameworks, and operating systems; it isn’t a replacement for the aged LAMP stack, but a set of tools to make development manageable in a highly distributed environment.

The DDS is more of a meta-stack than a “stack” in the traditional sense. It’s not prescriptive; we don’t care whether you use AWS or OpenStack, whether you use Git or Mercurial. We do care that you develop for the cloud, and that you use a distributed version control system. The DDS is about the requirements for working effectively in the second decade of the 21st century. The specific tools have evolved, and will continue to evolve, and we expect you to evolve, too.

Read more…

Comment

Life, death, and autonomous vehicles

Self-driving cars will make decisions — and act — faster than humans facing the same dangerous situations.

1966 Plymouth Fury III. Photo by Infrogmation, on Wikimedia Commons.

There’s a steadily increasing drumbeat of articles and tweets about the ethics of autonomous vehicles: if an autonomous vehicle is going to crash, should it kill the passenger in the left seat or the right seat? (I won’t say “driver’s seat,” though these sorts of articles usually do; there isn’t a driver.) Should the car crash into a school bus or run over an old lady on the side of the road?

Frankly, I’m already tired of the discussion. It’s not as if humans don’t already get into situations like this, and make (or fail to make) decisions. At least, I have.

Read more…

Comments: 7

Heartbleed’s lessons

All trust is misplaced. And that's probably the way it should be.

In the wake of Heartbleed, there’s been a chorus of “you can’t trust open source! We knew it all along.”

It’s amazing how short memories are. They’ve already forgotten Apple’s “goto fail” bug, and its sloppy rollout of patches. They’ve also evidently forgotten weaknesses intentionally inserted into commercial security products at the request of certain government agencies. It may be more excusable that they’ve forgotten the hundreds, if not thousands, of Microsoft vulnerabilities over the years, many of which continue to do significant harm.

Yes, we should all be a bit spooked by Heartbleed. I would be the last person to argue that open source software is flawless. As Eric Raymond said, “Given enough eyeballs, all bugs are shallow,” and Heartbleed was certainly shallow enough, once those eyeballs found it. Shallow, but hardly inconsequential. And even enough eyeballs can have trouble finding bugs in a rat’s nest of poorly maintained code. The Core Infrastructure Initiative, which promises to provide better funding (and better scrutiny) for mission-critical projects such as OpenSSL, is a step forward, but it’s not a magic bullet that will make vulnerabilities go away.

Read more…

Comments: 5

Robots in the lab

Hacking lab equipment to make it programmable is a good first step toward lab automation.


An automated centrifuge at Modular Science, which publishes instructions for hacking one yourself.

In the new issue of BioCoder, Peter Sand writes about “Hacking Lab Equipment.” It’s well worth a read: it gives a number of hints about how standard equipment can be modified so that it can be controlled by a program. This is an important trend I’ve been watching at a number of levels, from fully robotic labs to much more modest proposals, like Sand’s, that extend programmability even to hacker spaces and home labs.
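
To make “controlled by a program” concrete, here is a minimal sketch of what the software side of such a hack might look like, assuming a centrifuge whose front panel has been replaced by a microcontroller speaking a simple text protocol over USB serial. The port name and the SPIN/STOP commands are hypothetical, not anything from Sand’s article.

    # Hypothetical serial protocol for a hacked centrifuge; the port
    # name and the SPIN/STOP commands are illustrative, not a real API.
    import time
    import serial  # pyserial

    def spin(port, rpm, seconds):
        """Run one timed spin, leaving a record of exactly what was done."""
        with serial.Serial(port, baudrate=9600, timeout=2) as dev:
            dev.write(("SPIN %d\n" % rpm).encode())
            time.sleep(seconds)
            dev.write(b"STOP\n")
        print("spun at %d rpm for %ds on %s" % (rpm, seconds, port))

    spin("/dev/ttyUSB0", rpm=3000, seconds=120)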

In talking to biologists, I’m surprised at how little automation there is in research labs. Automation in industrial labs, the sort that process thousands of blood and urine samples per hour, yes: that exists. But in research labs, undergrads, grad students, and post-docs spend countless hours moving microscopic amounts of liquid from one place to another. Why? It’s not science; it’s just moving stuff around. What a waste of mental energy and creativity.

Lab automation, though, isn’t just about replacing countless hours of tedium with opportunities for creative thought. I once talked to a system administrator who wrote a script for everything, even a simple one-liner. (It might have been @yesthattom; I don’t remember.) This practice is based on an important insight: writing a script documents exactly what you did. You don’t have to wonder, “did I add the -f option to that rm -r command?”; you can just look. If you need to do the same thing on another system, you can reproduce what you did exactly.
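
The same habit translates directly to the bench. Here is a tiny sketch of the pattern: every command, even a one-liner, goes through a wrapper that records exactly what ran and when. The wrapper and log path are just illustrative names.

    # "Script everything": the log, not your memory, records what ran.
    import shlex
    import subprocess
    from datetime import datetime

    def run_logged(command, logfile="commands.log"):
        """Run a shell command and append it, timestamped, to a log."""
        with open(logfile, "a") as log:
            log.write("%s %s\n" % (datetime.now().isoformat(), command))
        subprocess.run(shlex.split(command), check=True)

    run_logged("ls -l /tmp")  # any one-liner; the log is the documentation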

Read more…

Comments: 3