All trust is misplaced. And that's probably the way it should be.
In the wake of Heartbleed, there’s been a chorus of “you can’t trust open source! We knew it all along.”
It’s amazing how short memories are. They’ve already forgotten Apple’s GOTO FAIL bug, and their sloppy rollout of patches. They’ve also evidently forgotten weaknesses intentionally inserted into commercial security products at the request of certain government agencies. It may be more excusable that they’ve forgotten hundreds, if not thousands, of Microsoft vulnerabilities over the years, many of which continue to do significant harm.
Yes, we should all be a bit spooked by Heartbleed. I would be the last person to argue that open source software is flawless. As Eric Raymond said, “Given enough eyeballs, all bugs are shallow,” and Heartbleed was certainly shallow enough, once those eyeballs saw it. Shallow, but hardly inconsequential. And even enough eyes can have trouble finding bugs in a rat’s nest of poorly maintained code. The Core Infrastructure Initiative, which promises to provide better funding (and better scrutiny) for mission-critical projects such as OpenSSL, is a step forward, but it’s not a magic bullet that will make vulnerabilities go away.
Hacking lab equipment to make it programmable is a good first step toward lab automation.
In the new issue of BioCoder, Peter Sand writes about “Hacking Lab Equipment.” It’s well worth a read: it gives a number of hints about how standard equipment can be modified so that it can be controlled by a program. This is an important trend I’ve been watching on a number of levels, from fully robotic labs to much more modest proposals, like Sand’s, that extend programmability even to hacker spaces and home labs.
In talking to biologists, I’m surprised at how little automation there is in research labs. Automation in industrial labs, the sort that processes thousands of blood and urine samples per hour, yes: that exists. But in research labs, undergrads, grad students, and post-docs spend countless hours moving microscopic amounts of liquid from one place to another. Why? It’s not science; it’s just moving stuff around. What a waste of mental energy and creativity.
Lab automation, though, isn’t just about replacing countless hours of tedium with opportunities for creative thought. I once talked to a system administrator who wrote a script for everything, even a simple one-liner. (Might have been @yesthattom; I don’t remember.) This practice is based on an important insight: writing a script documents exactly what you did. You don’t have to wonder, “did I add the -f option to that rm -r command?”; you can just look. If you need to do the same thing on another system, you can reproduce what you did exactly.
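The same insight carries over to the lab: even a trivial cleanup task, once written as a script, becomes its own documentation and can be replayed anywhere. Here’s a minimal Python sketch of the idea; the scratch-directory setup is invented purely for illustration:

```python
#!/usr/bin/env python3
"""Even a one-liner is worth scripting: the script *is* the record of what you did.
(Illustrative sketch; the directory name and file are made up.)"""
import os
import shutil
import tempfile

def clean_scratch(path):
    """Remove a scratch directory, recording exactly what was done.

    The equivalent of `rm -rf path`, but the exact command, flags, and
    target are written down here, not held in someone's memory.
    """
    shutil.rmtree(path, ignore_errors=False)

if __name__ == "__main__":
    # Set up a throwaway directory so the script is runnable end to end.
    scratch = tempfile.mkdtemp(prefix="scratch-")
    open(os.path.join(scratch, "old.dat"), "w").close()
    clean_scratch(scratch)
    print("removed", scratch)
```

Run it twice on two machines and you know, rather than hope, that the same thing happened both times.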
Vendors, take note: we will not build the Internet of Things without open standards.
In a couple of posts and articles, we’ve nibbled around the notion of standards, interoperability, and the Internet of Things (or the Internet of Everything, or the Industrial Internet, or whatever you want to call it). It’s time to say it loud and clear: we won’t build the Internet of Things without open standards.
What’s important about the IoT typically isn’t what any single device can do. The magic happens when multiple devices start interacting with each other. Nicholas Negroponte rightly criticizes the flood of boring Internet-enabled devices: an oven that can be controlled by your phone, a washing machine that texts you when it’s done, and so on. An oven gets interesting when it detects the chicken you put in it, and sets itself accordingly. A washing machine gets interesting if it can detect the clothes you’re putting into it and automatically determine what cycle to run. That requires standards for how the washer communicates with the washed. It’s meaningless if every clothing manufacturer implements a different, proprietary standard for NFC-enabled tags.
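To make the point concrete, here’s a sketch of what a vendor-neutral garment tag might carry. Everything here is hypothetical; the field names and schema are not any real standard, just an illustration of why one shared schema beats N proprietary ones:

```python
import json

# A hypothetical, vendor-neutral payload that an NFC tag sewn into a
# garment might carry. Any washer that understands the shared schema
# can act on it, regardless of who made the clothes or the machine.
GARMENT_TAG = json.dumps({
    "fabric": "wool",
    "max_temp_c": 30,
    "cycle": "delicate",
    "spin_rpm": 600,
})

def choose_cycle(tag_json):
    """Pick a wash cycle from the tag's declared settings."""
    tag = json.loads(tag_json)
    return (tag["cycle"], tag["max_temp_c"])

print(choose_cycle(GARMENT_TAG))  # → ('delicate', 30)
```

With a proprietary tag per clothing vendor, `choose_cycle` would instead need one parser per brand, and a new brand would mean a washer firmware update.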
We’re already seeing this in lighting: there are several manufacturers of smart network-enabled light bulbs, but as far as I can tell, each one is controlled by a vendor-specific app. And I can think of nothing worse for the future of home lighting than having to remember whether the lights in the bedroom were made by Sylvania or Philips before I can turn them off.
There's good reason to believe nature has clues about how to do a good job — can it also help with web designs?
A couple of years ago, I visited the World Science Festival in New York and saw Festo’s robotic bird. It was amazing. I’ve seen things that looked more or less like a bird, and that flew, but clearly weren’t flying like a bird. An airplane has a body, has wings, and flies, but you wouldn’t mistake it for a bird. This was different: it looked like a giant seagull, with head and tail movements that were clearly modelled on a living bird’s.
Since then, Festo has built a robotic kangaroo; based on work they started in 2010, they have a robotic elephant’s trunk that learns, a robotic jellyfish, and no doubt many other animals that I haven’t yet seen.
Advances in biology and biotechnology are driving us in exciting new directions — be part of the revolution!
We’re excited about the third issue of BioCoder, O’Reilly’s newsletter about the revolution in biology and biotechnology. In the first article of our new issue, Ryan Bethencourt asks the question “What does Biotechnology Want?” Playing with Kevin Kelly’s ideas about how technological development drives human development, Bethencourt asks about the directions in which biotechnology is driving us. We’re looking for a new future with significant advances in agriculture, food, health, environmental protection, and more.
That future will be ours — if we choose to make it. Bethencourt’s argument (and Kelly’s) is that we can’t not choose to make it. Yes, there are plenty of obstacles: the limits to our understanding of biology and genetics, the inadequate tools we have for doing research, the research institutions themselves, and even fear of the future. We’ll overcome these obstacles; indeed, if Bethencourt is right, and biology is our destiny, we have no choice but to overcome these obstacles. The only question is whether you’re part of the revolution or not.
In the future, we will solve biological problems by running experiments in parallel.
Perhaps the most ambitious project right now is Synbiota’s #ScienceHack. They are organizing a large number of volunteer groups to experiment with techniques to produce the compound Violacein. Violacein is potentially useful as an anti-cancer and anti-dysentery drug, but currently costs $356,000 per gram to produce. This price makes research (to say nothing of therapeutic use) impossible. However, it’s possible that bacteria can be genetically engineered to produce Violacein much more efficiently and cheaply. That’s what the #ScienceHack experiment is about: the groups will be trying to design DNA that can be inserted into E. coli bacteria to make them produce Violacein at a fraction of the cost.
Developers who understand the whole stack are going to build better applications.
Since Facebook’s Carlos Bueno wrote the canonical article about the full stack, there has been no shortage of posts trying to define it. For a time, Facebook allegedly only hired “full-stack developers.” That probably wasn’t quite true, even if they thought it was. And some posts really push “full-stack” developer into Unicorn territory: Laurence Gellert writes that it “goes beyond being a senior engineer,” and details everything he thinks a full-stack developer should be familiar with, most of which doesn’t involve coding.
Natural bioterrorism might be the bigger threat, and the value of citizens educated in biosciences can't be overstated.
You don’t get very far discussing synthetic biology and biohacking before someone asks about bioterrorism. So, let’s meet the monster head-on.
I won’t downplay the possibility of a bioterror attack. It’s already happened. The anthrax-contaminated letters that were sent to political figures just after 9/11 were certainly an instance of bioterrorism. Fortunately (for everyone but the victims), they only resulted in five deaths, not thousands. Since then, there have been a few “copycat” crimes, though using a harmless white powder rather than anthrax spores.
While I see bioterror in the future as a certainty, I don’t believe it will come from a hackerspace. The 2001 attacks are instructive: the spores were traced to a U.S. biodefense laboratory. Whether or not you believe Bruce Ivins, the lead suspect, was guilty, it’s clear that the anthrax spores were developed by professionals and could not have been developed outside of a professional setting. That’s what I expect for future attacks: the biological materials, whether spores, viruses, or bacteria, will come from a research laboratory, produced with government funding. Whether they’re stolen from a U.S. lab or produced overseas: take your pick. They won’t come from the hackerspace down the street.
Ignore the hype. Learn to be a data skeptic.
Yawn. Yet another article trashing “big data,” this time an op-ed in the Times. This one is better than most, and ends with the truism that data isn’t a silver bullet. It certainly isn’t.
I’ll spare you all the links (most of which are much less insightful than the Times piece), but the backlash against “big data” is clearly in full swing. I wrote about this more than a year ago, in my piece on data skepticism: data is heading into the trough of a hype curve, driven by overly aggressive marketing, promises that can’t be kept, and spurious claims that, if you have enough data, correlation is as good as causation. It isn’t; it never was; it never will be. The paradox of data is that the more data you have, the more spurious correlations will show up. Good data scientists understand that. Poor ones don’t.
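That paradox is easy to demonstrate: generate data that is pure noise, and the number of “significant-looking” correlations grows with the number of variables, even though there is no real signal at all. A small Python simulation (the thresholds and sample sizes are chosen arbitrarily for illustration):

```python
import random
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def spurious_pairs(n_vars, n_obs, threshold=0.5, seed=1):
    """Count variable pairs whose |r| exceeds threshold -- in data that
    is independent Gaussian noise, so every such "signal" is spurious."""
    rng = random.Random(seed)
    data = [[rng.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]
    return sum(
        1
        for i in range(n_vars)
        for j in range(i + 1, n_vars)
        if abs(pearson(data[i], data[j])) > threshold
    )

# More variables -> more pairs -> more spurious correlations, from noise alone.
for n in (10, 50, 100):
    print(n, "variables:", spurious_pairs(n, 20), "spurious pairs")
```

With 100 variables there are 4,950 pairs to test, so even a small false-positive rate per pair produces a pile of convincing-looking correlations. That’s the trap a good data scientist is trained to see.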
It’s very easy to say that “big data is dead” while you’re using Google Maps to navigate downtown Boston. It’s easy to say that “big data is dead” while Google Now or Siri is telling you that you need to leave 20 minutes early for an appointment because of traffic. And it’s easy to say that “big data is dead” while you’re using Google, or Bing, or DuckDuckGo to find material to help you write an article claiming that big data is dead.
Current wearable computing technology is just scratching the surface — the really interesting tech has yet to be invented.
In an interview at SXSW, Google’s Sundar Pichai said something about wearables that I’ve been waiting to hear. Wearables aren’t about Google Glass; they aren’t about smart watches; they’re much, much more, and these technologies are only scratching the surface.
I’ve tweaked Apple a couple of times for their inability to deliver a watch, despite years of leaks and rumors. I suspect that products from competitors have forced them to pivot a few times, rethinking and delaying their product. But the bottom line is that I don’t care; I don’t wear a watch, haven’t for a long time, and I’m not about to start. Just not interested.
I’m more interested in Glass, but I’ve been amazed at how few people are listening to what Google has said about it: it’s an experiment. It’s not the endpoint, not the product. Given the excitement it has produced, Google would be foolish not to sell it. But really: it’s ugly, it’s a prototype, it’s a mockup. Five years from now, will we all be walking around with Google Glass hanging from designer frames? I doubt it. And I bet Pichai, Brin, and Page doubt it, too. It’s an experiment; it will show us what’s interesting, and point toward what to build next. It’s not the end result.