Antha is a high-level, open source language for specifying biological workflows.
Editor’s note: This is part of our investigation into synthetic biology and bioengineering. For more, download the new BioCoder Fall 2014 issue here. In a couple of recent posts, I’ve written about the need for a high-level programming language for biology. Now we have one. Antha is a high-level, open source language for specifying biological workflows (i.e., describing experiments). It’s available on GitHub.
A programming language for scientific experiments is important for many reasons. Most simply, a scientist in training spends many, many hours learning how to do lab work. That sounds impressive, but it really means moving very small amounts of liquid from one place to another: thousands of times a day, thousands of days in preparation for a career. It’s dull but necessary work, and it’s exactly the kind of work that can be automated. Biologists should spend most of their time thinking about biology, designing experiments, and analyzing results — not handling liquids. Read more…
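To make “specifying a workflow” concrete, here is a hypothetical Python sketch — not Antha syntax (for that, see the GitHub repository). Every name in it is invented for illustration; the point is only that the scientist declares the liquid transfers, and a machine does the pipetting.

```python
# Hypothetical sketch (NOT Antha syntax) of a declarative
# liquid-handling workflow: describe what should move where,
# and leave execution to lab automation. All names are invented.

def transfer(src, dst, volume_ul):
    """Describe a single liquid-transfer step without performing it."""
    return {"from": src, "to": dst, "volume_ul": volume_ul}

# A serial transfer across four wells, 10 µL each step.
wells = ["A1", "A2", "A3", "A4"]
workflow = [transfer(a, b, 10) for a, b in zip(wells, wells[1:])]

for step in workflow:
    print(step)
```

A description like this can be handed to a liquid-handling robot, replayed exactly, and version-controlled — none of which is true of hand pipetting.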
Uber has built a great service. Why do they feel the need to use dirty tricks to succeed?
Tim O’Reilly has said that Uber is an example of designing for how the world ought to be. Their app works well, their cars are clean, their drivers are pleasant, and they usually arrive quickly. But more goes into the experience of a company than just an app. Corporate behavior is also part of the company’s design; perhaps not as noticeable as their Android or iPhone app, but a very real part. That’s where Uber falls down. They have increasingly been a bad actor, on many counts:
- Coercing their black car (Uber) drivers into driving for the low-cost UberX service, which is much less profitable.
- Being disingenuous about the economics of driving for them. Justin Singer does an excellent job of deconstructing their claims. $90,000/year for a 40-hour work week? Think $40K. For a 70-hour work week.
- Badmouthing a competitor (Lyft) that is raising capital. As Fred Wilson says, this practice may be common, but it’s unethical and unproductive.
- Predatory (“surge”) pricing during peak hours, as much as seven times normal prices.
- Playing fast and loose with drivers’ background checks.
- And now one of their senior VPs has suggested researching and exposing the private lives of reporters who criticize them. He’s apologized, and said he never meant anything of the sort. Right. It’s not what you apologize for that counts; it’s not doing stuff you need to apologize for in the first place.
If you really want to understand the effect data is having, you need the models.
Writing my post about AI and summoning the demon led me to re-read a number of articles on Cathy O’Neil’s excellent mathbabe blog. I highlighted a point Cathy has made consistently: if you’re not careful, modelling has a nasty way of enshrining prejudice with a veneer of “science” and “math.”
Cathy has consistently made another point that’s a corollary of her argument about enshrining prejudice. At O’Reilly, we talk a lot about open data. But it’s not just the data that has to be open: it’s also the models. (There are too many must-read articles on Cathy’s blog to link to; you’ll have to find the rest on your own.)
You can have all the crime data you want, all the real estate data you want, all the student performance data you want, all the medical data you want, but if you don’t know what models are being used to generate results, you don’t have much. Read more…
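A toy Python example (all numbers invented) makes the point concrete: two undisclosed models, scoring the same open data, can reach opposite conclusions.

```python
# Toy illustration: identical open data, two hidden models,
# opposite conclusions. All figures are invented for the example.

# (average test score, year-over-year improvement) for two schools
data = {"North": (82, 1), "South": (70, 9)}

def model_a(d):
    # Model A weights absolute scores heavily.
    return {k: 0.9 * s + 0.1 * imp for k, (s, imp) in d.items()}

def model_b(d):
    # Model B weights improvement heavily.
    return {k: 0.2 * s + 0.8 * imp for k, (s, imp) in d.items()}

scores_a, scores_b = model_a(data), model_b(data)
top_a = max(scores_a, key=scores_a.get)  # "North"
top_b = max(scores_b, key=scores_b.get)  # "South"
```

The data is “open” in both cases; without the weights, a published ranking tells you as much about the model as it does about the schools.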
We need to understand that our own intelligence is the competition for our artificial, not-quite intelligences.
A few days ago, Elon Musk likened artificial intelligence (AI) to “summoning the demon.” As I’m sure you know, there are many stories in which someone summons a demon. As Musk said, they rarely turn out well.
There’s no question that Musk is an astute student of technology. But his reaction is misplaced. There are certainly reasons for concern, but they’re not Musk’s.
The problem with AI right now is that its achievements are greatly over-hyped. That’s not to say those achievements aren’t real, but they don’t mean what people think they mean. Researchers in deep learning are happy if they can recognize human faces with 80% accuracy. (I’m skeptical about claims that deep learning systems can reach 97.5% accuracy; I suspect that the problem has been constrained in some way that makes it much easier. For example, asking “is there a face in this picture?” or “where is the face in this picture?” is very different from asking “what is in this picture?”) That’s a hard problem, a really hard problem. But humans recognize faces with nearly 100% accuracy. For a deep learning system, that’s an almost inconceivable goal. And 100% accuracy is orders of magnitude harder than 80% accuracy, or even 97.5%. Read more…
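The “orders of magnitude” claim is easiest to see in terms of error rates rather than accuracy. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope: moving from one accuracy level to another
# is really about shrinking the error rate, and the factors multiply.

def error_shrink_factor(acc_from, acc_to):
    """How many times the error rate must shrink to move between
    two accuracy levels."""
    return (1 - acc_from) / (1 - acc_to)

# 80% -> 97.5% accuracy: errors must shrink roughly 8x
# 97.5% -> 99.9% ("nearly 100%"): errors must shrink roughly 25x more
eighty_to_975 = error_shrink_factor(0.80, 0.975)
nines = error_shrink_factor(0.975, 0.999)
```

Each step toward 100% demands a multiplicative reduction in errors, which is why the last few percentage points are so much harder than the first eighty.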
We’re at the start of a revolution in biology, and it’s time for a biological commons.
A few months ago, I singled out an article in BioCoder about the appearance of open source biology. In his white paper for the Bio-Commons, Rüdiger Trojok writes about a significantly more ambitious vision for open biology: a bio-commons that holds biological intellectual property in trust for the good of all. He also articulates the tragedy of the anticommons, the nightmarish opposite of a bio-commons in which progress is difficult or impossible because “ambiguous and competing intellectual property claims…deter sharing and weaken investment incentives.” Each individual piece of intellectual property is carefully groomed and preserved, but it’s impossible to combine the elements; it’s like a jigsaw puzzle, in which every piece is locked in a separate safe.
We’ve certainly seen the anticommons in computing. Patent trolls are a significant disincentive to innovation; regardless of how weak the patent claim may be, most start-ups just don’t have the money to defend themselves. Could biotechnology head in this direction, too? In the U.S., the Supreme Court has ruled that human genes cannot be patented. But that ruling doesn’t apply to genes from other organisms, and arguably doesn’t apply to modifications of human genes. (I don’t know the status of genetic patents in other countries.) The patentability of biological “inventions” has the potential to make it more difficult to do cutting-edge research in areas like synthetic biology and pharmaceuticals (Trojok points specifically to antibiotics, where research is particularly stagnant). Read more…
New issue: bioreactors and food production, modeling a worm’s brain on a computer and letting it drive a robot, and more.
The fifth issue of BioCoder is here! We’ve made it into our second year: this revolution is in full swing.
Rather than talk about how great this issue is (though it is great), I’d like to ask a couple of questions. Post your answers in the comments; we won’t necessarily reply, but we will read them and take them into account.
- We are always interested in new content, and we’ll take a look at almost anything you send to BioCoder@oreilly.com. In particular, we’d like to get more content from the many biohacker labs, incubators, etc. We know there’s a lot of amazing experimentation out there. But we don’t know what it is; we only see the proverbial tip of the iceberg. What’s the best way to find out what’s going on?
- While we’ve started BioCoder as a quarterly newsletter, that’s a format that already feels a bit stodgy. Would you be better served if BioCoder went web-native? Rather than publishing eight or 10 articles every three months, we’d publish three or four articles a month online. Would that be more useful? Or do you like things the way they are?
And yes, we do have a great issue, with articles about a low-cost MiniPCR, bioreactors and food production, and what happens when you model a worm’s brain on a computer and let it drive a robot. Plus, an interview with Kyle Taylor of the glowing plant project, the next installment in a series on lab safety, and much more. Read more…
Before you ask HR to find a developer skilled in a particular tool or language, think about who you really want in that seat.
I had a conversation recently with Martin Thompson (@mjpt777), a London-based developer who specializes in performance and low-latency systems. I learned about Martin through Kevlin Henney’s Tweets about his recent talk at Goto Aarhus.
We talked about a disturbing trend in software development: Resume Driven Development, or RDD. Resume Driven Development happens when your group needs to hire a developer. It’s very hard to tell a non-technical HR person that you need someone who can make good decisions about software architecture, someone who knows the difference between clean code and messy code, and someone who’s able to look at a code base and see what’s unnecessary and what can be simplified. We frequently can’t do that ourselves. So management says, “Oh, we just added Redis to the application, so we’ll need a Redis developer.” That’s great — it’s easy to throw out resumes that don’t say Redis; it’s easy to look for certifications; and sooner or later, you have a Redis developer at a desk. Maybe even a good one.
And what does your Redis developer do? He does Redis, of course. So, you’re bound to have an application with a lot of Redis in it. Whenever he sees a problem that can be solved with Redis, that’s what he’ll do. It’s what you hired him for. You’re happy; he’s happy. Except your application is now being optimized to fit the resumes of the people you hired, not the requirements of your users. Read more…
Network neutrality is about treating all kinds of traffic equally — throttling competition equates to extortion.
I’d like to make a few very brief points about net neutrality. For most readers of Radar, there’s probably nothing new here, but these points address confusions that I’ve seen.
- Network neutrality isn’t about the bandwidth that Internet service providers deliver to your home. ISPs can charge more for more bandwidth, same as always.
- Nor is network neutrality about the bandwidth that Internet service providers deliver to information providers. Again, ISPs can charge more for more bandwidth, same as always. You’d better believe that Google pays a lot more for Internet service than your local online store.
- Nor is network neutrality about ISPs dealing with congestion. Network providers have always dealt with congestion — in the worst case, by dropping traffic. Remember the “fast busy” signal on the phone? That’s the network dealing with congestion.
- Network neutrality is entirely about treating all kinds of traffic equally. Video is the same as voice, the same as Facebook, the same as Amazon. Your ISP cannot penalize video traffic (or some other kind of traffic) because they’d like to get into that business or because they’re already in that business. In other words: when you buy Internet connectivity, you can use it for whatever you want. Your provider can’t tell you what kind of business to be in.
The 13 plays in the U.S. CIO’s Digital Services Playbook are applicable for everyone.
Whenever I hear someone say that “government should be run like a business,” my first reaction is “do you know how badly most businesses are run?” Seriously. I do not want my government to run like a business — whether it’s like the local restaurants that pop up and die like wildflowers, or megacorporations that sell broken products, whether financial, automotive, or otherwise.
If you read some elements of the press, it’s easy to think that healthcare.gov is the first time that a website failed. And it’s easy to forget that a large non-government website was failing, in surprisingly similar ways, at roughly the same time. I’m talking about the Common App site, the site high school seniors use to apply to most colleges in the U.S. There were problems with pasting in essays, problems with accepting payments, problems with the app mysteriously hanging for hours, and more.
Synthetic biology surely can get weirder — but this is a great start.
If you’ve ever tried any of the various vegan cheese substitutes, you know they are (to put it kindly) awful. The missing ingredients in these products are the milk proteins, or caseins. And of course you can’t use real milk proteins in a vegan product.
But proteins are just organic compounds that are produced, in abundance, by any living cell. And synthetic biology is about engineering cell DNA to produce whatever proteins we want. That’s the central idea behind the Real Vegan Cheese project: can we design yeast to produce the caseins we need for cheese, without involving any animals? There’s no reason we can’t. Once we have the milk proteins, we can use traditional processes to make the cheese. No cows (or sheep, or goats) involved, just genetically modified yeast. And you never eat the yeast; they stay behind at the brewery.