Drones might never find meaningful retail delivery work, but they might find practical employment in warehouses.
After writing my short post about the use of drones to deliver packages, it occurred to me that there’s one more realistic use case. Unfortunately (or not), this is a use case that you’ll never see if you’re not an Amazon employee. But I think it’s very realistic. And obviously, I just can’t get drones out of my head.
As I argued, I don’t think you’ll see drones for retail delivery, except perhaps as a high-cost, very conspicuous consumption frill. (What could be more conspicuous?) Drone pilots are expensive, and I don’t think we’ll see regulations that allow autonomous drones flying in public airspace any time soon. Drones also aren’t terribly fast, and even if you assume that the warehouses are relatively close to the customers, the number of trips a drone can make per hour is limited. There’s also liability, weather conditions, neighbors shooting the drones down, and plenty of other drawbacks.
These problems all disappear if you limit your use of drones to the warehouse itself. Don’t send the drone to the customer: that’s a significant risk for an expensive piece of equipment. Instead, use the drones within the warehouse to deliver items to the packers. Weather isn’t an issue. Regulation isn’t an issue; the FAA doesn’t care what you do inside your building. Autonomous flight isn’t just a realistic option, it’s preferable: one massive computing system can coordinate and optimize the flight paths of all the drones. Amazon probably has some of that system built already for its Kiva robots, and Amazon is rather good at building large computing architectures. Distance isn’t an issue. Warehouses are big, but they’re not that big, and something (or someone) has to bring the product to the packing station, whether it’s a human runner or a Kiva robot. Read more…
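The coordination system described above can be sketched in a few lines. This is a hypothetical illustration, not anything Amazon has described: a central coordinator that assigns each pick request to the nearest idle drone. All names and coordinates are made up.

```python
import math

def assign_drones(drones, requests):
    """Greedy central coordinator: assign each pick request to the
    nearest idle drone. Positions are (x, y) grid coordinates inside
    the warehouse."""
    assignments = {}
    idle = dict(drones)  # drone id -> current position
    for req_id, req_pos in requests.items():
        if not idle:
            break  # more requests than idle drones; the rest wait
        # Pick the idle drone closest to the requested shelf location.
        best = min(idle, key=lambda d: math.dist(idle[d], req_pos))
        assignments[req_id] = best
        del idle[best]
    return assignments

drones = {"d1": (0, 0), "d2": (50, 50)}
requests = {"shelf_a": (5, 5), "shelf_b": (45, 40)}
print(assign_drones(drones, requests))  # {'shelf_a': 'd1', 'shelf_b': 'd2'}
```

A real coordinator would also plan collision-free flight paths and battery swaps, but the point stands: indoors, with one system seeing every drone, this is an optimization problem, not a regulatory one.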
Biological products have always seemed far off. BioFabricate showed that they're not.
The products discussed at BioFabricate aren’t what I thought they’d be. I’ve been asked plenty of times (and I’ve asked plenty of times), “what’s the killer product for synthetic biology?” BioFabricate convinced me that that’s the wrong question. We may never have some kind of biological iPod. That isn’t the right way to think.
What I saw, instead, was real products that you might never notice. Bricks made from sand that are held together by microbes designed to excrete the binder. Bricks and packing material made from fungus (mycelium). Plastic excreted by bacteria that consume waste methane from sewage plants. You wouldn’t know, or care, whether your plastic Lego blocks are made from petroleum or from bacteria, but there’s a huge ecological difference. You wouldn’t know, or care, what goes into the bricks used in the new school, but the construction boom in Dubai has made a desert city one of the world’s largest importers of sand. Wind-blown desert sand isn’t useful for industrial brickmaking, but the microbes have no problem making bricks from it. And you may not care whether packing materials are made of styrofoam or fungus, but I despise the bag of packing peanuts sitting in my basement waiting to be recycled. You can throw the fungal packing material into the garden, and it will decompose into fertilizer in a couple of days. Read more…
For the time being, we won't see drone delivery outside of a few very specialized use cases.
I read with some interest an article on the Robotenomics blog about the feasibility of drone delivery. It’s an interesting idea, and the article makes a better case than anything I’ve seen before. But I’m still skeptical.
The article quotes direct operating costs (essentially fuel) that are roughly $0.10 for a 2-kilogram payload, delivered 10 kilometers. (For US readers, that’s 4.4 pounds and about six miles.) That’s reasonable enough.
The problem comes when he compares that figure to Amazon’s current shipping costs of $2 to $8. That range sounds like what Amazon pays UPS or FedEx, and it isn’t limited to delivering four pounds within a six-mile radius. Nor is it just the fuel cost: it’s the fully loaded cost, including maintenance, administrative overhead, executive bonuses, and (oh, yes) the driver’s salary. Read more…
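The apples-to-oranges problem above is easy to see with some back-of-the-envelope arithmetic. Everything here except the $0.10 drone fuel cost (quoted from the Robotenomics post) is an assumption for illustration.

```python
def truck_cost_per_package(driver_wage_per_hr, overhead_per_hr, packages_per_hr):
    """Fully loaded truck cost per package: labor plus everything else
    (fuel, maintenance, administration) amortized over deliveries."""
    return (driver_wage_per_hr + overhead_per_hr) / packages_per_hr

# Assume a driver at $25/hr, $35/hr of vehicle and overhead costs,
# and 20 stops per hour on a dense urban route.
truck = truck_cost_per_package(25, 35, 20)   # $3.00 per package
drone_fuel = 0.10                            # fuel only, per the article

print(f"truck (fully loaded): ${truck:.2f}")
print(f"drone (fuel only):    ${drone_fuel:.2f}")
```

The drone number looks spectacular only because it omits everything the truck number includes: add the drone’s pilot, maintenance, and overhead on the same basis, and the gap narrows fast.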
Antha is a high-level, open source language for specifying biological workflows.
Editor’s note: This is part of our investigation into synthetic biology and bioengineering. For more, download the new BioCoder Fall 2014 issue here.

In a couple of recent posts, I’ve written about the need for a high-level programming language for biology. Now we have one. Antha is a high-level, open source language for specifying biological workflows (i.e., describing experiments). It’s available on GitHub.
A programming language for scientific experiments is important for many reasons. Most simply, a scientist in training spends many, many hours of time learning how to do lab work. That sounds impressive, but it really means moving very small amounts of liquid from one place to another. Thousands of times a day, thousands of days in preparation for a career. It’s boring, dull, and necessary work, and something that can be automated. Biologists should spend most of their time thinking about biology, designing experiments, and analyzing results — not handling liquids. Read more…
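To make the idea concrete, here is a hypothetical sketch of what a liquid-handling workflow looks like as code. This is not Antha syntax; it’s just an illustration of the principle: the scientist declares the steps once, and a robot executes them, thousands of times if need be. The serial-dilution protocol, well names, and volumes are all invented for the example.

```python
def serial_dilution(stock, diluent, factor, steps, volume_ul):
    """Return a list of transfer instructions for a serial dilution:
    each step dilutes the previous well by `factor`."""
    plan = []
    source = stock
    for i in range(steps):
        well = f"well_{i + 1}"
        # Add diluent first, then the sample being diluted, then mix.
        plan.append(("transfer", diluent, well, volume_ul * (factor - 1)))
        plan.append(("transfer", source, well, volume_ul))
        plan.append(("mix", well))
        source = well  # the next dilution starts from this well
    return plan

# A 3-step, 10-fold dilution series: three instructions per step.
for step in serial_dilution("stock", "buffer", factor=10, steps=3, volume_ul=10):
    print(step)
```

Once the protocol is data rather than hand motions, it can be checked, versioned, shared, and handed to a liquid-handling robot — which is exactly the point of a workflow language.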
Uber has built a great service. Why do they feel the need to use dirty tricks to succeed?
Tim O’Reilly has said that Uber is an example of designing for how the world ought to be. Their app works well, their cars are clean, their drivers are pleasant, and they usually arrive quickly. But more goes into the experience of a company than just an app. Corporate behavior is also part of the company’s design; perhaps not as noticeable as their Android or iPhone app, but a very real part. That’s where Uber falls down. They have increasingly been a bad actor, on many counts:
- Coercing their black car (Uber) drivers into driving for the low-cost UberX service, which is much less profitable for the drivers.
- Being disingenuous about the economics of driving for them. Justin Singer does an excellent job of deconstructing their claims. $90,000/year for a 40-hour work week? Think $40K. For a 70-hour work week.
- Badmouthing a competitor (Lyft) that is raising capital. As Fred Wilson says, this practice may be common, but it’s unethical and unproductive.
- Predatory (“surge”) pricing during peak hours, as much as seven times normal prices.
- Playing fast and loose with drivers’ background checks.
- And now one of their senior VPs has suggested researching and exposing the private lives of reporters who criticize them. He’s apologized, and said he never meant anything of the sort. Right. It’s not what you apologize for that counts; it’s not doing stuff you need to apologize for in the first place.
If you really want to understand the effect data is having, you need the models.
Writing my post about AI and summoning the demon led me to re-read a number of articles on Cathy O’Neil’s excellent mathbabe blog. I highlighted a point Cathy has made consistently: if you’re not careful, modeling has a nasty way of enshrining prejudice with a veneer of “science” and “math.”
Cathy has consistently made another point that’s a corollary of her argument about enshrining prejudice. At O’Reilly, we talk a lot about open data. But it’s not just the data that has to be open: it’s also the models. (There are too many must-read articles on Cathy’s blog to link to; you’ll have to find the rest on your own.)
You can have all the crime data you want, all the real estate data you want, all the student performance data you want, all the medical data you want, but if you don’t know what models are being used to generate results, you don’t have much. Read more…
We need to understand how our own intelligence compares with our artificial, not-quite intelligences.
A few days ago, Elon Musk likened artificial intelligence (AI) to “summoning the demon.” As I’m sure you know, there are many stories in which someone summons a demon. As Musk said, they rarely turn out well.
There’s no question that Musk is an astute student of technology. But his reaction is misplaced. There are certainly reasons for concern, but they’re not Musk’s.
The problem with AI right now is that its achievements are greatly over-hyped. That’s not to say those achievements aren’t real, but they don’t mean what people think they mean. Researchers in deep learning are happy if they can recognize human faces with 80% accuracy. (I’m skeptical about claims that deep learning systems can reach 97.5% accuracy; I suspect that the problem has been constrained in some way that makes it much easier. For example, asking “is there a face in this picture?” or “where is the face in this picture?” is very different from asking “what is in this picture?”) That’s a hard problem, a really hard problem. But humans recognize faces with nearly 100% accuracy. For a deep learning system, that’s an almost inconceivable goal. And 100% accuracy is orders of magnitude harder than 80% accuracy, or even 97.5%. Read more…
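The “orders of magnitude” claim is easier to see if you look at error rates rather than accuracy. Going from 80% to 99.99% accuracy isn’t a 20-point improvement; it’s a 2,000-fold reduction in errors. A quick calculation (the accuracy figures below are from the discussion above, plus one near-human value for comparison):

```python
def errors_per_million(accuracy):
    """Expected number of mistakes per million attempts."""
    return (1 - accuracy) * 1_000_000

# 80% and 97.5% are the figures discussed above; 99.99% stands in
# for "nearly 100%" human-level performance.
for acc in (0.80, 0.975, 0.9999):
    print(f"{acc:.2%} accurate -> {errors_per_million(acc):,.0f} errors per million faces")
```

Each step looks small on an accuracy scale, but every remaining error is one the system couldn’t already handle, so the problem gets harder, not easier, as the number shrinks.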
We're at the start of a revolution in biology, and it's time for a biological commons.
A few months ago, I singled out an article in BioCoder about the appearance of open source biology. In his white paper for the Bio-Commons, Rüdiger Trojok writes about a significantly more ambitious vision for open biology: a bio-commons that holds biological intellectual property in trust for the good of all. He also articulates the tragedy of the anticommons, the nightmarish opposite of a bio-commons in which progress is difficult or impossible because “ambiguous and competing intellectual property claims…deter sharing and weaken investment incentives.” Each individual piece of intellectual property is carefully groomed and preserved, but it’s impossible to combine the elements; it’s like a jigsaw puzzle, in which every piece is locked in a separate safe.
We’ve certainly seen the anticommons in computing. Patent trolls are a significant disincentive to innovation; regardless of how weak the patent claim may be, most start-ups just don’t have the money to defend themselves. Could biotechnology head in this direction, too? In the U.S., the Supreme Court has ruled that human genes cannot be patented. But that ruling doesn’t apply to genes from other organisms, and arguably doesn’t apply to modifications of human genes. (I don’t know the status of genetic patents in other countries.) The patentability of biological “inventions” has the potential to make it more difficult to do cutting-edge research in areas like synthetic biology and pharmaceuticals (Trojok points specifically to antibiotics, where research is particularly stagnant). Read more…
New issue: bioreactors and food production, modeling a worm's brain on a computer and letting it drive a robot, and more.
The fifth issue of BioCoder is here! We’ve made it into our second year: this revolution is in full swing.
Rather than talk about how great this issue is (though it is great), I’d like to ask a couple of questions. Post your answers in the comments; we won’t necessarily reply, but we will read them and take them into account.
- We are always interested in new content, and we’ll take a look at almost anything you send to BioCoder@oreilly.com. In particular, we’d like to get more content from the many biohacker labs, incubators, etc. We know there’s a lot of amazing experimentation out there. But we don’t know what it is; we only see the proverbial tip of the iceberg. What’s the best way to find out what’s going on?
- While we’ve started BioCoder as a quarterly newsletter, that’s a format that already feels a bit stodgy. Would you be better served if BioCoder went web-native? Rather than publishing eight or ten articles every three months, we’d publish three or four articles a month online. Would that be more useful? Or do you like things the way they are?
And yes, we do have a great issue, with articles about a low-cost MiniPCR, bioreactors and food production, and what happens when you model a worm’s brain on a computer and let it drive a robot. Plus, an interview with Kyle Taylor of the glowing plant project, the next installment in a series on lab safety, and much more. Read more…
Before you ask HR to find a developer skilled in a particular tool or language, think about who you really want in that seat.
I had a conversation recently with Martin Thompson (@mjpt777), a London-based developer who specializes in performance and low-latency systems. I learned about Martin through Kevlin Henney’s tweets about his recent talk at Goto Aarhus.
We talked about a disturbing trend in software development: Resume Driven Development, or RDD. Resume Driven Development happens when your group needs to hire a developer. It’s very hard to tell a non-technical HR person that you need someone who can make good decisions about software architecture, someone who knows the difference between clean code and messy code, and someone who’s able to look at a code base and see what’s unnecessary and what can be simplified. We frequently can’t do that ourselves. So management says, “oh, we just added Redis to the application, so we’ll need a Redis developer.” That’s great — it’s easy to throw out resumes that don’t say Redis; it’s easy to look for certifications; and sooner or later, you have a Redis developer at a desk. Maybe even a good one.
And what does your Redis developer do? He does Redis, of course. So, you’re bound to have an application with a lot of Redis in it. Whenever he sees a problem that can be solved with Redis, that’s what he’ll do. It’s what you hired him for. You’re happy; he’s happy. Except your application is now being optimized to fit the resumes of the people you hired, not the requirements of your users. Read more…