- Rise of the Patent Troll: Everything is a Remix (YouTube) — primer on patent trolls, in language anyone can follow. Part of the fixpatents.org campaign. (via BoingBoing)
- Petabytes of Field Data (GigaOm) — Farm Intelligence using sensors and computer vision to generate data for better farm decision making.
- Bullish on Blockchain (Fred Wilson) — our 2014 fund will be built during the blockchain cycle. “The blockchain” is bitcoin’s distributed consensus system, interesting because it’s the return of p2p from the Chasm of Ridicule or whatever the Gartner Trite Cycle calls the time between first investment bubble and second investment bubble under another name.
- Hemingway — online writing tool to help you make your writing clear and direct. (via Nina Simon)
Developers who understand the whole stack are going to build better applications.
Since Facebook’s Carlos Bueno wrote the canonical article about the full stack, there has been no shortage of posts trying to define it. For a time, Facebook allegedly only hired “full-stack developers.” That probably wasn’t quite true, even if they thought it was. And some posts really push “full-stack” developer into Unicorn territory: Laurence Gellert writes that it “goes beyond being a senior engineer,” and details everything he thinks a full-stack developer should be familiar with, most of which doesn’t involve coding.
Lists like Gellert’s are both too long and too short. While I agree that a full-stack developer and a senior engineer aren’t necessarily the same person, I resist the idea that full-stack developers have near-magical skills in many different areas. (So does Gellert; he asks for “familiarity in each layer, if not mastery.”) At the same time, I’d add several items to the list that he only hints at: source control, data infrastructure, distributed computing, etc.
With that in mind, let’s try to define the stack, starting with the now-ancient LAMP stack: Linux, Apache, MySQL, Perl. That list is only partial and certainly dated. Linux and Apache are still with us, though other servers, like nginx, are gaining importance; MySQL is still around, though we now have dozens of post-relational databases (most notably MongoDB and Cassandra), and I wouldn’t be surprised to see MariaDB displace MySQL in the next few years. Nobody writes CGI programs in Perl any more; many languages come into play, from Haskell to Java. But even though it’s dated, the LAMP stack has the right idea: an operating system, a server, a database, middleware. Read more…
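That layering can be sketched in a few lines of Python. This is an illustrative toy, not anything from the article: the names (`app`, `greetings`) are mine, an in-memory SQLite database stands in for MySQL, and a bare WSGI callable stands in for the middleware tier that Apache or nginx would host.

```python
import sqlite3
from wsgiref.util import setup_testing_defaults

# Database layer: an in-memory SQLite stand-in for MySQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE greetings (msg TEXT)")
db.execute("INSERT INTO greetings VALUES ('hello, stack')")

# Application/middleware layer: a WSGI callable any server can host.
def app(environ, start_response):
    msg = db.execute("SELECT msg FROM greetings").fetchone()[0]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [msg.encode()]

# Server layer: in production Apache or nginx would drive the app via
# WSGI; here we invoke it directly, the way a server would.
environ = {}
setup_testing_defaults(environ)
status_seen = []
body = app(environ, lambda status, headers: status_seen.append(status))
print(status_seen[0], body[0].decode())  # prints: 200 OK hello, stack
```

The point is the separation of concerns: swap nginx for Apache, MariaDB for MySQL, or Haskell for Perl, and the shape of the stack stays the same.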
Rise of the Patent Troll, Farm Data, The Block Chain, and Better Writing
Natural bioterrorism might be the bigger threat, and the value of citizens educated in biosciences can't be overstated.
You don’t get very far discussing synthetic biology and biohacking before someone asks about bioterrorism. So, let’s meet the monster head-on.
I won’t downplay the possibility of a bioterror attack. It’s already happened. The anthrax-contaminated letters sent to political figures just after 9/11 were certainly an instance of bioterrorism. Fortunately (for everyone but the victims), they resulted in only five deaths, not thousands. Since then, there have been a few “copycat” crimes, though using a harmless white powder rather than anthrax spores.
While I see bioterror in the future as a certainty, I don’t believe it will come from a hackerspace. The 2001 attacks are instructive: the spores were traced to a U.S. biodefense laboratory. Whether or not you believe Bruce Ivins, the lead suspect, was guilty, it’s clear that the anthrax spores were developed by professionals and could not have been produced outside of a professional setting. That’s what I expect for future attacks: the biological materials, whether spores, viruses, or bacteria, will come from a research laboratory, produced with government funding. Whether they’re stolen from a U.S. lab or produced overseas: take your pick. They won’t come from the hackerspace down the street. Read more…
The bid for widespread home use may drive technical improvements.
For some people, it’s too early to plan mass consumerization of the Internet of Things. Developers are contentedly tinkering with Arduinos and clip cables, demonstrating cool one-off applications. We know that home automation can save energy, keep the elderly and disabled independent, and make life better for a lot of people. But no one seems sure how to realize this goal, outside of security systems and a few high-end items for luxury markets (like the Nest devices, now being integrated into Google’s grand plan).
But what if the willful creation of a mass consumer market could make the technology even better? Perhaps the Internet of Things needs a consumer focus to achieve its potential. This view was illuminated for me in a couple of recent talks with Mike Harris, CEO of the home automation software platform Zonoff.
Internet of Listeners, Mobile Deep Belief, Crowdsourced Spectrum Data, and Quantum Minecraft
- Jasper Project — an open source platform for developing always-on, voice-controlled applications. Shouting is the new swiping—I eagerly await Gartner touting the Internet-of-things-that-misunderstand-you.
- DeepBeliefSDK — deep neural network library for iOS. (via Pete Warden)
- Microsoft Spectrum Observatory — crowdsourcing spectrum utilisation information. Just open sourced their code.
- qcraft — beginner’s guide to quantum physics in Minecraft. (via Nelson Minar)
Ignore the hype. Learn to be a data skeptic.
Yawn. Yet another article trashing “big data,” this time an op-ed in the Times. This one is better than most, and ends with the truism that data isn’t a silver bullet. It certainly isn’t.
I’ll spare you all the links (most of which are much less insightful than the Times piece), but the backlash against “big data” is clearly in full swing. I wrote about this more than a year ago, in my piece on data skepticism: data is heading into the trough of a hype curve, driven by overly aggressive marketing, promises that can’t be kept, and spurious claims that, if you have enough data, correlation is as good as causation. It isn’t; it never was; it never will be. The paradox of data is that the more data you have, the more spurious correlations will show up. Good data scientists understand that. Poor ones don’t.
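The spurious-correlation point is easy to demonstrate: generate pure noise, then go looking for relationships. In this sketch (illustrative; the variable counts, seed, and threshold are my own choices, not from the article), a few dozen random columns with only 20 observations each produce "strong" correlations that mean nothing.

```python
import itertools
import random
import statistics

random.seed(42)
n_samples, n_vars = 20, 40  # few observations, many variables
# Every column is independent Gaussian noise: any correlation is spurious.
cols = [[random.gauss(0, 1) for _ in range(n_samples)] for _ in range(n_vars)]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

pairs = list(itertools.combinations(range(n_vars), 2))
strong = [(i, j) for i, j in pairs if abs(pearson(cols[i], cols[j])) > 0.5]
print(f"{len(strong)} of {len(pairs)} pure-noise pairs have |r| > 0.5")
```

Add more columns and the number of "discoveries" grows, even though there is nothing to discover — which is exactly why correlation-hunting at scale needs skepticism, not celebration.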
It’s very easy to say that “big data is dead” while you’re using Google Maps to navigate downtown Boston. It’s easy to say that “big data is dead” while Google Now or Siri is telling you that you need to leave 20 minutes early for an appointment because of traffic. And it’s easy to say that “big data is dead” while you’re using Google, or Bing, or DuckDuckGo to find material to help you write an article claiming that big data is dead. Read more…