Big data’s impact on global agriculture

The O'Reilly Radar Podcast: Stewart Collis talks about making precision farming accessible and affordable for all farmers.


Stewart Collis, CTO and co-founder of AWhere, recently tweeted a link to a video by the University of Minnesota’s Institute on the Environment, Big Question: Feast or Famine? The video highlights the increasing complexity of feeding our rapidly growing population, and Collis noted its relation to his work at AWhere. I recently caught up with Collis to talk about our current global agriculture situation, the impact of big data on agriculture, and the work his company is doing to help address global agriculture problems.

The challenge, explained Collis, is two-fold: our growing population, which is expected to increase by another 2.4 billion people by 2050, and the increasing weather variability affecting growing seasons and farmers’ ability to produce at the scale needed to feed that population. “In the face of weather variability, climate change, and increasing temperatures … farmers no longer know when it’s going to rain,” he said, and then noted: “There’s only 34 growing seasons between now and [2050], so this is a problem we need to solve now.”


What should replace the project paradigm?

Creating alignment at scale within enterprises.


The problems caused by using the project paradigm to deliver software systems are severe. The effect of projects on downstream teams such as release and operations was, for my money, most succinctly articulated in Evan Bottcher’s article “PROJECTS ARE EVIL AND MUST BE DESTROYED”. The end result (complex, heterogeneous production environments that are hard to change or even keep running) is due to the forces Charles Betz identifies in Architecture and Patterns for IT Service Management, Resource Planning, and Governance: Making Shoes for the Cobbler’s Children:

Because it is the best-understood area of IT activity, the project phase is often optimized at the expense of the other process areas, and therefore at the expense of the entire value chain. The challenge of IT project management is that broader value-chain objectives are often deemed “not in scope” for a particular project, and projects are not held accountable for their contributions to overall system entropy.

Furthermore, bundling work up into projects combines low-value features with high-value ones, producing the “maximum viable product” that is the inevitable result of the large-batch death spiral. The spiral occurs when product owners try to stuff as many features as possible into the next release so they don’t have to wait for the following one to get those features delivered. As a result, the median cycle time for delivering features is often poorly correlated with their priority, a highly undesirable outcome.
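
To make the effect concrete, here is a minimal simulation sketch of my own (not from the article; the feature counts, priorities, and release cadences are invented for illustration). Features become ready at random times, but nothing ships until the next scheduled release, so a top-priority feature waits exactly as long as the lowest-value feature batched alongside it:

```python
import random
import statistics

random.seed(42)

# Hypothetical model: 100 features become ready at random times over a
# year, each with a priority score (higher means more valuable).
features = [(random.uniform(0, 365), random.uniform(0, 10)) for _ in range(100)]

def wait_times(release_interval_days):
    """Days each feature waits from 'ready' until the next release ships."""
    waits = []
    for ready, priority in features:
        # Note that priority never enters the calculation: releases ship
        # on the calendar, not by value.
        next_release = (ready // release_interval_days + 1) * release_interval_days
        waits.append(next_release - ready)
    return waits

for interval in (90, 30, 1):  # quarterly project releases vs. ever-smaller batches
    waits = wait_times(interval)
    print(f"release every {interval:2d} days: median wait {statistics.median(waits):4.1f} days")
```

In this toy model, a feature’s priority has no effect at all on how long it waits, which is precisely the decoupling described above; shrinking the batch size, rather than reordering the backlog, is what restores the link between priority and delivery time.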

Why do we stick with it? Because our budgeting processes are designed to operate on projects, and project managers and the PMO know how to deliver them.

Since these are clearly poor reasons, what should we do instead?


Turning Ph.D.s into industrial data scientists and data engineers

The O'Reilly Data Show Podcast: Angie Ma on building a finishing school for science and engineering doctorates.


Editor’s note: The ASI will offer a two-day intensive course, Practical Machine Learning, at Strata + Hadoop World in London in May.

Back when I was considering leaving academia, the popular exit route was financial engineering. Many science and engineering Ph.D.s ended up in big Wall Street banks; I chose to be the lead quant at a small hedge fund. Financial engineering was a natural choice for many of us: it was topically close to my academic interests, and working with traders meant access to resources and interesting problems.

Today, there are many more options for people with science and engineering doctorates. A few organizations take science and engineering Ph.D.s and, over the course of 8-12 weeks, prepare them to join the ranks of industrial data scientists and data engineers.

I recently sat down with Angie Ma, co-founder and president of ASI, a London startup that runs a carefully structured “finishing school” for science and engineering doctorates. We talked about how Angie and her co-founders (all ex-physicists) arrived at the concept of the ASI, the structure of their training programs, and the data and startup scene in the UK. [Full disclosure: I’m an advisor to the ASI.]


Full-stack tensions on the Web

How much do you need to know?

I expected that CSSDevConf would be primarily a show about front-end work, focused on work in clients and specifically in browsers. I kept running into conversations, though, about the challenges of moving between the front and back end, the client and the server side. Some were from developers suddenly told that they had to become “full-stack developers” covering the whole spectrum, while others were from front-end engineers suddenly finding a flood of back-end developers tinkering with the client side of their applications. “Full-stack” isn’t always a cheerful story.

In the early days of the Web, “full-stack” was normal. While there were certainly people who focused on running web servers or designing sites as beautiful as the technology would allow, there were lots of webmasters who knew how to design a site, write HTML, manage a server, and maybe write some CGI code for early applications.

Formal separation of concerns among HTML, CSS, and JavaScript made it easier to share responsibilities among specialists. As the dot-com boom proceeded, specialization accelerated, with dedicated designers, programmers, and sysadmins coming to the work. Perhaps there were too many titles.

Even as the bust set in, specialization remained the trend because Web projects — especially on the server side — had grown far more complicated. They weren’t just a server and a few scripts, but a complete stack, including templates, logic, and usually a database. Whether you preferred the LAMP stack, a Microsoft ASP stack, or perhaps Java servlets and JSP, the server side rapidly became its own complex arena. Intranet development in particular exploded as a way to build server-based applications that could (cheaply) connect data sources to users on multiple platforms. Writing web apps was faster and cheaper than writing desktop apps, with more tolerance for platform variation.

Public vs. private cloud: Price isn’t enough

The savings relative to the risk aren’t enough to justify a shift to the public cloud.


This post was originally published on Limn This. The lightly edited version that follows is republished with permission.

Last October, Simon Wardley and I stood on a rainy sidewalk at 28th St. in New York City arguing politely (he’s British) about the future of cloud adoption. He argued, rightly, that the cost advantages from scale would be overwhelming compared to home-brew private clouds. He went on to argue, less certainly in my view, that this would lead inevitably to their wholesale and deep adoption across the enterprise market.

I think Simon bases his argument on something like the rational economic man theory of the enterprise. Or, more specifically, the rational economic chief financial officer (CFO). If the costs of a service provider are destined to be lower than the costs of internally operated alternatives, and your CFO is rational (most tend to be), then the conclusion is foregone.

And, of course, costs are going down just as predicted. Look at this post by Avi Deitcher: Does Amazon’s Web Services Pricing Follow Moore’s Law? I think the question posed in the title has a fairly obvious answer: no. Services aren’t just silicon; they include all manner of linear cost terms, like labor, so price decreases will almost certainly be slower than Moore’s Law. But his analysis of the costs of a modestly sized AWS solution and its in-house competition is really useful.

Not only is AWS’ price dropping fast (56% in three years), but it’s significantly cheaper than building and operating a platform in house. Avi does the math for 600 instances over three years and finds that the cost for AWS would be $1.1 million (I don’t think this number considers out-year price decreases) versus $2.3 million for DIY. Your mileage might vary, but these numbers are a nice starting point for further discussion.
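
To make those figures easy to poke at, here is a quick back-of-the-envelope sketch. The dollar totals and the 56% three-year price drop are taken from the paragraphs above; the per-instance-month breakdown and the annualized rate are simply arithmetic on them, not Deitcher’s actual model:

```python
# Back-of-the-envelope check of the figures quoted above. The totals and
# the 56%-in-three-years drop come from the article; everything derived
# here is plain arithmetic on those numbers.
instances = 600
years = 3
instance_months = instances * years * 12

aws_total = 1.1e6   # Deitcher's three-year AWS estimate
diy_total = 2.3e6   # Deitcher's three-year in-house estimate

print(f"AWS: ${aws_total / instance_months:,.0f} per instance-month")
print(f"DIY: ${diy_total / instance_months:,.0f} per instance-month")
print(f"DIY costs {diy_total / aws_total:.1f}x as much")

# A 56% price drop over three years compounds to roughly a 24% decline
# per year, well short of the pace usually attributed to Moore's Law.
annual_drop = 1 - (1 - 0.56) ** (1 / 3)
print(f"Implied annual price decline: {annual_drop:.0%}")
```

At roughly $51 versus $106 per instance-month, DIY comes out at a bit over twice the AWS cost in this rough cut, consistent with the point above that service prices fall steadily but more slowly than silicon alone would suggest.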

These results raise an interesting question: if the numbers are so compelling, why did Walmart just reveal that they are building a ginormous private cloud? Why would anyone?


Welcome to the new VR

Like the Internet in 1994, virtual reality is about to cross the chasm from core technologists to the wider world.


When you’re an entrepreneur or investor struggling to bring a technology to market just a little before its time, being too early can feel exactly the same as being flat wrong. But with a bit more perspective, it’s clear that many of the hottest companies and products in today’s tech landscape are actually capitalizing on ideas that have been tried before — have, in some cases, been tackled repeatedly, and by very smart teams — but whose day has only now just arrived.

Virtual reality (VR) is one of those areas that has seduced many smart technologists in its long history, and its repeated commercial flameouts have left a lot of scar tissue in their wake. Despite its considerable ups and downs, though, the dream of VR has never died — far from it. The ultimate promise of the technology has been apparent for decades now, and many visionaries have devoted their careers to making it happen. But for almost 50 years, these dreams have outpaced the realities of price and performance.

To be fair, VR has come a long way in that time, though largely in specialized, under-the-radar domains that can support very high system costs and large installations; think military training and resource exploration. But the basic requirements for mass-market devices have never been met: low-power computing muscle; large, fast displays; and tiny, accurate sensors. Thanks to the smartphone supply chain, though, all of these components have evolved very rapidly in recent years, to the point where low-cost, high-quality, compact VR systems are now becoming available. Consumer VR really is coming on fast now, and things are getting very interesting.
