The end of integrated systems

The programmable world will increasingly rely on machine-learning techniques that can interact with human interfaces

I always travel with a pair of binoculars and, to the puzzlement of my fellow airline passengers, spend part of every flight gazing through them at whatever happens to be below us: Midwestern towns, Pennsylvania strip mines, rural railroads that stretch across the Nevada desert. Over the last 175 years or so, industrialized America has been molded into a collection of human patterns–not just organic New England villages in haphazard layouts, but also the Colorado farm settlements surveyed in strict grids. A close look at the rectangular shapes below reveals minor variations, though. Every jog that a street takes is a testament to some compromise between mankind and our environment that results in a deviation from mathematical perfection. The world, and our interaction with it, can’t conform strictly to top-down ideals.

We’re about to enter a new era in which computers will interact aggressively with our physical environment. Part of the challenge in building the programmable world will be finding ways to make it interact gracefully with the world as it exists today. In what you might call the virtual Internet, human information has been squeezed into the formulations of computer scientists: rigid relational databases, characters encoded in UTF-8, information addressed through the Domain Name System. To participate in it, we converse in the language and structures of computers. The physical Internet will need a far more flexible understanding of the vagaries of human behavior and the conventions of the built environment.

A few weeks ago, I found myself explaining the premise of the programmable world to a stranger. He caught on when the conversation turned to driverless cars and intelligent infrastructure. “Aha!” he exclaimed, “They have these smart traffic lights in Los Angeles now; they could talk to the cars!”

The idea that centralized traffic computers will someday communicate wirelessly with vehicles, measuring traffic conditions and directing cars, was widely discussed a few years ago, but it now strikes me as outdated. Computers can easily read traffic signals with machine vision the same way that humans do: by interpreting red, yellow, and green lights in the context of an intersection’s layout. For their part, traffic signals often use cameras to measure traffic volumes and adjust their timing accordingly. Once these kinds of flexible-sensing technologies are perfected, there will be no need for rigid, integrated systems to link cars to traffic lights. This pattern will repeat itself wherever software gathers data from sensors and controls physical devices.
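
To make that concrete, here is a minimal sketch of the machine-vision side in Python, assuming OpenCV and NumPy are available. It classifies the state of a signal head from a cropped image by counting pixels in red, yellow, and green hue ranges; the thresholds and the `classify_signal` helper are illustrative assumptions, not values from any production system.

```python
import cv2
import numpy as np

# Illustrative hue ranges (OpenCV hue runs 0-179); red wraps around zero.
HSV_RANGES = {
    "red":    [((0, 100, 100), (10, 255, 255)),
               ((170, 100, 100), (179, 255, 255))],
    "yellow": [((20, 100, 100), (35, 255, 255))],
    "green":  [((45, 100, 100), (90, 255, 255))],
}

def classify_signal(bgr_crop):
    """Return the color whose mask covers the most pixels in the crop."""
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    counts = {}
    for color, ranges in HSV_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
        counts[color] = int(mask.sum())
    return max(counts, key=counts.get)

# Usage: crop = frame[y1:y2, x1:x2]  # signal head located by some detector
# state = classify_signal(crop)
```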

The world is becoming programmable, with physical devices drawing intelligence from layers of software that can connect to widespread sensor networks and optimize entire systems. Cars drive themselves, thermostats silently adjust to our apparent preferences, wind turbines gird themselves for stormy gusts based on data collected from other turbines. These services are increasingly enabled not by rigid software infrastructure that connects giant systems end-to-end, but by ambient data-gathering, Web-like connections, and machine-learning techniques that help computers interact with the same user interfaces that humans rely on.

We’re headed for a rehash of the API-versus-Web-scraping debate, and scraping is poised for an even greater advantage in the physical world than it holds online, since physical conventions have been under refinement for millennia. It’s remarkably easy to differentiate a men’s room door from a women’s room door, or to tell whether a door is locked by looking at the deadbolt, and software that can understand these distinctions is already available.
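
As a sketch of how such a recognizer might be trained today, here is transfer learning with PyTorch and torchvision: reuse a pretrained backbone and fine-tune only its final layer on a small labeled set of, say, deadbolt photos. The `deadbolts` folder, its locked/unlocked labels, and the single training pass are all hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for a pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: deadbolts/locked/*.jpg, deadbolts/unlocked/*.jpg
dataset = datasets.ImageFolder("deadbolts", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# Reuse a pretrained backbone; train only a new final classification layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False  # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one illustrative pass over the data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Fine-tuning only the last layer keeps the data requirement small, which matters when the convention being learned is as narrow as the position of a deadbolt.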

There will, of course, still be a very big place for integrated systems in the physical Internet. Environments built from scratch; machines that operate in formalized, well-contained environments; and systems whose first priority is reliability and clarity will all make good use of traditional APIs. Trains will always communicate with dispatchers through formal integrated systems, just as Twitter’s API is likely the best foundation for a Twitter app regardless of how sophisticated your Twitter-scraping software might be.

But building a dispatching system for a railroad is nothing like developing a car that could communicate, machine-to-machine, with any traffic light it might encounter. Governments would need to invest billions in retrofitting their signaling systems, cars would need to accommodate the multitude of standards that would emerge from the process, and traffic-signal retrofits would need to be universal before adoption became practical. It’s better to build systems gently, layer by layer, so that they accept human convention as it already exists; the technology to do that is emerging now.

A couple of other observations about the superiority of loosely-coupled services in the physical world:

  • Google Maps, Waze, and Inrix all harvest traffic data ambiently from connected devices that constantly report location and speed. The alternative, proposed before networked navigation devices were common, was to gather this data through connected infrastructure. State departments of transportation, spurred by the SAFETEA-LU transportation bill of 2005 (surely the most tortured acronym in Washington), invested heavily in “smart transportation” infrastructure like roadway sensors and digital overhead signs, with the aim of, for instance, directing traffic around congestion. Compared to any of the Web services, these systems are slow to react, have poor coverage, are inflexible, and are incredibly expensive. The Web services make collateral use of infrastructure–gathering data quietly from devices that were principally installed for other purposes.
  • A few weeks ago I spent a storm-drenched six hours in one of O’Hare Airport’s lower-ring terminals. As we waited for the storm to clear and flights to resume, we watched the data screen over the gate–the Virgil in our inferno–which periodically showed a weather map and updated our predicted departure time. It was clear to anyone with a smartphone, though, that the information on that screen was out of date: it arrived at the end of a long chain of transmission and reformatting that ran from weather services and the airline’s management system through the airport’s display system. The flight updates arrived on the airline’s Web site 10 to 15 minutes before they appeared on the gate screen; the weather map, which showed our storm crossing the Mississippi River even as it raged over the airfield, was a consistent hour behind. By following updates on the Internet, customers were able to subvert an otherwise-rigid and hierarchical information flow, finding data where it was freshest and most accurate among loosely-coupled services online. Software that can automatically seek out the highest-quality information, flexibly switching between sources, will in turn provide the highest-quality intelligence; a minimal sketch of that kind of source selection follows this list.
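
Here is the promised sketch of freshness-based source selection, in Python. It assumes each source is wrapped in a fetch function that returns a reading stamped with the time it was produced; the `Reading` type and the adapter names in the usage note are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, Optional

@dataclass
class Reading:
    value: str
    produced_at: datetime  # when the source actually generated the data

def freshest(sources: dict[str, Callable[[], Optional[Reading]]]) -> Optional[Reading]:
    """Query every source and return the most recently produced reading."""
    best = None
    for fetch in sources.values():
        try:
            reading = fetch()
        except Exception:
            continue  # an unreachable source simply drops out of contention
        if reading and (best is None or reading.produced_at > best.produced_at):
            best = reading
    return best

# Usage (all adapters hypothetical):
# departure = freshest({
#     "gate_screen": read_gate_display,
#     "airline_web": poll_airline_site,
#     "faa_feed": poll_faa_status,
# })
```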

The API represents the world on the computers’ terms: human output squished into the constraints of programming conventions. Computers are on their way to participating in the world on human terms, accepting winding roads and handwritten signs. In other words, they’ll soon meet us halfway.
