"Industrial Internet" entries

The software-enabled cars of the near future (industrial Internet links)

Ford's OpenXC platform opens up real-time drivetrain data.

OpenXC (Ford Motor) — Ford has taken a significant step in turning its cars into platforms for innovative developers. OpenXC goes beyond the Ford Developer Program, which opens up audio and navigation features, and lets developers get their hands on drivetrain and auto-body data via the on-board diagnostic port. Once you’ve built the vehicle interface from open-source parts, you can use outside intelligence — code running on an Android device — to analyze vehicle data.

Of course, as outside software gets closer to the drivetrain, security becomes more important. OpenXC is read-only at the moment, and it promises “proper hardware isolation to ensure you can’t ‘brick’ your $20,000 investment in a car.”

Still, there are plenty of sophisticated data-and-machine tie-ups that developers could build with read-only access to the drivetrain: think of apps that help drivers get better fuel economy by adjusting their acceleration or, eventually, apps that optimize battery cycles in electric vehicles.
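OpenXC exposes vehicle signals as a stream of simple JSON messages, one name/value pair per reading. As a rough illustration of the fuel-economy idea, here is a minimal sketch that counts harsh accelerations in a speed trace. The signal names follow OpenXC's general convention, but the feed, the threshold, and the helper functions are illustrative assumptions, not part of any Ford API.

```python
import json

HARSH_SPEED_DELTA = 8.0  # km/h jump between samples; threshold is an assumption

def parse_messages(lines):
    """Parse newline-delimited OpenXC-style JSON messages into (name, value) pairs."""
    for line in lines:
        msg = json.loads(line)
        yield msg["name"], msg["value"]

def harsh_accelerations(lines, signal="vehicle_speed"):
    """Count samples where the chosen signal jumps by more than the threshold."""
    prev = None
    harsh = 0
    for name, value in parse_messages(lines):
        if name != signal:
            continue  # ignore other signals in the stream
        if prev is not None and value - prev > HARSH_SPEED_DELTA:
            harsh += 1
        prev = value
    return harsh

# A tiny synthetic feed standing in for the vehicle interface's output
feed = [
    '{"name": "vehicle_speed", "value": 30.0}',
    '{"name": "vehicle_speed", "value": 42.0}',  # +12 km/h: harsh
    '{"name": "engine_speed", "value": 2100}',
    '{"name": "vehicle_speed", "value": 45.0}',  # +3 km/h: smooth
]
print(harsh_accelerations(feed))  # 1
```

A real coaching app would run this kind of analysis continuously on the Android side, with the read-only vehicle interface guaranteeing the code can observe but never command the drivetrain.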

Drivers with Full Hands Get a Backup: The Car (New York Times) — John Markoff takes a look at automatic driver aids — tools like dynamic cruise control and collision-avoidance warnings that represent something of a middle ground between driverless cars and completely manual vehicles. Some features like these have been around for years, many of them using ultrasonic proximity sensors. But some of them are special, and illustrative of an important element of the industrial Internet: they rely on computer vision, like Google’s driverless car. Software is taking over kinds of machine intelligence that had previously resided in specialized hardware, and it’s creating new kinds of intelligence that hadn’t existed in cars at all. Read more…

Defining the industrial Internet

Some broad thoughts on characteristics that define the industrial Internet field.

We’ve been collecting threads on what the industrial Internet means since last fall. More case studies, company profiles and interviews will follow, but here’s how I’m thinking about the framework of the industrial Internet concept. This will undoubtedly continue to evolve as I hear from more people who work in the area and from our brilliant readers.

The crucial feature of the industrial Internet is that it installs intelligence above the level of individual machines. That enables remote control, optimization at the level of the entire system, and sophisticated machine-learning algorithms that can be extremely accurate because they draw on vast quantities of data generated by large systems of machines, along with the external context of every individual machine. It can also link systems together end-to-end — for instance, integrating railroad routing systems with retailer inventory systems in order to anticipate deliveries accurately.

In other words, it’ll look a lot like the Internet — bringing industry into a new era of what my colleague Roger Magoulas calls “promiscuous connectivity.” Read more…

Industrial Internet links: smart cities return, pilotless commercial aircraft, and more

Small-scale smart city projects; the industrial Internet as part of big data; a platform for smart buildings

Mining the urban data (The Economist) — The “smart city” hype cycle has moved beyond ambitious top-down projects and has started to produce useful results: real-time transit data in London, smart meters in Amsterdam. The next step, if Singapore has its way, may be real-time optimization of things like transit systems.

This is your ground pilot speaking (The Economist) — Testing is underway to bring drone-style remotely-piloted aircraft into broader civilian use. One challenge: building in enough on-board intelligence to operate the plane safely if radio links go down.

How GE’s over $100 billion investment in ‘industrial internet’ will add $15 trillion to world GDP (Economic Times) — A broad look at what the industrial Internet means in the context of big data, including interviews with Tim O’Reilly, DJ Patil and Kenn Cukier. (Full disclosure: GE and O’Reilly are collaborating on an industrial Internet series.)

Defining a Digital Network for Building-to-Cloud Efficiency (GreentechEnterprise) — “Eventually, the building will become an IT platform for managing energy a bit like we manage data today. But to get there, you don’t just have to make fans, chillers, lights, backup generators, smart load control circuits and the rest of a building’s hardware smart enough to act as IT assets. A platform — software that ties these disparate devices into the multiple, overlapping technical and economic models that help humans decide how to manage their building — is also required.” Read more…

The industrial Internet from a startup perspective

3Scan is building an Internet-connected 3D microscope as a service

I don’t remember when I first met Todd Huffman, but for the longest time I seemed to run into him in all kinds of odd places, mostly airport waiting areas, as our nomadic paths intersected randomly and with surprising frequency. We don’t run into each other in airports anymore because Todd has settled in San Francisco to build 3Scan, his startup at the nexus of professional maker, science as a service, and the industrial Internet. My colleague Jon Bruner has been talking to airlines, automobile manufacturers, and railroads to get their industrial Internet stories. I recently caught up with Todd to see what the industrial Internet looks like from the perspective of an innovative startup.

First off, I’m sure he wouldn’t use the words “industrial Internet” to describe what he and his team are doing, and it might be a little bit of a stretch to categorize 3Scan that way. But I think they are an exemplar of many of the meme’s core principles, and it’s interesting to think about them in that frame. They are building a device that produces massive amounts of data; a platform to support its complex analysis, distribution, and interoperation; and APIs to observe its operation and remotely control it.

Do a Google image search for “pathologist” and you’ll find lots and lots of pictures of people in white lab coats sitting in front of microscopes. This is a field whose primary user interface hasn’t changed in 200 years. This is equally true for a wide range of scientific research. 3Scan is setting out to change that by simplifying the researcher’s life while making 3D visualization and numerical analysis of the features of whole tissue samples readily available. Read more…

Three lessons for the industrial Internet

Simplicity, generativity and robustness shaped the Internet. Tim O'Reilly explains how they can also define the industrial Internet.

The map of the industrial Internet is still being drawn, which means the decisions we’re making about it now will determine the extent to which it shapes our world.

With that as a backdrop, Tim O’Reilly (@timoreilly) used his presentation at the recent Minds + Machines event to urge the industrial Internet’s architects to apply three key lessons from the Internet’s evolution. These three characteristics gave the Internet its ability to be open, to scale and to adapt — and if these same attributes are applied to the industrial Internet, O’Reilly believes this growing domain has the ability to “change who we are.”

Full video and slides from O’Reilly’s talk are embedded at the end of this piece. You’ll find a handful of insights from the presentation outlined below.

Lesson 1: Simplicity

“Standardize as little as possible, but as much as is needed so the system is able to evolve,” O’Reilly said.

To illustrate this point, O’Reilly drew a line between the simplicity and openness of TCP/IP, the creation and growth of the World Wide Web, and the emergence of Google.

“The Internet is fundamentally permission-less,” O’Reilly said. “Those of us who were early pioneers on the web, all we had to do was download the software and start playing. That’s how the web grew organically. So much more came from that.” Read more…

Interoperating the industrial Internet

If we're going to build useful applications on top of the industrial Internet, we must ensure the components interoperate.

One of the most interesting points made in GE’s “Unleashing the Industrial Internet” event was GE CEO Jeff Immelt’s statement that only 10% of the value of Internet-enabled products is in the connectivity layer; the remaining 90% is in the applications that are built on top of that layer. These applications enable decision support, the optimization of large scale systems (systems “above the level of a single device,” to use Tim O’Reilly’s phrase), and empower consumers.

Given the jet engine that was sitting on stage, it’s worth seeing how far these ideas can be pushed. Optimizing a jet engine is no small deal; Immelt said that the engine gained an extra 5-10% efficiency through software, and that adds up to real money. The next stage is optimizing the entire aircraft; that’s certainly something GE and its business partners are looking into. But we can push even harder: optimize the entire airport (don’t you hate it when you’re stuck on a jet waiting for one of those trucks to push you back from the gate?). Optimize the entire air traffic system across the worldwide network of airports. This is where we’ll find the real gains in productivity and efficiency.

So it’s worth asking about the preconditions for those kinds of gains. It’s not computational power; when you come right down to it, there aren’t that many airports, and there aren’t that many flights in the air at one time. There are something like 10,000 flights in the air at once, worldwide; in these days of big data and big distributed systems, that’s not a terribly large number. It’s not our ability to write software, either; there would certainly be some tough problems to solve, but nothing as difficult as, say, searching the entire web and returning results in under a second. Read more…

To eat or be eaten?

What's interesting isn't software as a thing in itself, but software as a component of some larger system.

One of Marc Andreessen’s many accomplishments was the seminal essay “Why Software Is Eating the World.” In it, the creator of Mosaic and Netscape argues for his investment thesis: everything is becoming software. Music and movies led the way, Skype makes the phone company obsolete, and even companies like FedEx and Walmart are all about software: their core competitive advantage isn’t driving trucks or hiring part-time employees, it’s the software they’ve developed for managing their logistics.

I’m not going to argue (much) with Marc, because he’s mostly right. But I’ve also been wondering why, when I look at the software world, I get bored fairly quickly. Yeah, yeah, another language that compiles to the JVM. Yeah, yeah, the JavaScript framework of the day. Yeah, yeah, another new component in the Hadoop ecosystem. Seen it. Been there. Done that. In the past 20 years, haven’t we gained more than the ability to use sophisticated JavaScript to display ads based on a real-time prediction of the user’s next purchase?

When I look at what excites me, I see a much bigger world than just software. I’ve already argued that biology is in the process of exploding, and the biological revolution could be even bigger than the computer revolution. I’m increasingly interested in hardware and gadgetry, which I used to ignore almost completely. And we’re following the “Internet of Things” (and in particular, the “Internet of Very Big Things”) very closely. I’m not saying that software is irrelevant or uninteresting. I firmly believe that software will be a component of every (well, almost every) important new technology. But what grabs me these days isn’t software as a thing in itself, but software as a component of some larger system. The software may be what makes it work, but it’s not about the software. Read more…

New data competition tackles airline delays

Airlines face a very costly data problem. A new competition looks to crack it.

Jeff Immelt and a GE jet engine

Jeff Immelt speaking next to a GEnx jet engine at Minds + Machines: Unleashing the Industrial Internet.

The scenario is familiar: a flight leaves the gate in New York on time, sits in a runway queue for 45 minutes, gets a fortuitous reroute over Illinois, and makes it to San Francisco ahead of schedule — only to wait on the terminal apron, engines running, for 15 minutes while a gate and crew materialize. The uncertainty irritates passengers and is costly for the airline, which burns extra fuel, pays extra wages, and has to rebook passengers and crew at the last minute.

A new competition run by Kaggle and sponsored by GE and Alaska Airlines offers $500,000 to data scientists — professional or enthusiast — who can accurately predict when a flight will land and arrive at the gate given a slew of data on weather, flight plans, air-traffic control and past flight performance.

Called GE Flight Quest, it’s tied to the industrial Internet — the idea that networked machines and high-level software above them will drive the next generation of efficiency improvements in complicated systems like airlines, power grids and freight carriers.

Predicting when a plane will arrive is trickier than it sounds because it’s subject to lots of independent, real-time influences. Knowing about the runway queues, reroutings and arrival restrictions in advance would make it possible to figure out exactly when a flight will arrive before it takes off, but the factors that delay most flights — weather, congestion and maintenance — shift constantly and interact in complex ways. Read more…
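One way to see why this is hard: even the simplest sensible baseline, scheduled arrival plus the route's mean historical delay, ignores every one of those shifting real-time influences. A sketch of that baseline, with synthetic delay data and hypothetical route names (times in minutes after midnight):

```python
from collections import defaultdict
from statistics import mean

def route_delay_model(history):
    """history: list of (route, delay_minutes) pairs. Returns mean delay per route."""
    by_route = defaultdict(list)
    for route, delay in history:
        by_route[route].append(delay)
    return {route: mean(delays) for route, delays in by_route.items()}

def predict_gate_arrival(scheduled_min, route, model, overall_mean=0.0):
    """Scheduled time plus the route's mean historical delay (fallback: overall mean)."""
    return scheduled_min + model.get(route, overall_mean)

# Synthetic history: past (route, gate-arrival delay in minutes) observations
history = [("JFK-SFO", 22), ("JFK-SFO", 18), ("ORD-SEA", 5)]
model = route_delay_model(history)
print(predict_gate_arrival(600, "JFK-SFO", model))  # scheduled 600 plus the 20-minute mean delay
```

A competitive Flight Quest entry would go far beyond this, folding in weather, congestion, and air-traffic-control data, which is exactly the point: the static average is the floor the real-time data has to beat.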

Software that keeps an eye on Grandma

Networked sensors and machine learning make it easy to see when things are out of the ordinary.

Much of health care — particularly for the elderly — is about detecting change, and, as the mobile health movement would have it, computers are very good at that. Given enough sensors, software can model an individual’s behavior patterns and then figure out when things are out of the ordinary — when gait slows, posture stoops or bedtime moves earlier.

Technology already exists that lets users set parameters for households they’re monitoring. Systems are available that send an alert if someone leaves the house in the middle of the night or sleeps past a preset time. Those systems involve context-specific hardware (e.g., a bed-pressure sensor) and conscientious modeling (you have to know what time your grandmother usually wakes up).

The next step would be a generic system: one that, following simple setup, would learn the habits of the people it monitors and then detect the sorts of problems that beset elderly people living alone — falls, disorientation, and so forth — as well as more subtle changes in behavior that could signal other health problems.
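A toy version of the "learn the habits, then flag deviations" idea can be sketched with nothing more than a mean and standard deviation over observed wake times; a real system would model many behaviors jointly. The sample data and the three-sigma threshold here are illustrative assumptions:

```python
from statistics import mean, stdev

def learn_habit(samples):
    """Fit a simple habit model (mean, standard deviation) from observed values."""
    return mean(samples), stdev(samples)

def is_anomalous(value, model, z_threshold=3.0):
    """Flag a reading that sits more than z_threshold standard deviations from habit."""
    mu, sigma = model
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_threshold

# A week of wake times, in minutes after midnight (around 6:50 a.m.)
wake_times = [412, 405, 420, 415, 408, 410, 418]
model = learn_habit(wake_times)
print(is_anomalous(411, model))  # False: an ordinary morning
print(is_anomalous(540, model))  # True: slept until 9:00, worth an alert
```

The appeal of this framing is that nothing is hard-coded: swap in time-in-bathroom or front-door activity and the same model flags whatever drifts out of the ordinary.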

A group of researchers from Austria and Turkey has developed just such a system, which they presented at the IEEE’s Industrial Electronics Society meeting in Montreal in October.

Activity as surmised in different rooms by the researchers’ machine-learning algorithms. Source: “Activity Recognition Using a Hierarchical Model.”

In their approach, the researchers train a machine-learning algorithm with several days of routine household activity using door and motion sensors distributed through the living space. The sensors aren’t associated with any particular room at the outset: their software algorithmically determines the relative positions of the sensors, then classifies the rooms that they’re in based on activity patterns over the course of the day. Read more…
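The paper's hierarchical model isn't reproduced here, but the underlying intuition, that sensors which fire within a short window of each other are probably near each other, can be sketched simply by counting co-activations. The event data, window, and threshold below are assumptions for illustration, not the researchers' algorithm:

```python
from collections import Counter
from itertools import combinations

def coactivation_counts(events, window=30):
    """events: (timestamp_seconds, sensor_id) pairs.
    Count how often each pair of distinct sensors fires within `window` seconds."""
    counts = Counter()
    for (t1, s1), (t2, s2) in combinations(events, 2):
        if s1 != s2 and abs(t1 - t2) <= window:
            counts[frozenset((s1, s2))] += 1
    return counts

def likely_same_area(counts, min_count=2):
    """Pairs that co-activate often are probably in the same room or adjacent."""
    return sorted(tuple(sorted(pair)) for pair, c in counts.items() if c >= min_count)

# Synthetic motion events: sensors A and B repeatedly fire together; C is isolated
events = [(0, "A"), (5, "B"), (300, "A"), (310, "B"),
          (600, "C"), (900, "A"), (905, "B")]
counts = coactivation_counts(events)
print(likely_same_area(counts))  # [('A', 'B')]
```

Clustering on a statistic like this is one plausible way to bootstrap sensor positions without ever telling the system which room each sensor is in, which is the labor-saving trick the researchers are after.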

Two crucial questions for the smart grid

Who will own the data the industrial Internet generates, and how will users fare under an onslaught of optimization problems?

In a lively panel discussion at last week’s IEEE Industrial Electronics Society meeting in Montreal, two questions related to the smart grid (the prospective electrical distribution system that will set prices dynamically and let consumers sell electricity to other users easily) arose that I think we’ll hear much more about in coming years:

Who will own the data? One important feature of the smart grid will be integration with layers of software at the level of individual machines attached to it — everything from industrial furnaces to home clothes dryers. The idea is that these devices will constantly send data about their usage into a variety of optimization schemes that seek to balance energy usage by adjusting prices and advising power sources on expected demand.

If this data is valuable — and the smart grid’s proponents suggest it is — then someone will find value in capturing it. Who will claim it? Manufacturers might require licenses to decode data from their devices, and data clearinghouses might require that manufacturers license their standards in order to participate. Squabbles over data ownership could delay adoption and hurt systemwide gains.

Industrial users have presumably addressed this question in various ways. Readers who can put their hands on an industrial data usage agreement or two are welcome to send them my way.

Will users be overloaded by decision making? The smart grid promises to balance demand and let flexible users save money through dynamic pricing. Large electricity users already enjoy discounts for electricity at off-peak hours and adjust their work schedules accordingly, but this kind of pricing will soon be available to consumers, and at highly dynamic levels — imagine a display in your laundry room that tells you what it will cost to wash your clothes now and predicts the cost of washing them overnight instead. If the laundry isn’t urgent, the overnight cycle might be an easy choice, but consumers could be besieged by trade-offs to which they’re nearly indifferent. Read more…
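The laundry-room display described above amounts to a tiny optimization: compare the cost of running a load now against a forecast overnight price and recommend the cheaper slot unless the load is urgent. A sketch with assumed prices and energy use (no real tariff is implied):

```python
def cycle_cost(kwh, price_per_kwh):
    """Cost of one appliance cycle at a given electricity price."""
    return kwh * price_per_kwh

def recommend(kwh, price_now, price_overnight, urgent=False):
    """Pick the cheaper time slot, unless the load can't wait."""
    now = cycle_cost(kwh, price_now)
    later = cycle_cost(kwh, price_overnight)
    if urgent or now <= later:
        return "run now", now
    return "run overnight", later

# A 2 kWh wash cycle, peak price $0.30/kWh now vs. $0.12/kWh forecast overnight
print(recommend(2.0, 0.30, 0.12))  # ('run overnight', 0.24)
```

The hard part isn't this arithmetic; it's that a household full of smart appliances would surface dozens of such near-indifferent choices a day, which is exactly why the panelists expect most of these decisions to be delegated to software defaults rather than to people.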