Computational power and cognitive augmentation

A look at a few ways humans mesh with the rest of our data systems.

Editor’s note: this is an excerpt from our new report Data: Emerging Trends and Technologies, by Alistair Croll. Download the free report here.

Here’s a look at a few of the ways that humans — still the ultimate data processors — mesh with the rest of our data systems: how computational power can best produce true cognitive augmentation.

Deciding better

Over the past decade, we fitted roughly a quarter of our species with sensors. We instrumented our businesses, from the smallest market to the biggest factory. We began to consume that data, slowly at first. Then, as we were able to connect data sets to one another, the applications snowballed. Now that both the front office and the back office are plugged into everything, business cares. A lot.

While early adopters focused on sales, marketing, and online activity, data gathering and analysis are now ubiquitous. Governments, activists, mining giants, local businesses, transportation, and virtually every other industry live by data. If an organization isn’t harnessing the data exhaust it produces, it’ll soon be eclipsed by more analytical, introspective competitors that learn and adapt faster.

Whether we’re talking about a single human made more productive by a smartphone-turned-prosthetic-brain, or a global organization gaining the ability to make more informed decisions more quickly, ultimately, Strata + Hadoop World has become about deciding better.

What does it take to make better decisions? How will we balance machine optimization with human inspiration, sometimes making the best of the current game and other times changing the rules? Will machines that make recommendations about the future based on the past reduce risk, raise barriers to innovation, or make us vulnerable to improbable Black Swans because they mistakenly conclude that tomorrow is like yesterday, only more so?

Designing for interruption

Tomorrow’s interfaces won’t be about mobility, or haptics, or augmented reality (AR), or HUDs, or voice activation. I mean, they will be, but that’s just the icing. They’ll be about interruption.

In his book Consilience, E. O. Wilson said: “We are drowning in information…the world henceforth will be run by synthesizers, people able to put together the right information at the right time, think critically about it, and make important choices wisely.” Only it won’t be people doing that synthesis; it’ll be a hybrid of humans and machines. Because after all, the right information at the right time changes your life.

That interruption will take many forms — a voice on a phone, a buzz on a bike handlebar, a heads-up display over actual heads. But behind it is a tremendous amount of context that helps us to decide better.

Right now, there are three companies on the planet that could do this. Microsoft’s Cortana, Google’s Now, and Apple’s Siri are all starting down the path to prosthetic brains. A few others — Samsung, Facebook, and Amazon — might try to make it happen, too. When it finally does happen, it’ll be the fundamental shift of the 21st century, the way machines were in the 19th and computers were in the 20th, because it will create a new species. Call it Homo Conexus.

Add iBeacons and health data to GPS, your calendar, crowdsourced congestion maps, movement, and temperature data, and machines will be more intimate, and more diplomatic, than even the most polished personal assistants.

These agents will empathize better and far more quickly than humans can. Consider two users, Mike and Tammy. Mike hates being interrupted: when his device interrupts and senses his racing pulse and the stress tones in his voice, it will stop. When Tammy’s device interrupts and her pupils dilate in technological lust, it will interrupt more often. Factor in heart rate and galvanic skin response, multiply by a million users with a thousand data points a day, and it’s a simple baby step toward the human-machine hybrid.
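To make the Mike-and-Tammy idea concrete, here is a minimal sketch of how an agent might adapt its interruption rate to biometric feedback. The signal names, thresholds, and update rules are illustrative assumptions for this example, not anything specified in the report.

```python
# Sketch: adjust how often an agent interrupts based on observed stress or
# engagement signals. All field names and thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class BiometricSample:
    heart_rate_bpm: float       # e.g. from a wearable
    galvanic_response: float    # skin conductance, normalized 0..1
    pupil_dilation: float       # relative change; > 0 suggests engagement

class InterruptionPolicy:
    """Per-user model of how welcome interruptions currently are."""

    def __init__(self, base_rate_per_hour: float = 4.0):
        self.rate = base_rate_per_hour

    def update(self, sample: BiometricSample) -> None:
        # Stress signals (racing pulse, high skin conductance) push the
        # rate down, like Mike; engagement signals push it up, like Tammy.
        if sample.heart_rate_bpm > 100 or sample.galvanic_response > 0.7:
            self.rate = max(0.5, self.rate * 0.8)
        elif sample.pupil_dilation > 0.2:
            self.rate = min(12.0, self.rate * 1.1)

    def should_interrupt(self, minutes_since_last: float) -> bool:
        return minutes_since_last >= 60.0 / self.rate


policy = InterruptionPolicy()
policy.update(BiometricSample(heart_rate_bpm=110, galvanic_response=0.8, pupil_dilation=0.0))
print(policy.should_interrupt(minutes_since_last=10))  # False: the user is stressed, so back off
```

Multiply a loop like this by a million users and a thousand data points a day, and the "empathy" is just aggregate feedback tuning a per-user rate.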

Designing for interruption implies fundamentally rethinking many of our networks and applications.

We’ve seen examples of contextual push models in the past. Doc Searls’ suggestion of Vendor Relationship Management (VRM), in which consumers control what they receive by opting in only to what interests them, was a good idea. It came before its time; today, however, a huge and still-growing share of the world’s population carries some kind of push-ready mobile device with a data plan.

The rise of design-for-interruption might also lead to an interruption “arms race” of personal agents trying to filter out all but the most important content, and third-party engines competing to be the most important thing in your notification center.

When I discussed this with Jon Bruner, he pointed out that some of these changes will happen over time, as we make peace with our second brains:

“There’s a process of social refinement that takes place when new things become widespread enough to get annoying. Everything from cars — for which traffic rules had to be invented after a couple years of gridlock — to cell phones (‘guy talking loudly in a public place’ is, I think, a less common nuisance than it used to be) have threatened to overload social convention when they became universal. There’s a strong reaction, and then a reengineering of both convention and behavior results in a moderate outcome.”

This trend leads to fascinating moral and ethical questions:

  • Will a connected, augmented species quickly leave the disconnected in its digital dust, the way humans outstripped Neanderthals?
  • What are the ethical implications of this?
  • Will such brains make us more vulnerable?
  • Will we rely on them too much?
  • Is there a digital equivalent of eminent domain? Or simply the equivalent of an Amber Alert?
  • What kind of damage might a powerful and politically motivated attacker wreak on a targeted nation, and how would this affect productivity or even cost lives?
  • How will such machines “dream” and work on sense-making and garbage collection in the background the way humans do as they sleep?
  • What interfaces are best for human-machine collaboration?
  • And what protections of privacy, unreasonable search and seizure, and legislative control should these prosthetic brains enjoy?

There are also fascinating architectural changes. From a systems perspective, designing for interruption means rethinking many of our networks and applications: architecture shifts from waiting for a request and responding to it, to pushing out “smart” interruptions based on data and context.
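As a rough illustration of that inversion, the sketch below replaces a wait-and-respond endpoint with a background worker that scores incoming context events and pushes an interruption only when the combined score clears a threshold. The event fields, scoring weights, and the push_notification stand-in are assumptions made for the example, not part of the report.

```python
# Sketch: push-based "smart" interruptions. Context producers enqueue events;
# a worker decides centrally whether to interrupt, rather than waiting to be asked.

import queue
import threading
import time

context_events: "queue.Queue[dict]" = queue.Queue()

def push_notification(user_id: str, message: str) -> None:
    # Stand-in for a real push channel (a phone alert, a bike-handlebar buzz, a HUD).
    print(f"[push -> {user_id}] {message}")

def score(event: dict) -> float:
    # Combine a few context signals into one "worth interrupting" number.
    urgency = event.get("urgency", 0.0)          # e.g. a meeting starting in five minutes
    relevance = event.get("relevance", 0.0)      # e.g. a beacon says you're near the gate
    receptivity = event.get("receptivity", 0.5)  # e.g. from the biometric policy above
    return urgency * 0.5 + relevance * 0.3 + receptivity * 0.2

def interruption_worker(threshold: float = 0.6) -> None:
    while True:
        event = context_events.get()
        if score(event) >= threshold:
            push_notification(event["user_id"], event["message"])

threading.Thread(target=interruption_worker, daemon=True).start()

# Producers (location, calendar, traffic feeds) simply enqueue context events.
context_events.put({"user_id": "tammy", "message": "Leave now to make your 3pm",
                    "urgency": 0.9, "relevance": 0.8, "receptivity": 0.7})
time.sleep(0.1)  # let the worker drain the queue in this demo
```

The important shift is where the decision lives: the client no longer polls for answers; the system holds the context and decides when an answer is worth delivering.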
