Big Data and Cognitive Augmentation
Like the Internet in 1994, virtual reality is about to cross the chasm from core technologists to the wider world.
When you’re an entrepreneur or investor struggling to bring a technology to market just a little before its time, being too early can feel exactly the same as being flat wrong. But with a bit more perspective, it’s clear that many of the hottest companies and products in today’s tech landscape are actually capitalizing on ideas that have been tried before — have, in some cases, been tackled repeatedly, and by very smart teams — but whose day has only now just arrived.
Virtual reality (VR) is one of those areas that has seduced many smart technologists in its long history, and its repeated commercial flameouts have left a lot of scar tissue in their wake. Despite its considerable ups and downs, though, the dream of VR has never died — far from it. The ultimate promise of the technology has been apparent for decades now, and many visionaries have devoted their careers to making it happen. But for almost 50 years, these dreams have outpaced the realities of price and performance.
To be fair, VR has come a long way in that time, though largely in specialized, under-the-radar domains that can support very high system costs and large installations; think military training and resource exploration. But the basic requirements for mass-market devices have never been met: low-power computing muscle; large, fast displays; and tiny, accurate sensors. Thanks to the smartphone supply chain, though, all of these components have evolved very rapidly in recent years — to the point where low-cost, high-quality, compact VR systems are now becoming available. Consumer VR really is coming on fast now, and things are getting very interesting. Read more…
The growing complexity of design and architecture will require a new definition of design foundations, practice, and theory.
Editor’s note: This is an excerpt by Matt Nish-Lapidus from our recent book Designing for Emerging Technologies, a collection of works by several authors, edited by Jon Follett. This excerpt is included in our curated collection of chapters from the O’Reilly Design library. Download a free copy of the Designing for the Internet of Things ebook here.

Bruce Sterling wrote in Shaping Things that the world is becoming increasingly connected, and the devices by which we are connecting are becoming smarter and more self-aware. When every object in our environment contains data collection, communication, and interactive technology, how do we as human beings learn to navigate all of this new information? As designers — and humans — we need new tools to work with all of this information and the new devices that create, consume, and store it.
Today, there’s a good chance that your car can park itself. Your phone likely knows where you are. You can walk through the interiors of famous buildings on the web. Everything around us is constantly collecting data, running algorithms, calculating outcomes, and accumulating more raw data than we can handle.
We all carry minicomputers in our pockets, often more than one; public and private infrastructure collects terabytes of data every minute; and personal analytics has become so commonplace that it’s more conspicuous to not collect data about yourself than to record every waking moment. In many ways, we’ve moved beyond Malcolm McCullough’s ideas of ubiquitous computing put forth in Digital Ground and into a world in which computing isn’t only ubiquitous and invisible, but pervasive, constant, and deeply embedded in our everyday lives. Read more…
In the next decade, Year Zero will be how big data reaches everyone and will fundamentally change how we live.
Editor’s note: this post originally appeared on the author’s blog, Solve for Interesting. This lightly edited version is reprinted here with permission.
In 10 years, every human connected to the Internet will have a timeline. It will contain everything we’ve done since we started recording, and it will be the primary tool with which we administer our lives. This will fundamentally change how we live, love, work, and play. And we’ll look back at the time before our feed started — before Year Zero — as a huge, unknowable black hole.
This timeline — beginning for newborns at Year Zero — will be so intrinsic to life that it will quickly be taken for granted. Those without a timeline will be at a huge disadvantage. Those with a good one will have the tricks of a modern mentalist: perfect recall, suggestions for how to curry favor, ease in maintaining friendships and influencing strangers, and unthinkably higher Dunbar numbers — now, every interaction has a history.
This isn’t just about lifelogging health data, like your Fitbit or Jawbone. It isn’t about financial data, like Mint. It isn’t just your social graph or photo feed. It isn’t about commuting data like Waze or Maps. It’s about all of these, together, along with the tools and user interfaces and agents to make sense of it.
Every decade or so, something from military or enterprise technology finds its way, bent and twisted, into the mass market. The client-server computer gave us the PC; wide-area networks gave us the consumer web; pagers and cell phones gave us mobile devices. In the next decade, Year Zero will be how big data reaches everyone. Read more…
Practical machine-learning applications and strategies from experts in active learning.
What do you call a practice that most data scientists have heard of, few have tried, and even fewer know how to do well? It turns out, no one is quite certain what to call it. In our latest free report Real-World Active Learning: Applications and Strategies for Human-in-the-Loop Machine Learning, we examine the relatively new field of “active learning” — also referred to as “human computation,” “human-machine hybrid systems,” and “human-in-the-loop machine learning.” Whatever you call it, the field is exploding with practical applications that are proving the efficiency of combining human and machine intelligence.
Learn from the experts

Through in-depth interviews with experts in the field of active learning and crowdsource management, industry analyst Ted Cuzzillo reveals top tips and strategies for using short-term human intervention to actively improve machine models. As you’ll discover, the point at which a machine model fails is precisely where there’s an opportunity to insert — and benefit from — human judgment.
- When active learning works best
- How to manage crowdsource contributors (including expert-level contributors)
- Basic principles of labeling data
- Best practice methods for assessing labels
- When to skip the crowd and mine your own data
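As a concrete illustration of the pattern the report describes (querying a human only where the model is least certain), here is a minimal uncertainty-sampling loop. The synthetic data, classifier choice, and labeling budget are assumptions for the sketch, not examples from the report:

```python
# Minimal uncertainty-sampling loop: ask a "human" to label only the
# examples the model is least sure about, then retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic pool: two noisy clusters standing in for unlabeled data.
X_pool = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y_true = np.array([0] * 200 + [1] * 200)  # oracle = simulated human labeler

# Seed the model with a handful of labeled points from both classes.
labeled = list(range(0, 400, 100))
unlabeled = [i for i in range(400) if i not in labeled]

model = LogisticRegression()
for _ in range(20):  # budget: 20 questions to the human
    model.fit(X_pool[labeled], y_true[labeled])
    # Uncertainty = predicted probability closest to 0.5.
    probs = model.predict_proba(X_pool[unlabeled])[:, 1]
    query = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
    labeled.append(query)      # the "human" supplies this label...
    unlabeled.remove(query)    # ...and the point leaves the pool

accuracy = model.score(X_pool, y_true)
```

The query step in the loop is exactly where crowdsource contributors or expert labelers come in; the simulated oracle above merely stands in for them.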
Explore real-world examples
This report gives you a behind-the-scenes look at how human-in-the-loop machine learning has helped improve the accuracy of Google Maps, match business listings at GoDaddy, rank top search results at Yahoo!, refer relevant job postings to people on LinkedIn, identify expert-level contributors using the Quizz recruitment method, and recommend women’s clothing based on customer and product data at Stitch Fix. Read more…
A look at a few ways humans mesh with the rest of our data systems.
Here’s a look at a few of the ways that humans — still the ultimate data processors — mesh with the rest of our data systems: how computational power can best produce true cognitive augmentation.
Deciding better

Over the past decade, we fitted roughly a quarter of our species with sensors. We instrumented our businesses, from the smallest market to the biggest factory. We began to consume that data, slowly at first. Then, as we were able to connect data sets to one another, the applications snowballed. Now that both the front office and the back office are plugged into everything, business cares. A lot.
While early adopters focused on sales, marketing, and online activity, today data gathering and analysis are ubiquitous. Governments, activists, mining giants, local businesses, transportation companies, and virtually every other industry live by data. If an organization isn’t harnessing the data exhaust it produces, it’ll soon be eclipsed by more analytical, introspective competitors that learn and adapt faster.
Whether we’re talking about a single human made more productive by a smartphone-turned-prosthetic-brain, or a global organization gaining the ability to make more informed decisions more quickly, ultimately, Strata + Hadoop World has become about deciding better.
What does it take to make better decisions? How will we balance machine optimization with human inspiration, sometimes making the best of the current game and other times changing the rules? Will machines that make recommendations about the future based on the past reduce risk, raise barriers to innovation, or make us vulnerable to improbable Black Swans because they mistakenly conclude that tomorrow is like yesterday, only more so? Read more…
Rajiv Maheswaran talks about the tools and techniques required to analyze new kinds of sports data.
Many data scientists are comfortable working with structured operational data and unstructured text. Newer techniques like deep learning have opened up data types like images, video, and audio.
Other common data sources are garnering attention. With the rise of mobile phones equipped with GPS, I’m meeting many more data scientists at start-ups and large companies who specialize in spatio-temporal pattern recognition. Analyzing “moving dots” requires specialized tools and techniques.
A few months ago, I sat down with Rajiv Maheswaran, founder and CEO of Second Spectrum, a company that applies analytics to sports tracking data. Maheswaran talked about this new kind of data and the challenge of finding patterns:
“It’s interesting because it’s a new type of data problem. Everybody knows that big data machine learning has done a lot of stuff in structured data, in photos, in translation for language, but moving dots is a very new kind of data where you haven’t figured out the right feature set to be able to find patterns from. There’s no language of moving dots, at least not that computers understand. People understand it very well, but there’s no computational language of moving dots that are interacting. We wanted to build that up, mostly because data about moving dots is very, very new. It’s only in the last five years, between phones and GPS and new tracking technologies, that moving data has actually emerged.”
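To make the “no feature set” point concrete: raw tracking data is just a stream of (x, y, t) samples, and even basic descriptors such as distance and speed must be derived before any pattern-finding can begin. A toy sketch (the trajectory and the feature choices here are illustrative assumptions, not Second Spectrum’s methods):

```python
# Derive simple features (distance traveled, speed) from raw (x, y, t)
# tracking samples -- the kind of "moving dots" data described above.
import math

def trajectory_features(points):
    """points: a list of (x, y, t) tuples, sorted by time t."""
    total_dist = 0.0
    speeds = []
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        step = math.hypot(x1 - x0, y1 - y0)  # distance of this segment
        total_dist += step
        if t1 > t0:
            speeds.append(step / (t1 - t0))  # segment speed
    return {
        "distance": total_dist,
        "avg_speed": sum(speeds) / len(speeds) if speeds else 0.0,
        "max_speed": max(speeds) if speeds else 0.0,
    }

# A player moving one unit per second along x for three seconds.
feats = trajectory_features([(0, 0, 0), (1, 0, 1), (2, 0, 2), (3, 0, 3)])
```

Real systems go far beyond this, of course: the hard part is featurizing how many dots interact, which is exactly the missing “language” Maheswaran describes.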
Solutions to a number of problems must be found to unlock PAPI value.
In November, the first International Conference on Predictive APIs and Apps will take place in Barcelona, just ahead of Strata Barcelona. This event will bring together those who are building intelligent web services (sometimes called Machine Learning as a Service) with those who would like to use these services to build predictive apps, which, as defined by Forrester, deliver “the right functionality and content at the right time, for the right person, by continuously learning about them and predicting what they’ll need.”
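The Forrester definition can be made concrete with a deliberately tiny sketch: an app that learns continuously from each user interaction and predicts what that user will need next. This is an illustrative toy, not any particular vendor’s API:

```python
# A predictive app in miniature: continuously learn from each user
# interaction and predict what they'll need next.
from collections import Counter, defaultdict

class PredictiveApp:
    def __init__(self):
        self.history = defaultdict(Counter)  # user -> item usage counts

    def record(self, user, item):
        """Learn from one interaction (the 'continuously learning' step)."""
        self.history[user][item] += 1

    def predict(self, user, default=None):
        """Surface the item this user most likely needs right now."""
        counts = self.history[user]
        return counts.most_common(1)[0][0] if counts else default

app = PredictiveApp()
for item in ["mail", "calendar", "mail", "mail", "maps"]:
    app.record("ada", item)
```

A real predictive API wraps this learn/predict cycle behind a web service, which is where the challenges discussed below come in.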
This is a very exciting area. Machine learning of various sorts is revolutionizing many areas of business, and predictive services like the ones at the center of predictive APIs (PAPIs) have the potential to bring these capabilities to an even wider range of applications. I co-founded one of the first companies in this space (acquired by Salesforce in 2012), and I remain optimistic about the future of these efforts. But the field as a whole faces a number of challenges that must be addressed before this value can be unlocked, and the answers are neither easy nor obvious.
In the remainder of this post, I’ll enumerate what I see as the most pressing issues. I hope that the speakers and attendees at PAPIs will keep these in mind as they map out the road ahead. Read more…
True artificial intelligence will require rich models that incorporate real-world phenomena.
In my last post, we saw that AI means a lot of things to a lot of people. These dueling definitions each have a deep history — ok fine, baggage — that has accumulated and layered over time. While they’re all legitimate, they share a common weakness: each one can apply perfectly well to a system that is not particularly intelligent. As just one example, the chatbot that was recently touted as having passed the Turing test is certainly an interlocutor (of sorts), but it was widely criticized as not containing any significant intelligence.
Let’s ask a different question instead: What criteria must any system meet in order to achieve intelligence — whether an animal, a smart robot, a big-data cruncher, or something else entirely? Read more…