What do you call a practice that most data scientists have heard of, few have tried, and even fewer know how to do well? It turns out that no one is quite certain what to call it. In our latest free report, *Real-World Active Learning: Applications and Strategies for Human-in-the-Loop Machine Learning*, we examine the relatively new field of “active learning,” also referred to as “human computation,” “human-machine hybrid systems,” and “human-in-the-loop machine learning.” Whatever you call it, the field is exploding with practical applications that are proving the efficiency of combining human and machine intelligence.
Learn from the experts

Through in-depth interviews with experts in the fields of active learning and crowdsource management, industry analyst Ted Cuzzillo reveals top tips and strategies for using short-term human intervention to actively improve machine models. As you’ll discover, the point at which a machine model fails is precisely where there’s an opportunity to insert, and benefit from, human judgment. In the report, you’ll learn:
- When active learning works best
- How to manage crowdsource contributors (including expert-level contributors)
- Basic principles of labeling data
- Best practice methods for assessing labels
- When to skip the crowd and mine your own data
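The core idea, that human labeling effort is best spent exactly where the model is least certain, is often implemented as pool-based uncertainty sampling. The following is a minimal sketch of that loop on toy 1-D data; the weighted nearest-neighbour scorer and the `oracle` function (a stand-in for a human annotator) are illustrative assumptions, not anything from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pool of unlabeled 1-D items; the hidden rule is "x > 0.5".
pool_x = rng.random(200)

def oracle(x):
    # Stand-in for a human annotator queried by the loop below.
    return int(x > 0.5)

labeled_x, labeled_y = [0.0, 1.0], [0, 1]   # tiny seed set

def predict_proba(x, xs, ys):
    """Distance-weighted nearest-neighbour estimate of P(label = 1)."""
    xs, ys = np.asarray(xs), np.asarray(ys, dtype=float)
    w = 1.0 / (np.abs(xs - x) + 1e-6)
    return float(np.sum(w * ys) / np.sum(w))

for _ in range(20):
    # Uncertainty sampling: ask the human about the item the model is
    # least sure of, i.e. whose predicted probability sits closest to 0.5.
    probs = np.array([predict_proba(x, labeled_x, labeled_y) for x in pool_x])
    i = int(np.argmin(np.abs(probs - 0.5)))
    labeled_x.append(float(pool_x[i]))
    labeled_y.append(oracle(pool_x[i]))
    pool_x = np.delete(pool_x, i)           # queried item leaves the pool

preds = [predict_proba(x, labeled_x, labeled_y) >= 0.5 for x in pool_x]
accuracy = float(np.mean([p == bool(oracle(x)) for p, x in zip(preds, pool_x)]))
```

Twenty targeted queries concentrate labels near the decision boundary, which is where each additional human judgment buys the most model improvement.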
Explore real-world examples
This report gives you a behind-the-scenes look at how human-in-the-loop machine learning has helped improve the accuracy of Google Maps, match business listings at GoDaddy, rank top search results at Yahoo!, refer relevant job postings to people on LinkedIn, identify expert-level contributors using the Quizz recruitment method, and recommend women’s clothing based on customer and product data at Stitch Fix.
As explained by Stitch Fix’s chief algorithms and analytics officer, Eric Colson:
“Stitch Fix’s expert merchandisers evaluate each new piece of clothing and encode its attributes, both subjective and objective, into structured data, such as color, fit, style, material, pattern, silhouette, brand, price, and trendiness. These attributes are then compared with a customer profile, and the machine produces recommendations based on the model.
“But when the time comes to recommend merchandise to the customer, the machine can’t possibly make the final call. This is where Stitch Fix stylists step in. Stitch Fix hands off a final selection of recommendations to one of roughly 1,000 human stylists, each of whom serves a set of customers.”
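The handoff Colson describes, where a model scores and shortlists items but a human stylist makes the final call, can be sketched as follows. The attribute names, scoring rule, and `stylist_picks` helper are hypothetical illustrations, not Stitch Fix's actual schema or algorithm.

```python
# Hypothetical inventory encoded as structured attributes, as the
# merchandisers' step in the quote produces.
inventory = [
    {"id": "A1", "color": "navy", "fit": "slim",    "price": 78},
    {"id": "B2", "color": "red",  "fit": "relaxed", "price": 54},
    {"id": "C3", "color": "navy", "fit": "relaxed", "price": 49},
]

customer = {"color": "navy", "fit": "relaxed", "max_price": 60}

def score(item, profile):
    """Count attribute matches; items over the price ceiling score zero."""
    matches = sum(item[k] == profile[k] for k in ("color", "fit"))
    return matches if item["price"] <= profile["max_price"] else 0

# Machine step: rank the inventory and hand off a short list.
shortlist = sorted(inventory, key=lambda it: score(it, customer), reverse=True)[:2]

def stylist_picks(shortlist):
    # Stand-in for the human stylist's final call; a real stylist would
    # apply judgment the model can't encode. Here we just take the top item.
    return shortlist[0]

final = stylist_picks(shortlist)
```

The design point is the division of labor: the model narrows thousands of items to a few, and the human spends judgment only on that shortlist.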
In the report, Cuzzillo takes us from fashion recommendations to mapping off-road locations at Google:
“The algorithms collect data from satellite, aerial, and Google’s Street View images, extracting such data as street numbers, speed limits, and points of interest. Yet even at Google, algorithms only get you to a certain point, and then humans need to step in to manually check and correct the data. Google also takes advantage of help from citizens — a different take on ‘crowdsourcing’ — who give input using Google’s Map Maker program to contribute data for off-road locations where Street View cars can’t drive.”
The report also dives into the closely related trend of crowdsourcing — a critical way to quickly label hundreds or even thousands of items that ultimately feed back into an algorithm to improve its performance.
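Redundant crowd labels only become training data after some form of aggregation. A minimal sketch is majority voting with a simple agreement score; the item IDs and labels below are invented, and real deployments typically go further, for example by weighting votes by each annotator's historical accuracy.

```python
from collections import Counter

# Hypothetical raw labels: three crowd annotators per item.
raw_labels = {
    "item-1": ["cat", "cat", "dog"],
    "item-2": ["dog", "dog", "dog"],
    "item-3": ["cat", "dog", "cat"],
}

def majority_vote(votes):
    """Return the winning label and the fraction of annotators who agreed."""
    label, count = Counter(votes).most_common(1)[0]
    agreement = count / len(votes)   # crude confidence signal per item
    return label, agreement

# Consensus labels that would feed back into the model as training data.
consensus = {item: majority_vote(votes) for item, votes in raw_labels.items()}
```

Low-agreement items (like `item-1` and `item-3` above) are natural candidates to route back to expert-level contributors rather than straight into the training set.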