Emotional AI: The human side of machine learning

Insight from a Strata Santa Clara 2014 session

By Kira Radinsky

What goes into winning a Nobel Prize in a field like economics is a lot like what goes into machine learning. To make a breakthrough, you need to identify an interesting theory for explaining the world, then test your theory in practice to see if it holds up; if it does, you’ve got a potential winner. The bigger and more significant the issue addressed by your theory, the more likely you are to win the prize.

In the world of business, there’s no bigger issue than helping a company be more successful, and that usually hinges on helping it deliver its products to those who need them. This is why I like to describe my company, SalesPredict, as helping our customers win the Nobel Prize in business, if such a thing existed.

In a nutshell, the SalesPredict solution helps a company identify its most likely prospects and determine the actions its salespeople should take to win each deal. To accomplish this, we’ve built a platform that consumes data from hundreds of public and private data sources to produce a machine learning model of a given prospect’s likelihood of buying. To make this actionable for salespeople, we boil everything down to a single numerical or letter score that they can sort on, plus a set of actions they should take for each prospect. But invariably, they want to know “Why?”
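To make the scoring idea concrete, here is a minimal sketch of how a model’s predicted probability might be collapsed into a sortable letter grade. The grade thresholds and the scikit-learn-style predict_proba interface are illustrative assumptions of mine, not a description of SalesPredict’s actual system:

```python
def letter_score(buy_probability: float) -> str:
    """Map a predicted probability of buying to a sortable letter grade.

    The thresholds here are arbitrary placeholders for illustration.
    """
    if buy_probability >= 0.75:
        return "A"
    elif buy_probability >= 0.50:
        return "B"
    elif buy_probability >= 0.25:
        return "C"
    return "D"

def score_prospects(model, prospect_features):
    """Rank prospects by a classifier's predicted probability of a won deal.

    Assumes `model` exposes a scikit-learn-style predict_proba that
    returns per-class probabilities, with the "buy" class in column 1.
    """
    probabilities = model.predict_proba(prospect_features)[:, 1]
    ranked = sorted(zip(probabilities, range(len(probabilities))), reverse=True)
    return [(idx, p, letter_score(p)) for p, idx in ranked]
```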

Enter Emotional AI

It turns out that many of the most interesting and challenging problems in machine learning aren’t about algorithms or implementation, but about the human side of machine learning, or what I like to call “emotional AI.” And if you think I’m about to start talking about the need for data scientists to develop soft skills, I’m not. These are challenges demanding deeply technical solutions.

The fact is, many applications of machine learning today do not take into account the most basic human needs of their ultimate users. To be effective, machine learning must address human perception and the emotional nature of our response upon seeing a predicted statistical outcome.

One example of an emotional AI problem that we encountered early on is helping a user understand why one sales prospect was rated an ‘A’ while another was rated a ‘B’. While the “reasons” for these relative rankings are expressions of the data collected by our system (i.e., the “features” in machine learning parlance), we couldn’t just expose these features to end users. Given the complexity of our models, the features and their relationships to the end predictions would likely not be intuitive, or worse, might be counterintuitive. While some practitioners might advocate simply making up the explanations, or making them trivial, our users were telling us that a degree of transparency was necessary for them to begin trusting our system.
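One simple way to provide that kind of transparency is to curate a mapping from model features to plain-language reasons and surface only the top contributors. The feature names, templates, and contribution format below are invented for illustration; this is a sketch of the general approach, not our production logic:

```python
# Hypothetical mapping from model features to human-readable reasons.
REASON_TEMPLATES = {
    "recent_funding_round": "The company recently raised funding.",
    "hiring_in_your_category": "They are hiring in roles your product serves.",
    "website_tech_match": "Their site uses technology your product integrates with.",
}

def explain(feature_contributions, top_n=2):
    """Return plain-language reasons for the top contributing features.

    feature_contributions: dict of feature name -> contribution weight,
    e.g. a linear model's coefficients multiplied by feature values.
    """
    ranked = sorted(feature_contributions.items(),
                    key=lambda kv: kv[1], reverse=True)
    reasons = [REASON_TEMPLATES[name]
               for name, _ in ranked[:top_n]
               if name in REASON_TEMPLATES]
    return reasons or ["No clear explanation available."]

# Usage: the two strongest features become the "Why?" shown to the user.
print(explain({"recent_funding_round": 0.8, "website_tech_match": 0.3}))
```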

Modeling Human Perception

Taking emotional human factors into account leads to a very different result than traditional data science might suggest. Consider what happens in our case when a user rejects a recommendation we make, for example by skipping or dismissing an action that we suggest. The traditional way to interpret this signal is as an indication that the recommendation itself was not “good,” and to tune the model to suppress similar results.
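In code, that traditional interpretation amounts to logging the rejection as a negative label and folding it back into training. The names below are hypothetical, a sketch rather than any particular system’s feedback pipeline:

```python
# Accumulates (features, label) pairs for the next retraining cycle.
training_examples = []

def record_feedback(prospect_features, accepted):
    """Traditional interpretation: accept = positive label, reject = negative."""
    training_examples.append((prospect_features, 1 if accepted else 0))

# On the next training run, the model learns to suppress recommendations
# that resemble the rejected ones.
```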

Considering human perception and emotion suggests an altogether different way of interpreting this signal: perhaps it is the explanation, rather than the recommendation itself, that was faulty. In other words, we failed to get the user excited about calling this prospect! This view is still novel in the machine learning world, but it is consistent with research in other fields, for example the notion of irrational (or emotional) decision-making from behavioral economics.

Without going into the gory technical details, one solution we identified for this problem was to independently model the user’s perception of our explanations using, you guessed it, machine learning. To do this, we use a second recommender to model the user’s perception of our recommendations, and use reinforcement learning techniques to ensure that the user’s feedback helps evolve the perception model.
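To give a flavor of this, here is a much-simplified sketch that treats each candidate explanation style as an arm of a multi-armed bandit, so that user feedback reinforces the explanations that actually persuade. The epsilon-greedy strategy and the style names are stand-ins of my own choosing; the actual perception model is more involved:

```python
import random

class ExplanationBandit:
    """Epsilon-greedy bandit over explanation styles, as a simple stand-in
    for a reinforcement-learned model of the user's perception."""

    def __init__(self, explanation_styles, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {s: 0 for s in explanation_styles}
        self.value = {s: 0.0 for s in explanation_styles}  # est. acceptance rate

    def choose(self):
        """Mostly exploit the best-performing explanation; sometimes explore."""
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))
        return max(self.value, key=self.value.get)

    def update(self, style, accepted):
        """Incrementally update the style's estimated acceptance rate."""
        self.counts[style] += 1
        reward = 1.0 if accepted else 0.0
        self.value[style] += (reward - self.value[style]) / self.counts[style]

# Usage: pick an explanation style for a recommendation, then feed back
# whether the user acted on it.
bandit = ExplanationBandit(["growth_story", "tech_match", "timing"])
style = bandit.choose()
bandit.update(style, accepted=True)
```

The key design point is that the recommendation model and the perception model learn from the same click, but they learn different things: one learns who is likely to buy, the other learns which explanations get salespeople to pick up the phone.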

Building a Predictive Business

If you’re interested in learning more about emotional AI, or how you can apply these and other techniques to build a predictive, and thus more effective, business, I encourage you to follow me on Twitter (@kiraradinsky) or my blog. I’ll be taking on these and other interesting technical topics over the upcoming weeks and would love to have you join the discussion.

I spoke about emotional AI and other challenges faced by the predictive business in my recent talk at Strata 2014. You can watch a summary of my talk here:
