Who do you trust? You are surrounded by bots.

Preview of upcoming session "Who is Fake?" at the Strata Conference

By Lutz Finger 

In The Matrix, the idea of a computer algorithm determining what we think may have seemed far-fetched. Really? Far-fetched? Let’s look at some numbers.

About half of all Americans get their news in digital form. This news is written up by journalists, half of whom at least partially source their stories from social media. They use tools to harvest real-time knowledge from a stream of 100,000 tweets per second and more.

But what if someone could influence those tools and create messages that look as though they were part of a common consensus? Or create the appearance of trending?

Have you ever heard of influencing algorithms? Every second, computer programs create about 300 new websites filled with content just to influence Google’s search algorithm. Spam bots get into our email systems offering us Viagra and other things we never wanted to buy. Of course, these tactics are highly inefficient: for example, to find one potential buyer, a spam bot has to send 12.5 million emails.
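
To put those figures in perspective, here is a quick back-of-the-envelope calculation using only the numbers above:

```python
# Back-of-the-envelope spam economics, using the figures cited above.
sites_per_second = 300           # new spam websites created per second
emails_per_buyer = 12_500_000    # emails a spam bot sends to find one buyer

sites_per_day = sites_per_second * 60 * 60 * 24
conversion_rate = 1 / emails_per_buyer

print(f"Spam websites created per day: {sites_per_day:,}")   # 25,920,000
print(f"Email spam conversion rate: {conversion_rate:.8%}")  # 0.00000800%
```

Almost 26 million throwaway websites a day, and a conversion rate of eight in a hundred million. Spam survives only because sending is nearly free.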

But in time these bots made their way into social networks. In 2009, Twitter was flooded with spam, and 9% of all messages were classified as such. Facebook does not publish numbers, but research in 2010 found that 2% of accounts tested were either corrupted or spam accounts. Some of these threats were all too real: for example, a third of all sex crimes were initiated through social networking sites. Twitter and Facebook eventually reacted and killed these forms of spam.

So far, so good? Unfortunately not! Soon a new breed of bot was born. These Bots 2.0 did not try to sell Viagra to millions who would never buy it anyhow. No, these Bots 2.0 were socially adapted: they reached out and wanted to get to know us. Over 80% of us have received these kinds of unwanted friend requests and messages, and nearly 20% of us let them into our private lives out of curiosity. They then start conversations that are so convincing that 30% of us are fooled, even after we have been warned.

Once these Bots 2.0 become our friends, they try to influence us. Some start phishing attacks with success rates of up to 70% (that’s right, 70%!). Others try to make us buy things. How do they do this? Take online reviews, for example. Some 22% of us trust them, yet research shows that 30% to 40% of online reviews are fakes written by spammers. These Bots 2.0 can be powerful because they aim to influence the information we consume. They are easy to build but hard to detect. Enough of them could even start an astroturf (i.e., fake grassroots) movement.

Can you spot these fakes?

Sometimes.

The best detection method is to know what kind of “influencing attack” to expect and to build machine learning approaches that can find and neutralize these Bots 2.0. Some bots might tweet too regularly, others might send updates that are too one-sided, and still others might have a social network that did not develop “normally”. Viewed as a whole group, they can reveal coordinated attempts to influence.
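
As a minimal sketch of that idea, the hypothetical helper below scores an account on the three signals just mentioned: how clockwork-regular its posting gaps are, how one-sided its messages are, and how uniformly its follower count grows. The function name, inputs, and example values are illustrative assumptions, not a production detector:

```python
import statistics

def bot_signals(post_times, messages, weekly_followers):
    """Heuristic signals for spotting a Bot 2.0 (illustrative only).

    post_times       -- posting timestamps in seconds, sorted ascending
    messages         -- list of the account's message texts
    weekly_followers -- follower counts sampled once per week
    """
    # 1. Too-regular tweeting: humans post at irregular intervals, so a
    #    near-zero spread of the gaps between posts is suspicious.
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    gap_spread = statistics.pstdev(gaps) / statistics.mean(gaps)

    # 2. One-sided updates: a low share of distinct messages points to
    #    scripted, repetitive content.
    diversity = len(set(messages)) / len(messages)

    # 3. Abnormal network growth: follower counts that climb in perfectly
    #    uniform steps rarely come from organic activity.
    growth = [b - a for a, b in zip(weekly_followers, weekly_followers[1:])]
    growth_spread = statistics.pstdev(growth) / statistics.mean(growth)

    return {"gap_spread": gap_spread,
            "diversity": diversity,
            "growth_spread": growth_spread}

# A suspiciously clockwork account: hourly posts, repeated slogans,
# and followers arriving in identical weekly batches.
print(bot_signals(
    post_times=[0, 3600, 7200, 10800, 14400],
    messages=["Buy now!", "Buy now!", "Great deal!", "Buy now!"],
    weekly_followers=[100, 600, 1100, 1600],
))
# Values near zero for gap_spread and growth_spread, plus low diversity,
# are exactly the patterns a detector would flag.
```

In practice, features like these would feed a classifier trained on known bot and human accounts, and accounts would also be compared as a group to catch coordination.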

In the years to come, we will see an arms race between networks trying to protect their integrity and others trying to get into those networks. This race has already outgrown the ordinary spammer: influencing what people think has become a part of modern warfare.

The quest for truth is more important than ever.

To learn more about Bots 2.0, come to the Strata session “Who is fake?”.  The session has been rescheduled for February 28th, 10:40am (Pacific Time), located in the Mission City Ballroom.

Lutz Finger is one of the authors of an upcoming book from O’Reilly on how to use social media data mining to improve businesses. Register for an early copy at MiningData.biz
