"artificial intelligence" entries

Artificial intelligence?

AI scares us because it could be as inhuman as humans.


Elon Musk started a trend. Ever since he warned us about artificial intelligence, all sorts of people have been jumping on the bandwagon, including Stephen Hawking and Bill Gates.

Although I believe we’ve entered the age of postmodern computing, when we don’t trust our software, and write software that doesn’t trust us, I’m not particularly concerned about AI. AI will be built in an era of distrust, and that’s good. But there are some bigger issues here that have nothing to do with distrust.

What do we mean by “artificial intelligence”? We like to point to the Turing test; but the Turing test includes an all-important Easter egg: when someone asks Turing’s hypothetical computer to do some arithmetic, the answer it returns is incorrect. An AI might be a cold calculating engine, but if it’s going to imitate human intelligence, it has to make mistakes. Not only can it make mistakes, it can (indeed, must) be deceptive, misleading, evasive, and arrogant if the situation calls for it.

That’s a problem in itself. Turing’s test doesn’t really get us anywhere. It holds up a mirror: if a machine looks like us (including mistakes and misdirections), we can call it artificially intelligent. That raises the question of what “intelligence” is. We still don’t really know. Is it the ability to perform well on Jeopardy? Is it the ability to win chess matches? These accomplishments help us define what intelligence isn’t: it’s certainly not the ability to win at chess or Jeopardy, or even to recognize faces or make recommendations. But they don’t help us determine what intelligence actually is. And if we don’t know what constitutes human intelligence, why are we even talking about artificial intelligence?


Our future sits at the intersection of artificial intelligence and blockchain

The O'Reilly Radar Podcast: Steve Omohundro on AI, cryptocurrencies, and ensuring a safe future for humanity.



I met up with Possibility Research president Steve Omohundro at our Bitcoin & the Blockchain Radar Summit to talk about an interesting intersection: artificial intelligence (AI) and blockchain/cryptocurrency technologies. This Radar Podcast episode features our discussion about the role cryptocurrency and blockchain technologies will play in the future of AI, Omohundro’s Self Aware Systems project that aims to ensure intelligent technologies are beneficial for humanity, and his work on the Pebble cryptocurrency.

Synthesizing AI and crypto-technologies

Bitcoin piqued Omohundro’s interest from the very start, but his excitement built as he started realizing the disruptive potential of the technology beyond currency — especially the potential for smart contracts. He began seeing ways the technology will intersect with artificial intelligence, the area of focus for much of his work:

I’m very excited about what’s happening with the cryptocurrencies, particularly Ethereum. I would say Ethereum is the most advanced of the smart contracting ideas, and there’s just a flurry of insights, and people are coming up every week with, ‘Oh we could use it to do this.’ We could have totally autonomous corporations running on the blockchain that copy what Uber does, but much more cheaply. It’s like, ‘Whoa what would that do?’

I think we’re in a period of exploration and excitement in that field, and it’s going to merge with the AI systems because programs running on the blockchain have to connect to the real world. You need to have sensors and actuators that are intelligent, have knowledge about the world, in order to integrate them with the smart contracts on the blockchain. I see a synthesis of AI and cryptocurrencies and crypto-technologies and smart contracts. I see them all coming together in the next couple of years.



Exploring methods in active learning

Tips on how to build effective human-machine hybrids, from crowdsourcing expert Adam Marcus.

In a recent O’Reilly webcast, “Crowdsourcing at GoDaddy: How I Learned to Stop Worrying and Love the Crowd,” Adam Marcus explains how to mitigate common challenges of managing crowd workers, how to make the most of human-in-the-loop machine learning, and how to establish effective and mutually rewarding relationships with workers. Marcus is the director of data on the Locu team at GoDaddy, where the “Get Found” service provides businesses with a central platform for managing their online presence and content.

In the webcast, Marcus uses practical examples from his experience at GoDaddy to reveal helpful methods for how to:

  • Offset the inevitability of wrong answers from the crowd
  • Develop and train workers through a peer-review system
  • Build a hierarchy of trusted workers
  • Make crowd work inspiring and enable upward mobility

What to do when humans get it wrong

It turns out there is a simple way to offset human error: redundantly ask people the same questions. Marcus explains that when you ask five different people the same question, there are creative ways to combine their responses, the simplest being a majority vote.
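
The webcast doesn’t show Marcus’s exact aggregation code, but as a minimal sketch, majority voting over redundant crowd answers might look like this in Python (the function name and the agreement-ratio heuristic are illustrative, not from the webcast):

```python
from collections import Counter

def combine_answers(answers):
    """Combine redundant answers to one question by majority vote.

    `answers` is a list of responses from different workers.
    Returns the winning answer and the fraction of workers who agreed,
    which serves as a rough confidence signal (5/5 is far more
    trustworthy than 3/5).
    """
    counts = Counter(answers)
    winner, votes = counts.most_common(1)[0]
    return winner, votes / len(answers)

# Five workers label the same business category:
label, confidence = combine_answers(
    ["restaurant", "restaurant", "cafe", "restaurant", "restaurant"]
)
# label == "restaurant", confidence == 0.8
```

In practice, a majority vote like this is usually weighted by each worker’s track record, which is where the trusted-worker hierarchy from the list above comes in.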


Beyond AI: artificial compassion

If what we are trying to build is artificial minds, intelligence might be the smaller, easier part.

When we talk about artificial intelligence, we often make an unexamined assumption: that intelligence, understood as rational thought, is the same thing as mind. We use metaphors like “the brain’s operating system” or “thinking machines,” without always noticing their implicit bias.

But if what we are trying to build is artificial minds, we need only look at a map of the brain to see that in the domain we’re tackling, intelligence might be the smaller, easier part.

Maybe that’s why we started with it.

After all, the rational part of our brain is a relatively recent add-on. Setting aside unconscious processes, most of our gray matter is devoted not to thinking, but to feeling.

There was a time when we deprecated this larger part of the mind, as something we should either ignore or, if it got unruly, control.

But now we understand that, as troublesome as they may sometimes be, emotions are essential to being fully conscious. For one thing, as neurologist Antonio Damasio has demonstrated, we need them in order to make decisions. A certain kind of brain damage leaves the intellect unharmed, but removes the emotions. People with this affliction tend to analyze options endlessly, never settling on a final choice.


Artificial intelligence: summoning the demon

We need to understand that our own intelligence is the competition for our artificial, not-quite intelligences.


A few days ago, Elon Musk likened artificial intelligence (AI) to “summoning the demon.” As I’m sure you know, there are many stories in which someone summons a demon. As Musk said, they rarely turn out well.

There’s no question that Musk is an astute student of technology. But his reaction is misplaced. There are certainly reasons for concern, but they’re not Musk’s.

The problem with AI right now is that its achievements are greatly over-hyped. That’s not to say those achievements aren’t real, but they don’t mean what people think they mean. Researchers in deep learning are happy if they can recognize human faces with 80% accuracy. (I’m skeptical about claims that deep learning systems can reach 97.5% accuracy; I suspect that the problem has been constrained in some way that makes it much easier. For example, asking “is there a face in this picture?” or “where is the face in this picture?” is very different from asking “what is in this picture?”) That’s a hard problem, a really hard problem. But humans recognize faces with nearly 100% accuracy. For a deep learning system, that’s an almost inconceivable goal. And 100% accuracy is orders of magnitude harder than 80% accuracy, or even 97.5%.


Small brains, big data

How neuroscience is benefiting from distributed computing — and how computing might learn from neuroscience.


When we think about big data, we usually think about the web: the billions of users of social media, the sensors on millions of mobile phones, the thousands of contributions to Wikipedia, and so forth. Due to recent innovations, web-scale data can now also come from a camera pointed at a small, but extremely complex object: the brain. New progress in distributed computing is changing how neuroscientists work with the resulting data — and may, in the process, change how we think about computation.


In search of a model for modeling intelligence

True artificial intelligence will require rich models that incorporate real-world phenomena.


An orrery, a runnable model of the solar system that allows us to make predictions. Photo: Wikimedia Commons.

Editor’s note: this post is part of our Intelligence Matters investigation.

In my last post, we saw that AI means a lot of things to a lot of people. These dueling definitions each have a deep history — ok fine, baggage — that has amassed and layered over time. While they’re all legitimate, they share a common weakness: each one can apply perfectly well to a system that is not particularly intelligent. As just one example, the chatbot that was recently touted as having passed the Turing test is certainly an interlocutor (of sorts), but it was widely criticized as not containing any significant intelligence.

Let’s ask a different question instead: What criteria must any system meet in order to achieve intelligence — whether an animal, a smart robot, a big-data cruncher, or something else entirely?


AI’s dueling definitions

Why my understanding of AI is different from yours.


SoftBank’s Pepper, a humanoid robot that takes its surroundings into consideration.

Editor’s note: this post is part of our Intelligence Matters investigation.

Let me start with a secret: I feel self-conscious when I use the terms “AI” and “artificial intelligence.” Sometimes, I’m downright embarrassed by them.

Before I get into why, though, answer this question: what pops into your head when you hear the phrase artificial intelligence?

For the layperson, AI might still conjure HAL’s unblinking red eye, and all the misfortune that ensued when he became so tragically confused. Others jump to the replicants of Blade Runner or more recent movie robots. Those who have been around the field for some time, though, might instead remember the “old days” of AI — whether with nostalgia or a shudder — when intelligence was thought to primarily involve logical reasoning, and truly intelligent machines seemed just a summer’s work away. And for those steeped in today’s big-data-obsessed tech industry, “AI” can seem like nothing more than a high-falutin’ synonym for the machine-learning and predictive-analytics algorithms that are already hard at work optimizing and personalizing the ads we see and the offers we get — it’s the term that gets trotted out when we want to put a high sheen on things.


Untapped opportunities in AI

Some of AI's viable approaches lie outside the organizational boundaries of Google and other large Internet companies.

Editor’s note: this post is part of an ongoing series exploring developments in artificial intelligence.

Here’s a simple recipe for solving crazy-hard problems with machine intelligence. First, collect huge amounts of training data — probably more than anyone thought sensible or even possible a decade ago. Second, massage and preprocess that data so the key relationships it contains are easily accessible (the jargon here is “feature engineering”). Finally, feed the result into ludicrously high-performance, parallelized implementations of pretty standard machine-learning methods like logistic regression, deep neural networks, and k-means clustering (don’t worry if those names don’t mean anything to you — the point is that they’re widely available in high-quality open source packages).
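
As a minimal sketch of that three-step recipe, here is what it might look like with scikit-learn, one of the open source packages the paragraph alludes to. The data here is a synthetic stand-in, and the feature step is reduced to simple scaling; real feature engineering is far more involved:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Step 1: "huge amounts of training data" -- a synthetic stand-in here:
# 1,000 examples with 20 features, labeled by a hidden linear rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Step 2: feature engineering -- reduced here to standardizing features.
# Step 3: a standard, widely available learner (logistic regression).
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
print(round(model.score(X, y), 3))  # training accuracy
```

The point of the recipe isn’t the sophistication of the final step — logistic regression is decades old — but the scale of data and the quality of features fed into it.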

Google pioneered this formula, applying it to ad placement, machine translation, spam filtering, YouTube recommendations, and even the self-driving car — creating billions of dollars of value in the process. The surprising thing is that Google isn’t made of magic. Instead, mirroring Bruce Schneier’s surprised conclusion about the NSA in the wake of the Snowden revelations, “its tools are no different from what we have in our world; it’s just better funded.”


“It works like the brain.” So?

There are many ways a system can be like the brain, but only a fraction of these will prove important.

Editor’s note: this post is part of an ongoing series exploring developments in artificial intelligence.

Here’s a fun drinking game: take a shot every time you find a news article or blog post that describes a new AI system as working or thinking “like the brain.” Here are a few to start you off with a nice buzz; if your reading habits are anything like mine, you’ll never be sober again. Once you start looking for this phrase, you’ll see it everywhere — I think it’s the defining laziness of AI journalism and marketing.

Surely these claims can’t all be true? After all, the brain is an incredibly complex and specific structure, forged in the relentless pressure of millions of years of evolution to be organized just so. We may have a lot of outstanding questions about how it works, but work a certain way it must.
