Artificial intelligence may never match the brain
The problem is that deep learning has no way of checking its deductions against “common sense,” and so can make ridiculous errors. It is, say Marcus and Davis, “a kind of idiot savant, with miraculous perceptual abilities, but very little overall comprehension.” In image classification, not only can this shortcoming lead to absurd results but the system can also be fooled by carefully constructed “adversarial” examples. Pixels can be rejigged in ways that, to us, look indistinguishable from the original but which AI confidently garbles, so that a van or a puppy is declared an ostrich. By the same token, images can be constructed from what looks to the human eye like random pixels but which AI will identify as an armadillo or a peacock.
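To make the trick concrete, here is a minimal sketch of one well-known recipe for building such adversarial images, the “fast gradient sign method”: every pixel is nudged by a tiny amount in whichever direction most increases the classifier’s error. The sketch assumes a PyTorch image classifier (`model`), an input `image` tensor scaled to the range 0–1 and its true `label`; none of these names come from the books under review.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Fast gradient sign method: shift each pixel slightly in the
    direction that most increases the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Each pixel moves by at most epsilon -- too small for a human eye
    # to notice, yet the combined shift can flip the predicted class.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

With a small enough `epsilon`, the doctored image is visually identical to the original, yet the network can assert, with high confidence, that the puppy is an ostrich.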
These blind spots become particularly troubling when AI slavishly recreates human biases—for example, when camera image-processors insist that someone with East Asian eyes must have “blinked.” Mitchell, like Marcus and Davis, warns that the dangers of AI lie not in Skynet-style robot takeovers but in the unthinking application of inadequate systems. Even if an AI system performs well 99 per cent of the time, the occasional failure could be catastrophic, especially if it is being used to drive a car or make a medical diagnosis.
The trouble is, though, it’s not obvious how to do better. These authors argue—and it’s a view widely held among AI researchers—that we need to make systems that think more like humans. But what does that mean?