
AI is still half-baked

09 February 2018


Deep learning is shallower and more hyped than Italian television

AI is set to be a massive disappointment for those who think it is going to take over the world.

While there have been remarkable advances in AI after decades of frustration, there are still too many things that people can do quickly that smart machines cannot.

For example, natural language is beyond deep learning. Sure, AI machine translators are great tools, but they are leagues behind a competent human translator and will remain that way for decades. AI also cannot handle new situations.

Jason Pontin, a senior partner at Flagship Pioneering, a Boston firm that creates, builds, and funds companies solving problems in health, food, and sustainability, has written in Wired that AI is good at a few things but terrible at others.

“Deep learning’s advances are the product of pattern recognition: neural networks memorise classes of things and more-or-less reliably know when they encounter them again. But almost all the interesting problems in cognition aren’t classification problems at all.”

Google researcher François Chollet said that people naively believe that if you take deep learning and scale it up with 100 times more layers and 1,000 times more data, a neural net will be able to do anything a human being can do… but that’s just not true.

Gary Marcus, a professor of cognitive psychology at NYU and briefly director of Uber’s AI lab, recently published a trilogy of essays blasting deep learning.

He said that deep learning was not “a universal solvent, but one tool among many”. And without new approaches, Marcus worries that AI is rushing toward a wall, beyond which lie all the problems that pattern recognition cannot solve.

Deep learning is greedy, brittle, opaque, and shallow. The systems are greedy because they demand vast sets of training data. They are brittle because when a neural net is given a “transfer test”, confronted with scenarios that differ from the examples used in training, it cannot contextualise the situation and frequently breaks.
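To make the “brittle” point concrete, here is a minimal sketch, our own toy example rather than anything from Marcus or Pontin, in plain Python and numpy: a logistic-regression classifier leans on a shortcut feature that is almost perfectly correlated with the label in training, and when a “transfer test” set removes that shortcut, its accuracy drops far below what it managed in training.

```python
# Toy illustration (not from the article): a classifier trained on data with a
# handy shortcut feature breaks when the shortcut disappears at test time.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shortcut_intact=True):
    x1 = rng.normal(size=n)                                # the real signal
    y = (x1 + 0.5 * rng.normal(size=n) > 0).astype(float)  # noisy label from x1
    if shortcut_intact:
        x2 = (2 * y - 1) + 0.1 * rng.normal(size=n)        # shortcut: almost the label itself
    else:
        x2 = rng.normal(size=n)                            # "transfer test": shortcut gone
    return np.column_stack([x1, x2]), y

def train_logreg(X, y, lr=0.1, steps=2000):
    # Plain gradient descent on logistic loss.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == (y > 0.5))

X_train, y_train = make_data(2000)
X_test, y_test = make_data(2000, shortcut_intact=False)

w, b = train_logreg(X_train, y_train)
print("training accuracy:", accuracy(w, b, X_train, y_train))   # typically close to 1.0
print("transfer-test accuracy:", accuracy(w, b, X_test, y_test))  # typically much closer to a coin flip
```

Nothing about the real signal changed between the two data sets; only the shortcut did, which is exactly the kind of shift that trips these systems up.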

Unlike traditional programs with their formal, debuggable code, the parameters of neural networks can be interpreted only in terms of their weights within a mathematical geography. Consequently, they are black boxes whose outputs cannot be explained, raising doubts about their reliability and biases. Finally, they are shallow because they are programmed with little innate knowledge and possess no common sense about the world or human psychology.
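Again purely as an illustration of our own, and not something from the article: if you train even a small off-the-shelf network (here scikit-learn’s MLPClassifier, our choice) and then go looking for an explanation of one of its predictions, all you can actually inspect is matrices of weights.

```python
# Toy illustration of the "opaque" point: the trained model offers nothing
# human-readable, only arrays of numbers.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X, y)

print("prediction for one point:", net.predict([[0.5, 0.0]]))

# The only "explanation" on offer is the raw parameters:
for i, w in enumerate(net.coefs_):
    print(f"layer {i}: weight matrix of shape {w.shape}")
print("a few raw weights:", net.coefs_[0].ravel()[:5].round(3))
# Nothing in these numbers says why that point was given its label.
```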

Pedro Domingos, a professor of computer science at the University of Washington, said that a self-driving car could drive millions of miles, but it will eventually encounter something new for which it has no experience. Of course, a driver in Rome or Sofia encounters these random events every ten minutes, so we suspect the AI driving unit would explode.

The theory is that we humans might have a better learning algorithm in our heads than anything we’ve come up with for machines.
