Why humans will always be smarter than artificial intelligence

Phil Wainewright, February 15, 2018
Machines are good at pattern matching but our ability to learn unlimited patterns means humans will always be smarter than artificial intelligence

[Image: toy robot in front of blurred keyboard and code © Patrick Daxenbichler]
Not for the first time in its history, artificial intelligence is rising on a tide of hype. Improvements to the technology have produced some apparently impressive advances in fields such as voice and image recognition, predictive pattern analysis and autonomous robotics. The problem is that people are extrapolating many unrealistic expectations from these initial successes, without recognizing the many constraints surrounding their achievements.

Machine intelligence is still pretty dumb, most of the time. It's far too early for the human race to throw in the towel.

People are misled by artificial intelligence because of a phenomenon known as the Eliza effect, named after a 1966 computer program that responded to people's typed statements in the manner of a psychotherapist. The computer was executing some very simple programming logic. But the people interacting with it ascribed emotional intelligence and empathy to its replies.
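The "very simple programming logic" behind ELIZA was essentially keyword matching and reflection: find a trigger phrase in the user's statement and echo their own words back inside a canned question. The sketch below is not Weizenbaum's original script, just a minimal illustration of the idea with invented rules:

```python
import re

# Minimal ELIZA-style sketch: match a keyword pattern in the user's
# statement and reflect their words back as a therapist-like question.
# The rules here are invented for illustration, not Weizenbaum's script.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(statement):
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # stock reply when no keyword matches

print(respond("I am unhappy about my job"))
# → "Why do you say you are unhappy about my job?"
```

A handful of rules like these is enough to produce replies that feel empathetic, which is exactly why people ascribed emotional intelligence to the program.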

The same phenomenon happens today in our reactions to the apparent successes of machine learning and AI. We overestimate its achievements and underestimate our own performance because we rarely stop to think how much we already know. All of the context we bring to interpreting any situation is something we take for granted.

Machines are good at pattern matching

Computers are much better than us at only one thing: matching known patterns. They can only match the patterns they have learned, and they have limited capacity to learn more than a few. Humans, by contrast, are optimized for learning unlimited patterns, and for selecting which ones to apply to whatever situation we find ourselves in. It's a skill honed by millions of years of evolution.

This is why BuzzFeed writer Katie Notopoulos was able to crack Facebook's new algorithm and wind up her friends the other week. She successfully matched a pattern in a way that Facebook's algorithm couldn't fathom. As she explains, the Facebook machine doesn't really know what it's doing; the best it can do is try to match patterns that it's been told look like friendship:

This algorithm doesn’t understand friendship. It can fake it, but when we see Valentine’s Day posts on Instagram four days later, or when the machines mistake a tornado of angry comments for 'engagement', it’s a reminder that the machines still don't really get the basics of humanity.

This echoes Douglas Hofstadter's far more erudite takedown of AI for The Atlantic last month, The Shallowness of Google Translate. If you understand both French and English, then just savor for a moment this put-down of Google's translation skills:

Clearly Google Translate didn’t catch my meaning; it merely came out with a heap of bull. 'Il sortait simplement avec un tas de taureau.' 'He just went out with a pile of bulls.' 'Il vient de sortir avec un tas de taureaux.' Please pardon my French — or rather, Google Translate’s pseudo-French.

A takedown of Google Translate

Hofstadter is generous enough to acknowledge Google's achievement in building an engine capable of converting text between any of around 100 languages by coining the term 'bai-lingual' — "'bai' being Mandarin for 100" — yet thoroughly demolishes its claim to be performing anything truly intelligent:

The bailingual engine isn’t reading anything — not in the normal human sense of the verb 'to read'. It’s processing text. The symbols it’s processing are disconnected from experiences in the world. It has no memories on which to draw, no imagery, no understanding, no meaning residing behind the words it so rapidly flings around.

A friend asked me whether Google Translate’s level of skill isn’t merely a function of the program’s database. He figured that if you multiplied the database by a factor of, say, a million or a billion, eventually it would be able to translate anything thrown at it, and essentially perfectly. I don’t think so. Having ever more 'big data' won’t bring you any closer to understanding, since understanding involves having ideas, and lack of ideas is the root of all the problems for machine translation today.

Enterprises are constantly encountering the limitations of that lack of ideas in their quest to apply machine learning and artificial intelligence to today's business problems. Last year I listened to a presentation at the Twilio Signal conference in London by Sahil Dua, a back-end developer at a travel reservation site. He spoke about the work the site has been doing with machine learning to autonomously tag images.

Of course we all know that the likes of Google, Amazon and Microsoft Azure already offer generic image tagging services. But the problem the site encountered was that those services don't tag images in a way that's useful in its context. They may identify attributes such as 'ocean', 'nature' or 'apartment', but the site needs to know whether there's a sea view, whether there's a balcony and whether it has a seating area, whether there's a bed in the room and what size it is, and so on. Dua and his colleagues have had to train the machines to work with a more detailed set of tags that matches their specific context.
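The gap Dua describes can be made concrete. Suppose a generic tagging service returns broad labels while the business needs answers to domain-specific questions. The sketch below uses invented tag and attribute names to show how little the generic labels actually answer:

```python
# Hypothetical illustration: generic image tags vs. domain-specific
# attributes. The tag names and attribute list are invented examples.

GENERIC_TAGS = {"ocean", "nature", "apartment"}  # what a generic service might return

# Questions a travel site actually needs answered about each image.
DOMAIN_ATTRIBUTES = ["sea_view", "balcony", "seating_area", "bed_in_room"]

def answerable_from_generic(tags):
    """Infer what we can from generic labels alone.

    'ocean' weakly suggests a sea view, but nothing in a generic label
    set distinguishes a balcony from a seating area or tells you the
    bed size; those attributes need a custom-trained model.
    """
    inferred = {}
    for attr in DOMAIN_ATTRIBUTES:
        if attr == "sea_view" and "ocean" in tags:
            inferred[attr] = True  # weak inference at best
        else:
            inferred[attr] = None  # unanswerable from generic tags
    return inferred

print(answerable_from_generic(GENERIC_TAGS))
```

Most attributes come back as unanswerable, which is why the team had to train models against their own, more detailed tag set rather than reuse the generic services.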

Why humans will always be smarter than AI

This concept of context is one that is central to Hofstadter's lifetime of work to figure out AI. In a seminal 1995 essay he examines an earlier treatise on pattern recognition by Mikhail Bongard, a Russian researcher, and comes to the conclusion that perception goes beyond simply matching known patterns:

... in strong contrast to the usual tacit assumption that the quintessence of visual perception is the activity of dividing a complex scene into its separate constituent objects followed by the activity of attaching standard labels to the now-separated objects (ie, the identification of the component objects as members of various pre-established categories, such as 'car', 'dog', 'house', 'hammer', 'airplane', etc)

... perception is far more than the recognition of members of already-established categories — it involves the spontaneous manufacture of new categories at arbitrary levels of abstraction.

For a narrow application like hotel image tagging, those new categories could be defined in advance, but a more general-purpose AI would have to be capable of defining its own categories. That's a goal Hofstadter has spent six decades working towards, and he is still not even close.

In her BuzzFeed article, Katie Notopoulos goes on to explain that this is not the first time that Facebook's recalibration of the algorithms driving its newsfeeds has resulted in anomalous behavior. Today, it's commenting on posts that leads to content being overpromoted. Back in the summer of 2016, it was simple text posts. What's interesting is that the solution was not a new tweak to the algorithm; it was Facebook users who adjusted. People learned to post text posts, and that made them less rare.

And that's always going to be the case. People will always be faster to adjust than computers, because that's what humans are optimized to do. Maybe sometime many years in the future, computers will catch up with humanity's ability to define new categories — but in the meantime, humans will have learned how to harness computing to augment their own native capabilities. That's why we will always stay smarter than AI.
