Why is AI harder than we think?

Neil Raden, July 16, 2021
Summary:
A new paper explains why AI is harder than we think - and predicts another AI Winter on the horizon. How well does the argument hold up? Let's dig in.


"Why AI is harder than we think" - that's the title of a recent paper by Melanie Mitchell at the Santa Fe Institute. 

The paper is in two parts. The first explores the history of AI, beginning with McCarthy and the 1956 Dartmouth workshop and running to the present. She describes how AI's progress has been punctuated rather than steady, and her account seems plausible: AI bursts out with dramatic advances, then fails to live up to the hype and hibernates for a few years - the so-called AI Winter.

Mitchell predicts that we're heading into another AI Winter because the hype over AI has outrun its capabilities. She goes into detail about problems with "Shortcut Learning," which I'll address below after describing the second half of the paper, which covers what she calls the four fallacies of AI:

  1. Narrow intelligence is on a continuum with general intelligence: The fallacy is assuming that advances in narrow AI are "first steps" toward AGI (Artificial General Intelligence). Mitchell questions that assumption; narrow systems still lack the common-sense knowledge general intelligence requires.
  2. Easy things are easy, and hard things are hard: Actually, the tasks that are easy for most humans are often hard to replicate in machines.
  3. "The lure of wishful mnemonics": Names used in the AI field, such as "Stanford Question Answering Dataset" for a question-answering benchmark, give the impression that AI programs that do well on a benchmark are doing the underlying task the benchmark is designed to approximate, even though that task requires general intelligence.
  4. Intelligence is all in the brain: Here, Mitchell questions the common assumption that "intelligence can in principle be 'disembodied,'" or separated conceptually from the rest of the organism it occupies because it is simply a form of information processing. Instead, evidence from neuroscience, psychology, and other disciplines suggests that human cognition is deeply integrated with the rest of the nervous system.

There are insidious problems with AI beyond the issues of bias and bad data. It starts with a lack of understanding of how AI works. For example, have you ever heard of a digital technology that didn't have "bugs"? AI has them too, but they tend to be inscrutable. How do you find them? If you're lucky, you will notice errant predictions or classifications that are conflicting or irrational. Other times you won't notice them at all. One approach you should not take is trial-and-error. It's better to understand what is happening in your models.

Here is an analogy, drawn from my own experience of wasting time and money instead of using the proper diagnostic equipment. Suddenly the dashboard lit up in my car, and it went into limp mode, meaning something was wrong with the engine and it would not rev past 600 RPM. I tried an old trick with these hyper-computerized cars: disconnecting the negative battery terminal and letting it sit for ten minutes, hoping that reconnecting it would reset whatever code was causing it to limp. It didn't work. I limped home and put the car in the garage. The first thing I noticed was that the bank of six cylinders on the right side was not functioning, while the bank on the left side was operating in a normal range. My first thought was to swap the "brains," the twin engine-management computers, to see if I could reproduce the problem on the other side. It did reproduce. Problem solved, I thought: one of the computers was faulty. I ordered a new one for about $700. When it came, I installed it, and nothing changed. Then it must be the spark plug wires. A new set for this engine was over $800. Still nothing changed.

It had been almost a month of frustration, so I decided just to take a close look at everything. I started with the right-side turbocharger, and I noticed a small crack in the plastic charge pipe that carries the boosted air from the turbocharger and the intercooler into the throttle assembly. My OBDII scanner did not have current codes for this model, so I ordered one from the dealer ($1,200). I plugged it in and, voila, it threw a reduced-boost-pressure code that my old scanner had never shown.

The moral of the story is that I didn't solve the problem until the fifth trial-and-error attempt, because I didn't understand what was causing this cascade of failures and I didn't have the right tool to diagnose it.

The point of this long-winded story is that AI, including machine learning and deep learning, involves the development of algorithms that create predictive models from data. We all know this. But perhaps not so well understood is that the field is typically inspired by statistics rather than by neuroscience or psychology. The goal is to perform specific tasks rather than to capture general intelligence. It is vital to understand precisely what the model can tell you and why. This absence of fundamental understanding in machine learning and deep learning can cause unpredictable errors when a model faces situations that differ from its training data. Machine learning, neural nets and deep learning do not learn concepts; instead, they learn shortcuts that connect inputs to answers in the training set. They are susceptible to "Shortcut Learning" - exploiting statistical associations in the training data that let the model produce incorrect answers, or correct answers for the wrong reasons.

One of the most famous examples of this phenomenon involved identifying skin cancer, where the number of false positives indicated there was something wrong with the model. Upon investigation, it was determined that dermatologists always use a standard ruler to measure the size of a lesion; if it is greater than 3 cm, they choose to do a biopsy. Shortcut Learning led the model to assume every picture with a ruler in it was malignant. Another example: a deep learning net trained to distinguish between cats and dogs began to pick out some cats as dogs. It became apparent that the net had associated leashes with dogs in the training data, and in those cases where cats were on leashes, it lumped them into the dog category.
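To make the ruler example concrete, here is a minimal sketch on synthetic data. The feature names lesion_size and ruler_present are hypothetical and the numbers are invented for illustration; nothing here comes from a real dermatology dataset. The classifier looks impressive while the spurious ruler-to-malignancy correlation holds, and degrades the moment that correlation is broken:

```python
# A toy illustration of Shortcut Learning (synthetic data, hypothetical features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000

def make_data(ruler_follows_label: bool):
    # "lesion_size" is the genuine signal; the label depends (noisily) on it.
    lesion_size = rng.normal(2.0, 1.0, n)
    malignant = (lesion_size + rng.normal(0.0, 0.8, n) > 3.0).astype(int)
    if ruler_follows_label:
        # Training-set shortcut: the ruler almost always appears with malignant lesions.
        ruler_present = np.where(malignant == 1,
                                 rng.binomial(1, 0.95, n),
                                 rng.binomial(1, 0.05, n))
    else:
        # Deployment-like data: the ruler shows up at random.
        ruler_present = rng.binomial(1, 0.5, n)
    return np.column_stack([lesion_size, ruler_present]), malignant

X_train, y_train = make_data(ruler_follows_label=True)
X_test, y_test = make_data(ruler_follows_label=False)

model = LogisticRegression().fit(X_train, y_train)

# High accuracy while the shortcut holds; a noticeable drop once it doesn't.
print("accuracy with the shortcut intact:", accuracy_score(y_train, model.predict(X_train)))
print("accuracy with the shortcut broken:", accuracy_score(y_test, model.predict(X_test)))
```

Nothing in the model is "broken" in the conventional sense; it simply latched onto the easiest statistical association available, which is exactly the kind of bug that doesn't announce itself.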

Both of these are examples of Shortcut Learning. The algorithm is busy grinding through millions of gradient calculations, looking for the easiest path to minimizing its cost function. How do you solve the problem? The first step, unlike my trial-and-error approach, is to investigate what is happening. Some tools can help you. One of the most popular at the moment is SHAP (SHapley Additive exPlanations). It is built on a concept from game theory, Shapley values, adapted to expose which variables have the most effect on the model's output. Put another way, a machine learning algorithm will only learn what you want it to learn if that happens to be the easiest way to maximize its metric.
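Here is a rough sketch of how SHAP can surface that kind of shortcut, continuing the same synthetic setup as above (again, the data and feature names are invented for illustration, not drawn from any real study). Averaging the absolute SHAP values per feature shows how much of the model's output the spurious ruler flag is driving:

```python
# Using SHAP to expose which feature the model actually leans on (synthetic data).
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 2000

# Same hypothetical setup as before: a real signal plus a spurious ruler flag.
lesion_size = rng.normal(2.0, 1.0, n)
malignant = (lesion_size + rng.normal(0.0, 0.8, n) > 3.0).astype(int)
ruler_present = np.where(malignant == 1,
                         rng.binomial(1, 0.95, n),
                         rng.binomial(1, 0.05, n))
X = np.column_stack([lesion_size, ruler_present])
feature_names = ["lesion_size", "ruler_present"]

model = GradientBoostingClassifier(random_state=0).fit(X, malignant)

# TreeExplainer attributes each individual prediction to the input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one row per sample, one column per feature

# Mean absolute SHAP value per feature is a simple global importance score;
# a dominant "ruler_present" is the tell-tale sign of Shortcut Learning.
for name, score in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {score:.3f}")
```

If the ruler flag dominates, the fix is in the data and the features, not in swapping out the model's "brains" at random.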

My take

I would tend to disagree that we are heading for another AI Winter. I believe it is just the opposite. Money pouring into AI allows for rapid progress. Besides, the US has issued numerous reports and formed committees in a race to beat the Chinese. Look no further than the devastating effect of ransomware to see that cybersecurity (clearly AI-driven) and autonomous/lethal weapons will drive boundless research to advance AI language, vision, speech, location assessment and a million other things.

We can hope that some of this will trickle down to make data management more effective, and the packaging of vetted algorithms for the private sector to make the AI life cycle easier. And no, I don't have a twin-turbo V-12 car anymore. That was a silly extravagance.
