When AI algorithms fail, who you gonna call?

By Denis Pombriant, January 13, 2017
Summary:
AI's dirty little secret is that when algorithms fail, humans still have to sort out the mess. Denis Pombriant gives us some robot food for thought.


Two Microsoft researchers may have blown the lid off a secret, or at least an assumption, that most of us have about artificial intelligence (AI), with serious repercussions for how we think about this emerging technology and its use in everyday life. In their Harvard Business Review article, “The Humans Working Behind the AI Curtain,” Mary L. Gray and Siddharth Suri reveal that when the going gets tough in AI, humans do what algorithms can’t.

This ought not be a surprise – after all, machine learning can only go so far. When, for example, a truly unique situation pops up, it makes perfect sense that the first learner should be a human. The article gives several examples of such unique situations.

For example, what exactly did Mitt Romney mean when, during a 2012 presidential debate, he said he had “binders full of women”? You and I know, but if you’re an algorithm curating content, what do you do with that? Well, the algorithms blanched, and someone had to make sense of the statement; that someone was indeed a human. As the authors state in their piece:

Much of the crowdwork done on contract today covers for AI when it can’t do something on its own. The dirty little secret of many services — from Facebook’s M to the ‘automatic’ removal of heinous videos on YouTube, as well as many others — is that real live human beings clean up much of the web, behind the scenes.

…and…

Those magical bots responding to your tweets complaining about your delayed pizza delivery or the service on your flight back to Boston? They are the new world of contract labor hidden underneath a layer of AI.

There is absolutely nothing wrong with this; it simply shows that many tasks requiring real thinking are things humans do effortlessly but machines still fall short of accomplishing. Parsing human speech is one of the toughest challenges, along with identifying faces: human brains handle both with relative ease, while algorithms still come up short. So is driving a car.
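To picture how that division of labor works in practice, here’s a minimal sketch of the pattern the researchers describe: an algorithm answers when it’s confident and hands everything else to a person. Everything in it (the toy classifier, the 0.85 threshold, the review queue) is a hypothetical stand-in for illustration, not anything taken from the HBR piece.

```python
# Hypothetical sketch: route low-confidence AI decisions to a human reviewer.
# The model, threshold, and queue below are illustrative assumptions, not a
# description of any real production system.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ModerationService:
    confidence_threshold: float = 0.85          # below this, a human decides
    human_review_queue: list = field(default_factory=list)

    def classify(self, text: str) -> tuple[str, float]:
        """Stand-in for a trained model; returns (label, confidence)."""
        if "binders full of women" in text.lower():
            return ("unclear", 0.40)            # the model has never seen this
        return ("ok", 0.97)

    def moderate(self, text: str) -> Optional[str]:
        label, confidence = self.classify(text)
        if confidence >= self.confidence_threshold:
            return label                        # the algorithm handles it
        self.human_review_queue.append(text)    # a person sorts out the rest
        return None


service = ModerationService()
print(service.moderate("Thanks for the great service!"))   # -> 'ok'
print(service.moderate("I have binders full of women."))   # -> None (queued)
print(service.human_review_queue)
```

The details don’t matter; the shape does. The machine resolves the easy, familiar cases, and the genuinely novel ones land in a human’s queue.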

The researchers are well credentialed. According to her bio, Mary L. Gray is a Fellow at Harvard University’s Berkman Klein Center for Internet and Society. She is also a senior researcher at Microsoft Research, New England, and maintains a faculty position at Indiana University. Siddharth Suri is a senior researcher at Microsoft Research, New York City.

So what’s the problem?

So if there’s nothing wrong with this approach to training AI (and I am in that camp), what’s the big deal? I see several problems ahead:

  • Can we ever really trust our algorithms to get it right?
  • What happens to all of the wailing and gnashing of teeth that’s popular right now over job losses to automation?
  • In some circumstances, such as driverless cars, people will get killed, though perhaps different people (and fewer) than human drivers would have killed anyway.

Taking each of these in order, I’ll start by asserting that algorithms will, through training, improve significantly, while humans might plateau. There’s plenty of evidence of humans plateauing in the work of psychologists Daniel Kahneman and Amos Tversky. They studied heuristics, the rules of thumb people use to make decisions instead of thinking something through. Kahneman and Tversky exposed the fallacy, prevailing in late-20th-century economics, that we are all rational and make decisions for rational reasons.

That fallacy had been the basis of economic thinking for a long time, even though experience showed time and again that human decision-making is fraught with error rooted in these subtle, unconscious heuristics. So relying on algorithms might free us somewhat from relying on our heuristics, but it will introduce the problems of my third point, which we’ll come back to in a moment.

My second point, on automation, reveals something else. I’ve said here for a while that automation will take over some of the drudgery, leaving people more available for the things we do well. But lately all we hear about are job losses and ‘This time it’s different’, as in, there won’t be enough new jobs generated by new technologies to fill the void and we’ll all be replaced by machines. Please. As this article clearly shows, the most effective way to work with machines is in tandem, leveraging the strengths of both technology and people.

The third point is more serious. It’s quite possible that algorithms intended to save lives will kill some people, but they will be different people. For instance, the driverless car industry wants us to believe that its cars will be safer than those driven by humans and will have far fewer accidents. I believe that. But the small number of driverless car accidents we’ve seen so far has resulted from algorithms that weren’t trained for a unique (that word again) situation; when one arose, the machine had no idea how to deal with it.

There were just over 35,000 highway deaths in the U.S. in 2015, down from 53,000 in 1969, so there’s been significant improvement, though I would have expected more of a decline over almost 50 years. But the day could come when a few hundred deaths in a year seems like a lot, and with algorithm-driven cars we’ll be asking whether the algorithms are at fault.
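For context, a quick bit of arithmetic on the two figures above (and only those figures) puts the drop at roughly a third; a throwaway sketch:

```python
# Rough arithmetic on the two figures cited above: U.S. highway deaths
# in 1969 (~53,000) versus 2015 (just over 35,000).
deaths_1969 = 53_000
deaths_2015 = 35_000

absolute_decline = deaths_1969 - deaths_2015
relative_decline = absolute_decline / deaths_1969

print(f"Absolute decline: {absolute_decline:,} deaths")   # 18,000
print(f"Relative decline: {relative_decline:.0%}")        # about 34%
```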

My take

As the HBR story suggests, training algorithms is not a perfect science, and some human intervention is going to be needed for the foreseeable future. A human in a situation controlled by an algorithm, such as a driverless car, is a passive participant, and any harm to that person would be difficult to justify. In contrast, a human who gets behind the wheel is an actor, and we find it easier to process an accident as an act of commission by placing the fault with the driver.

For all of the brilliant work being done building and deploying smart machines, we haven’t done much to contemplate how they change our view of the world. That’s typical of new technologies, for the simple reason that we need experience to form ideas, just like machine-learning algorithms do. But the time to begin gathering that experience is now, rather than simply accepting algorithm-driven technology as an unalloyed good.