The perils of automation complacency

By Kurt Marko, July 5, 2016
Summary:
Automation complacency is emerging as a risk in the rush towards letting technology take over a variety of tasks. The recent car fatality involving a Tesla that was on autopilot is an extreme example, but there are plenty of others.

Airline pilots are warned about it; Tesla drivers are learning it the hard way; and business professionals must be wary of it: an over-reliance on automation.

Software automation has been a significant benefit to productivity, product quality, service consistency and business innovation. By taking over repetitive, mundane tasks, or rapidly combing through immense amounts of data, automated systems free people to spend time on creative innovation, big picture thinking and gleaning insight from snippets of seemingly unrelated information.

Until recently, one might have added enhanced safety to the litany of benefits flowing from software automation. However, there are disquieting signs that over-reliance on software that is almost always right causes people to disengage from the task at hand, often with tragic results. Indeed, it's the edge cases and rare events where automation breaks down that demonstrate the wisdom of not ceding total decision authority to imperfect algorithms.

Automated machines have improved safety in the manufacturing and mining industries. However, in areas where automation serves more as a human convenience than a life-saving replacement, what experts call automation complacency can lead to cognitive detachment: letting the computer do the thinking and, increasingly, take action, even when it's wrong. Although business executives seldom face split-second, life-and-death decisions like those that can occur in the aircraft cockpit, they can learn from the occasionally fatal errors made in other domains through over-reliance on automation and algorithms that are either not fully formed or for which there is no reliable replacement for human expertise and gut instinct.

Autopilots and skill atrophy

Technology writer Nicholas Carr studied the phenomenon of automation complacency, making it a central theme of his most recent book, The Glass Cage. The recent news of the first recorded fatality due to Tesla's 'autopilot' mode prompted him to write about the condition and summarize relevant points and early warning signs of over-reliance on automation from the airline industry.

The overall decline in the number of plane crashes masks the recent arrival of “a spectacularly new type of accident," says Raja Parasuraman, a psychology professor at George Mason University and one of the world’s leading authorities on automation. When onboard computer systems fail to work as intended or other unexpected problems arise during a flight, pilots are forced to take manual control of the plane. Thrust abruptly into a now rare role, they too often make mistakes. The consequences, as the 2009 Continental Connection and Air France disasters show, can be catastrophic. Over the last thirty years, dozens of psychologists, engineers, and human factors researchers have studied what’s gained and lost when pilots share the work of flying with software. They’ve learned that a heavy reliance on computer automation can erode pilots’ expertise, dull their reflexes, and diminish their attentiveness, leading to what Jan Noyes, a human-factors expert at Britain’s University of Bristol, calls 'a deskilling of the crew.'

Automated business technology run amok

While less tragic than a fatal plane or car accident, the technology world has recently seen its share of software-induced mistakes causing significant financial or reputational harm. For Target and Home Depot, it resulted in being overwhelmed by automated security scans to the point of ignoring actual breaches. At Microsoft, unchecked automation let a presumably intelligent software bot learn conversation on Twitter, becoming a racist misogynist in the process. In each case, the limits of AI and software automation become apparent when systems interact with often unpredictable events and occasionally malicious actors.

The Tesla, Target and Microsoft cases illustrate three risks of extreme automation. In order, they are:

  1. Complacency born of the system being almost always right, leading to human inattention and surprise when it erroneously handles rare or unexpected events. In this case, the automation can do damage that, depending on how fast the situation unravels, may be irrevocable. Unfortunately for IT, the chain of events can move very quickly, such as when an errant trading algorithm went wild and lost Knight Capital Group $440 million in 30 minutes, leading to its demise as an independent company.
  2. Data overload that leaves people unable to detect meaningful information buried in a stream of noise, or willfully ignoring valid warnings amidst a stream of false positives. Target was the victim of this scenario, and while the ultimate cause was probably a mix of improperly configured and tuned security software and negligent or overworked network engineers, the sorry result was an automated system that failed to provide the protection Target spent millions of dollars procuring.
  3. Unsupervised use of a self-training algorithm that ends up optimizing around the wrong set of inputs with no boundaries on what the software considers an acceptable outcome or behavior. Here we find Microsoft's chatbot the latest example of AI run amok that's been the theme of so many dystopian sci-fi movies. While a rude chatbot is a far cry from the maniacal HAL 9000, the prospect of unsupervised algorithms learning the wrong lessons and optimizing around undesirable outcomes is a genuine risk of un- or minimally-attended use of automated, adaptive software.
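The false-positive problem in the second risk is, at bottom, base-rate arithmetic. A minimal sketch (with illustrative numbers, not figures from Target's actual breach) shows why even a nominally "99% accurate" monitoring tool can bury the handful of real alerts in noise:

```python
# Minimal sketch of the base-rate arithmetic behind alert fatigue.
# All numbers below are illustrative assumptions, not data from any real incident.

def alert_precision(daily_events, breach_rate, true_positive_rate, false_positive_rate):
    """Return the fraction of fired alerts that correspond to a real attack."""
    attacks = daily_events * breach_rate
    benign = daily_events - attacks
    true_alerts = attacks * true_positive_rate       # real attacks correctly flagged
    false_alerts = benign * false_positive_rate      # benign events wrongly flagged
    return true_alerts / (true_alerts + false_alerts)

# A tool that catches 99% of attacks and flags only 1% of benign traffic,
# scanning a million events a day of which just 50 are malicious:
p = alert_precision(daily_events=1_000_000, breach_rate=50 / 1_000_000,
                    true_positive_rate=0.99, false_positive_rate=0.01)
print(f"{p:.1%} of alerts are real")  # about 0.5% -- the other ~99.5% are noise
```

With those assumptions, roughly 10,000 alerts fire each day and only about 50 of them are genuine, so an analyst who dismisses everything is right 99.5% of the time; that is exactly the environment in which valid warnings get ignored.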

My take

While Carr's conclusion is made with vehicle control systems in mind, I believe he also provides a valid warning for business executives and technology professionals:

What the aviation industry has discovered is that there’s a tradeoff between computer automation and human skill and attentiveness. Getting the balance right is exceedingly tricky. Just because some degree of automation is good, that doesn’t mean that more automation is necessarily better. We seem fated to learn this hard lesson once again with the even trickier process of automotive automation.

Ceding greater control to software automation comes with a risk. Whether it's increasingly sophisticated BI systems that can make business decisions or software to automate and accelerate business processes and transactions, business leaders and technical professionals can't let their skills atrophy in the face of algorithms that usually get it right, but fail in the most spectacular of ways. The consequences of automation complacency are rarely fatal, but as organizations from Wall Street to retailing have discovered, they can be financially and reputationally ruinous. Caveat utilitor.

Image credit - Robot customer service operator with headset and speech bubbles © kirill_makarov - Fotolia.com