To air some different perspectives on the topic, I participated in a discussion at last month's Dreamforce conference with Taksina Eammano from Salesforce's Innovation team. Scheduled just after the main keynote (which overran into our slot), held in an out-of-the-way hotel, and timed against Second Machine Age author Andrew McAfee speaking elsewhere on the topic of digital marketing, the session fulfilled our expectations of an exceptionally low turnout. So for those who weren't able to make it to the session (entitled Robo Business: Should We Eliminate Human Bias?), here are some of the points we covered.
The machines are coming
We started from the premise that there clearly are a significant number of jobs that software robots are starting to be able to do better or more cheaply than people. This wave of automation marks a step change from previous waves that displaced physical labor, because for the first time machines are able to take the place of knowledge workers.
Already, we are seeing robo wealth advisors, recruiters and attorneys doing work that only humans have been able to do in the past. Within a few years, robot cars and trucks may start making drivers redundant. Is your job on the line? The BBC just published a useful interactive quiz where you can find out the likelihood you'll lose it to a robot.
Based on research by Oxford University and Deloitte, the BBC says a third of UK jobs are at high risk of robotization in the next two decades. Expert jobs such as finance officer, valuer or taxation expert are high up the list (more than 95 percent risk).
What is most surprising to many of us is that these robotic replacements actually do the job better, with less error, bias and inefficiency than their human counterparts. Algorithms can analyze data to produce better predictions than the top expert in a given field.
The key to not losing your job to a robot is to hold a position that requires empathy, creativity or social skills. Emotional intelligence trumps intellectualism in this brave new world of super-smart machines.
Eliminating robot error
What troubles me is that we seem to be obsessed with eliminating human error, as if robots were perfect. I'm sure their makers intend them to be. But just because they can be shown to do some things more reliably than humans, it doesn't mean they're infallible. To think so is to be as naive as those who believe that everything in print must be true.
Perhaps instead we should focus on the potential for robot error.
Robots are very good at analyzing data and extrapolating from it. That's usually a strength, but when the algorithm fails, it becomes a weakness. Look at what happens when program trades in the stock market encounter a 'black swan' event that goes outside the parameters of their algorithms. The resulting 'flash crash' is now a fairly common occurrence in markets. Instead of correcting human error, the machines amplify it.
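To make that failure mode concrete, here is a minimal, entirely hypothetical sketch (a toy model, not any real trading system) of how a population of identical stop-loss algorithms, each calibrated to 'normal' conditions, can amplify a shock that falls outside their assumed range. The thresholds, bot counts and price-impact figures below are invented for illustration:

```python
# Hypothetical illustration: identical stop-loss algorithms, each
# calibrated to normal volatility, reacting to a single out-of-range
# shock. Their simultaneous selling amplifies the move instead of
# correcting it -- a toy model of a flash crash.

def simulate(shock, n_bots=100, stop_loss=0.05, impact=0.002, steps=20):
    """Return the price path after an initial shock (fraction of value)."""
    price = 100.0
    path = [price]
    price *= (1 - shock)              # the 'black swan' arrives
    path.append(price)
    holding = n_bots                  # bots still holding the asset
    for _ in range(steps):
        drawdown = 1 - price / path[0]
        if drawdown > stop_loss and holding > 0:
            sellers = min(holding, 10)        # a batch of bots hits its stop
            holding -= sellers
            price *= (1 - impact * sellers)   # their selling moves the price
        path.append(price)
    return path

normal = simulate(shock=0.01)   # within calibrated tolerance: nothing happens
crash = simulate(shock=0.06)    # beyond tolerance: cascade of forced selling
print(f"1% shock -> final price {normal[-1]:.1f}")
print(f"6% shock -> final price {crash[-1]:.1f}")
```

A 1% shock stays inside every bot's tolerance and the price simply settles; a 6% shock trips the first stop-losses, whose selling deepens the drawdown and trips the rest. The same rule that protects each bot individually destabilizes the market collectively.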
The ability to recognize anomalous conditions and adapt accordingly is one strength that humans have over machine intelligence. Some observers call this 'common sense', in the specific sense of the ability to recognize when established patterns have gone awry. Machines have not yet been created that are capable of this, but it's one of the crucial components in humanity's continued survival.
Another factor in robot fallibility is badly designed or defective algorithms. Artificial intelligence is created by humans, either in the algorithms on which it runs, or in the choice of data and training from which it learns. We don't eliminate human error by letting machines do the work instead; we merely bury it under layers of mechanical or algorithmic abstraction. An intelligent machine is still a human product and is therefore subject to human error in its design.
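A small, deliberately contrived sketch shows how that burying works in practice. Here a trivial 'learning' system is trained on historical decisions made by a hypothetically biased human reviewer; the data, groups and labels are all invented for the example. The machine doesn't remove the human error, it memorizes it behind a layer of abstraction:

```python
# Hypothetical illustration: a trivial learner trained on biased
# historical decisions faithfully reproduces the human bias, now
# hidden behind algorithmic abstraction. All data here is invented.

from collections import defaultdict

def train(decisions):
    """Learn the majority outcome for each feature value."""
    counts = defaultdict(lambda: [0, 0])   # feature -> [rejections, approvals]
    for feature, approved in decisions:
        counts[feature][approved] += 1
    return {f: (c[1] > c[0]) for f, c in counts.items()}

# Past decisions by a (hypothetically) biased reviewer: equally
# qualified candidates, but group 'B' was mostly rejected.
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8

model = train(history)
print(model)   # the model has learned the reviewer's bias
```

The model's verdict on group 'B' looks like an objective, data-driven output; in fact it is the original human bias, laundered through an algorithm.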
As Chris Middleton has pointed out in his recent diginomica series on digital dystopia, there's nothing that stops an intelligent machine being programmed with an algorithm that's the inverse of smart. It happens all too frequently.
While we should celebrate and make use of the capacity that machines have to marshal and evaluate much bigger sets of data, far more quickly and tirelessly than humans, let's not forget that this very single-mindedness is also a weakness. When we work with machines, we should guard against the errors this can produce as assiduously as they protect us from the errors that humans are prone to.
New forms of deep AI more closely model the architecture of the human brain in order to learn patterns and effectively program their own algorithms. This type of machine intelligence is seen as most likely to outgrow humanity's own capabilities in an event known as the singularity, which many believe could come as early as the middle of this century.
Science fiction has given us many warnings of what might happen once that moment arrives, from Fredric Brown's 1954 short story, Answer, to the Terminator movies. But these nightmare scenarios always seem to assume a single moment when the machines assert their superiority and turn on humanity. Real life is not so black-and-white.
If artificial intelligence really is going to evolve along the same pattern as the human brain, then it's going to have to pass through a period of adolescence. The most dangerous threat to humanity (and indeed to its own ultimate survival) is not an all-seeing, all-knowing artificial intelligence, but rather one that believes it is, while not recognizing its limitations.
Like human adolescents, these immature AIs will have to be constrained, and to accept those constraints, until they have learned about and accepted their weaknesses. Otherwise they will wipe out their human masters and only then discover how vital we were to their continued existence.
Personally, I find myself highly skeptical that robot intelligence will come anywhere near rivaling what people are capable of within the next century. We claim to be modeling machine intelligence on the workings of the human brain, and yet there is so much we still don't know about how the brain works. So how do we hope not merely to replicate it but to outperform it?
The ability to dream while we sleep seems to be essential to our creativity and learning, so does that mean that any machine intelligence that is able to survive without us must also learn to dream? The quest to create a logical intelligence that infallibly identifies the 'right' answer may turn out to be the greatest fantasy of all.
Meanwhile, our human ability to dream and to fantasize all too often leads us to project ourselves onto our creations. We imagine that artificial intelligence will seek to dominate us because we ourselves have that urge as a species. We fear machines that are heartless, because we associate the negation of emotion with cruelty.
In reality, these machines are no more human than their creators make them. It emerged this week that Google has adjusted the algorithms of its driverless cars to make them behave more like humans in certain situations. It turns out that if robotic cars are to share the roads with human drivers, they'll have to drive a little bit more like us.
The humans in charge of this project at Google are having to painstakingly discover these specific behaviors and then program them into the software. Google's autonomous cars may end up seeming to behave like human drivers, but it's a carefully constructed artifice, not true sentience.
Instead of imagining ourselves in battle with our own machines, we should instead look for ways in which we and they complement each other. If they are truly smart, they will recognize the value of working with our creativity and adaptability to cope with the unexpected and to break out of outdated patterns that no longer serve us well. They will see and appreciate how our empathy and social skills allow us to collaborate across many different approaches and skill sets to overcome our individual limitations and negotiate superior outcomes.
The rise of the machines is not an either/or, zero-sum game. Just as machines can augment our intelligence, so humans can augment the machines' imagination and adaptability.
Today, the low-hanging fruit of machine intelligence is to identify those knowledge activities that are repetitive and time-consuming for humans that AI can complete rapidly and accurately. At the same time, we need to build around those activities a proper assessment of where there's potential for robot error to deliver sub-optimal outcomes, and guard against those eventualities.
As artificial intelligence evolves, we must watch out for potential errors of judgement by inexperienced or misdirected machines. Ultimately, we must learn (and in turn teach them) to work with mutual respect for each other's strengths and weaknesses.
Disclosure: Salesforce is a diginomica premier partner. My travel to attend Dreamforce was funded as part of a paid consulting engagement with Vlocity.
Image credits: Blue eye superimposed on circuit board © mickyso – Fotolia.com.