If your job can be easily defined, then it can – and will – be automated.
That was the stark message from the recent Japan-UK Robotics and AI seminar at the Japanese Embassy in London. The event brought together experts from multiple disciplines – including Japanese robotics legend Prof. Hiroshi Ishiguro – to share knowledge and foster collaboration between the two countries.
So if it’s true that a job’s ‘definability’ is the same as its ability to be automated, then robotic software, hardware robots, and AI pose a much greater challenge to human society than most people realise: it won’t just be the robotic ATM or checkout serving you, but also robot nurses, doctors, care assistants, and schoolteachers – skilled jobs requiring human intuition, intelligence, and empathy.
That robots might take on simple rules-based, repetitive, or low-skilled roles is nothing new, but in a world of escalating compliance, what job today isn’t based on rules? And with autonomous vehicles and drones thrown into the mix, then yet more jobs may cease to be carried out by human beings. Or so the experts believe.
Let’s take their claim at face value. The risk of social upheaval is one reason why robotics research is increasingly taking place in multi-disciplinary teams: not only of computer scientists and engineers, but also of psychologists, cultural theorists, and cognitive researchers. Robotics is no longer about scaling a great technology Everest just because it’s there.
Living with robots
Dr Anders Sandberg is James Martin Research Fellow at Oxford University’s Future of Humanity Institute. He believes that nearly half of all jobs (47%) will be automated, with those that can be easily described being the easiest to hand over to machines.
Mass unemployment aside, Sandberg acknowledged that there are serious ethical and societal problems with this idea – not to mention technical obstacles. First comes the challenge of establishing human–machine empathy and morality in a world in which “the network doesn’t care” and we can barely explain human values to each other, let alone to machines.
Sandberg said (for brevity’s sake, I’ve condensed his comments):
Robots have to navigate a human-shaped world and understand human intentions. [...] It’s easier to make an efficient robot than a [morally] good robot. […] A law-abiding robot is not the same as an ethical robot. […] Intelligence and values are not the same thing.
Kerstin Dautenhahn, Professor of Artificial Intelligence at the University of Hertfordshire, minimises these risks by exploring how robots can integrate with human beings “in a socially acceptable manner” as care assistants and companions. Lab work can only achieve so much, so the University has purchased a “typically British house” and filled it with robots to explore the ways in which vulnerable people can feel at ease when living among autonomous machines.
Dautenhahn’s main research is into home assistance robots for elderly residents in smart homes, and the use of therapeutic educational machines for children who have autism, via the University’s KASPAR social robot.
Similar programmes have demonstrated that children on the autism spectrum respond remarkably well to humanoid robots. So in this sense, any lingering preconceptions we might have of blank, emotionless machines are misplaced: incredible though it may seem, robots are actively helping to teach autistic children how to understand, respond to, and express human emotions.
At the same time, harnessing the power of cognitive research is helping machines to learn good behaviour from their daily interactions with the rest of us, too. But are robots being programmed to be more like humans, or is the reality that it is we who are being programmed to feel protective of the machines?
Telepresence robots and tele-robotics are an established tech hotspot, but Prof. Hiroshi Ishiguro has become a global robotics icon by creating an android of himself, which he sends to conferences to give presentations on his behalf.
He’s also the man behind ‘Erica’, which he calls “the most beautiful and intelligent android in the world”. However, Ishiguro revealed that the female android, which can have natural conversations with humans incorporating body language and non-verbal cues, is essentially a fake: behind ‘her’ are 10 separate computers.
So why is Ishiguro focusing on the ‘uncanny valley’ of machines that look exactly like humans, as opposed to anthropomorphic humanoids such as NAO, Pepper, and Robi, which remain visibly machine-like? The point is the interface, he explained – to find out how humans respond to a machine that looks just like them:
The ideal interface for a human being is another human being. […] The android is the fundamental testbed for understanding humanoid (sic) behaviour.
He explained that his immediate plan is to use cognitive research to develop humanlike robots that have real intentions and desires, and to “archive a human” in android form. However, Ishiguro’s long-term aim is not to create a race of machine humans, but to compile all of the findings of his current research into a more machine-like form – more like the cute humanoid robots, such as Robi, that are designed by his counterpart Prof. Tomotaka Takahashi, founder and CEO of the Robo-Garage at Kyoto University.
Ultimately, Ishiguro’s goal is to create what he calls a “robot society supported by humanoids and androids”.
Pressed on this, he made the points that human beings are already acting more like machines out of personal choice, via text and chat on their smartphones, and that we don’t regard human beings who rely completely on technology to move, replace missing limbs, or communicate, as being any less than 100% human.
But how far away is the intelligent, autonomous machine from a century of sci-fi lore?
Take me to your Iida
Dr. Fumiya Iida of the Bio-Inspired Robotics Lab at Cambridge University’s Department of Engineering believes that robotics is moving towards a world of “embodied intelligence” as AI and robotics come together over the next 20 years. Robots will not only learn to be more sensitive and responsive to human needs, he said, but also more creative and autonomous.
Dr. Komei Sugiura, Senior Researcher at Japan’s National Institute of Information and Communications Technology (NICT), explained how robotics capabilities are already available as on-demand cloud services, via multilingual speech-recognition and synthesis engines, such as Rospeex, which are designed to facilitate human-robot dialogue.
But true language learning – rather than repeating pre-programmed phrases, as most commercially available robots do – remains a challenge, explained Tadahiro Taniguchi, Associate Professor at the College of Information Science and Engineering, at Japan’s Ritsumeikan University.
Taniguchi is exploring how robots can learn languages with no foreknowledge of them, relate spoken words to objects and events, and understand both the differences and relationships between written words, phonemes, and “voice signals” (sounds). Those days are not far off, he said:
Unsupervised machine learning is gradually becoming a solvable problem.
Meanwhile, Prof. Sethu Vijayakumar, Director of the Centre for Robotics at Edinburgh University, stressed that his research is moving from teleoperation to autonomy – robots that interact with human beings based on their own acquired knowledge. But the balance of power should always remain with humans, he said:
We should be looking at collaboration that includes different levels of autonomy at different moments, to reduce human workloads while leaving the human in control.
Arthur C Clarke once observed that we tend to overestimate a technology’s impact in the short term, but underestimate it in the long term.
It’s clear that the widespread uptake of robotics is approaching far more quickly than most people realise. Yet how society reacts to it will come down to less predictable factors, not least how astute the technologists’ understanding of human nature proves to be.
As Oxford University’s Sandberg acknowledged in his presentation:
Even experts in AI are very bad at predicting the future. They generate lots of data, but are very bad at making predictions. So we should expect surprises and incorporate that into our thinking about society. Very advanced systems may behave in unpredictable ways.
Never forget: human beings are advanced systems too.